Game Theoretic Liquidity Provisioning in Concentrated Liquidity Market Makers
- URL: http://arxiv.org/abs/2411.10399v1
- Date: Fri, 15 Nov 2024 18:09:43 GMT
- Title: Game Theoretic Liquidity Provisioning in Concentrated Liquidity Market Makers
- Authors: Weizhao Tang, Rachid El-Azouzi, Cheng Han Lee, Ethan Chan, Giulia Fanti
- Abstract summary: We study the incentives of LPs in concentrated liquidity market makers (CLMMs).
We show that in liquidity pools with risky assets, LPs adopt investment strategies far from the Nash equilibrium.
We show that across several pools, by updating their strategy to more closely match the Nash equilibrium of our game, LPs can improve their median daily returns by $116.
- Score: 8.56210578177444
- Abstract: Automated market makers (AMMs) are a class of decentralized exchanges that enable the automated trading of digital assets. They accept deposits of digital tokens from liquidity providers (LPs); tokens can be used by traders to execute trades, which generate fees for the investing LPs. The distinguishing feature of AMMs is that trade prices are determined algorithmically, unlike classical limit order books. Concentrated liquidity market makers (CLMMs) are a major class of AMMs that offer liquidity providers the flexibility to decide not only how much liquidity to provide, but in what ranges of prices they want the liquidity to be used. This flexibility can complicate strategic planning, since fee rewards are shared among LPs. We formulate and analyze a game theoretic model to study the incentives of LPs in CLMMs. Our main results show that while our original formulation admits multiple Nash equilibria and has complexity quadratic in the number of price ticks in the contract, it can be reduced to a game with a unique Nash equilibrium whose complexity is only linear. We further show that the Nash equilibrium of this simplified game follows a waterfilling strategy, in which low-budget LPs use up their full budget, but rich LPs do not. Finally, by fitting our game model to real-world CLMMs, we observe that in liquidity pools with risky assets, LPs adopt investment strategies far from the Nash equilibrium. Under price uncertainty, they generally invest in fewer and wider price ranges than our analysis suggests, with lower-frequency liquidity updates. We show that across several pools, by updating their strategy to more closely match the Nash equilibrium of our game, LPs can improve their median daily returns by $116, which corresponds to an increase of 0.009% in median daily return on investment.
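To make the waterfilling structure of the equilibrium concrete, here is a minimal sketch (not the paper's algorithm): a generic water-filling allocator over price ticks, where the existing per-tick liquidity, the target fill level, and the example budgets are all hypothetical. A low-budget LP exhausts its budget before reaching the target level, while a rich LP stops at the target and leaves the rest of its budget uninvested.

```python
# Illustrative sketch only (not the paper's equilibrium computation):
# a generic water-filling allocator over price ticks. `base_levels[i]` is the
# liquidity already present at tick i, `target_level` is a hypothetical cap on
# the level worth filling to, and `budget` is the LP's total liquidity budget.

def waterfill(base_levels, budget, target_level):
    """Return per-tick liquidity additions under a water-filling rule."""
    # Liquidity needed to raise every tick up to the target level.
    needed = sum(max(0.0, target_level - b) for b in base_levels)

    if budget >= needed:
        # A "rich" LP: the profitable level is reached without spending
        # the whole budget, so part of the budget stays uninvested.
        return [max(0.0, target_level - b) for b in base_levels]

    # A "low-budget" LP: find the water level h (by bisection) at which
    # the entire budget is used up, then fill each tick to that level.
    lo, hi = min(base_levels), target_level
    for _ in range(60):  # 60 halvings give ample numeric precision
        h = (lo + hi) / 2.0
        if sum(max(0.0, h - b) for b in base_levels) > budget:
            hi = h
        else:
            lo = h
    return [max(0.0, lo - b) for b in base_levels]

# Example with hypothetical numbers: the poor LP's allocation sums to its
# full budget of 1.0; the rich LP spends only 4.5 of its 50.0.
ticks = [1.0, 0.5, 2.0, 0.0]      # existing liquidity per tick (hypothetical)
print(waterfill(ticks, budget=1.0, target_level=2.0))
print(waterfill(ticks, budget=50.0, target_level=2.0))
```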
Related papers
- Liquid Staking Tokens in Automated Market Makers [5.277756703318046]
We study liquid staking tokens (LSTs) on automated market makers (AMMs).
LSTs are tokenized representations of staked assets on proof-of-stake blockchains.
We find that while trading fees often compensate for impermanent loss, fully staking is more profitable for many pools.
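For context on the fees-versus-impermanent-loss comparison, the sketch below evaluates the textbook impermanent-loss formula for a 50/50 constant-product AMM position; the formula is standard and not specific to this paper, and the price move and fee yield used are hypothetical.

```python
# Standard impermanent-loss formula for a 50/50 constant-product AMM position
# (textbook identity, not taken from the LST paper); numbers are hypothetical.

def impermanent_loss(price_ratio: float) -> float:
    """Loss of an LP position vs. simply holding, given price_ratio = p_new / p_old."""
    return 2 * price_ratio ** 0.5 / (1 + price_ratio) - 1

r = 1.5                    # hypothetical 50% price move of one token vs. the other
il = impermanent_loss(r)   # about -0.0202, i.e. a 2.02% loss vs. holding
fee_yield = 0.025          # hypothetical fee income over the same period (2.5%)
print(f"impermanent loss: {il:.4%}, net of fees: {fee_yield + il:.4%}")
```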
arXiv Detail & Related papers (2024-03-15T11:53:46Z)
- Optimistic Policy Gradient in Multi-Player Markov Games with a Single Controller: Convergence Beyond the Minty Property [89.96815099996132]
We develop a new framework to characterize optimistic policy gradient methods in multi-player games with a single controller.
Our approach relies on a natural generalization of the classical Minty property that we introduce, which we anticipate to have further applications beyond Markov games.
arXiv Detail & Related papers (2023-12-19T11:34:10Z)
- ZeroSwap: Data-driven Optimal Market Making in DeFi [23.671367118750872]
Automated Market Makers (AMMs) are major venues for matching liquidity supply and demand in Decentralized Finance.
We propose the first optimal Bayesian and the first model-free data-driven algorithm to optimally track the external price of the asset.
arXiv Detail & Related papers (2023-10-13T21:28:19Z)
- A Black-box Approach for Non-stationary Multi-agent Reinforcement Learning [53.83345471268163]
We investigate learning the equilibria in non-stationary multi-agent systems.
We show how to test for various types of equilibria by a black-box reduction to single-agent learning.
arXiv Detail & Related papers (2023-06-12T23:48:24Z)
- Breaking the Curse of Multiagents in a Large State Space: RL in Markov Games with Independent Linear Function Approximation [56.715186432566576]
We propose a new model, independent linear Markov game, for reinforcement learning with a large state space and a large number of agents.
We design new algorithms for learning Markov coarse correlated equilibria (CCE) and Markov correlated equilibria (CE) with sample complexity bounds that scale only with each agent's own function class complexity.
Our algorithms rely on two key technical innovations: (1) utilizing policy replay to tackle the non-stationarity incurred by multiple agents and by the use of function approximation; and (2) separating the learning of Markov equilibria from exploration in the Markov games.
arXiv Detail & Related papers (2023-02-07T18:47:48Z)
- Uniswap Liquidity Provision: An Online Learning Approach [49.145538162253594]
Decentralized Exchanges (DEXs) are new types of marketplaces leveraging blockchain technology.
One such DEX, Uniswap v3, allows liquidity providers to allocate funds more efficiently by specifying an active price interval for their funds.
This introduces the problem of finding an optimal strategy for choosing price intervals.
We formalize this problem as an online learning problem with non-stochastic rewards.
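As a rough illustration of that framing, the sketch below runs a generic multiplicative-weights (Hedge) learner over a few candidate price intervals; the intervals, toy fee model, and learning rate are assumptions for illustration and are not the paper's algorithm or data.

```python
# Illustrative only: a generic multiplicative-weights (Hedge) learner over a few
# hypothetical candidate price intervals, standing in for "online learning with
# non-stochastic rewards" over interval choices.

import math, random

intervals = [(0.95, 1.05), (0.90, 1.10), (0.80, 1.20)]  # hypothetical ranges
eta = 0.1                                               # learning rate
weights = [1.0] * len(intervals)

def toy_fees(interval, price):
    """Hypothetical per-round fee reward in [0, 1]: a narrower interval earns a
    larger fee share, but only while the price stays inside it."""
    lo, hi = interval
    return min(1.0, 0.1 / (hi - lo)) if lo <= price <= hi else 0.0

random.seed(0)
price = 1.0
for t in range(1000):
    price *= math.exp(random.gauss(0.0, 0.01))          # toy price path
    rewards = [toy_fees(iv, price) for iv in intervals]
    # Hedge update: exponentially reweight every interval by its reward.
    weights = [w * math.exp(eta * r) for w, r in zip(weights, rewards)]

total = sum(weights)
print([round(w / total, 3) for w in weights])            # learned preference
```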
arXiv Detail & Related papers (2023-02-01T17:21:40Z)
- QLAMMP: A Q-Learning Agent for Optimizing Fees on Automated Market Making Protocols [5.672898304129217]
We develop a Q-Learning Agent for Market Making Protocols (QLAMMP) that learns the optimal fee rates and leverage coefficients for a given AMM protocol.
We show that QLAMMP is consistently able to outperform its static counterparts under all the simulated test conditions.
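A minimal sketch of this kind of setup (not QLAMMP itself): a generic tabular Q-learning loop in which the agent chooses a fee rate each step; the market regimes, fee grid, reward model, and hyperparameters are hypothetical stand-ins.

```python
# Illustrative only: generic tabular Q-learning over fee-rate actions.
# States, fee grid, reward model, and hyperparameters are hypothetical.

import random

fee_rates = [0.001, 0.003, 0.01]        # hypothetical fee-rate actions
states = ["calm", "volatile"]           # hypothetical market regimes
alpha, gamma, eps = 0.1, 0.9, 0.1       # learning rate, discount, exploration
Q = {(s, a): 0.0 for s in states for a in range(len(fee_rates))}

def toy_reward(state, fee):
    """Hypothetical fee revenue: volume falls as fees rise, more so in calm markets."""
    elasticity = 2.0 if state == "calm" else 0.5
    volume = max(0.0, 1.0 - elasticity * fee * 100)
    return fee * volume

random.seed(0)
state = "calm"
for t in range(5000):
    # Epsilon-greedy action selection over fee rates.
    a = random.randrange(len(fee_rates)) if random.random() < eps \
        else max(range(len(fee_rates)), key=lambda i: Q[(state, i)])
    r = toy_reward(state, fee_rates[a])
    next_state = random.choice(states)   # toy regime switching
    best_next = max(Q[(next_state, i)] for i in range(len(fee_rates)))
    Q[(state, a)] += alpha * (r + gamma * best_next - Q[(state, a)])
    state = next_state

# Learned fee choice per regime under the toy reward model.
print({s: fee_rates[max(range(len(fee_rates)), key=lambda i: Q[(s, i)])] for s in states})
```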
arXiv Detail & Related papers (2022-11-28T00:30:45Z)
- Minimax-Optimal Multi-Agent RL in Zero-Sum Markov Games With a Generative Model [50.38446482252857]
Two-player zero-sum Markov games are arguably the most basic setting in multi-agent reinforcement learning.
We develop a learning algorithm that learns an $\varepsilon$-approximate Markov NE policy with a sample complexity that is minimax-optimal up to logarithmic factors.
We derive a refined regret bound for FTRL that makes explicit the role of variance-type quantities.
arXiv Detail & Related papers (2022-08-22T17:24:55Z)
- Can Reinforcement Learning Find Stackelberg-Nash Equilibria in General-Sum Markov Games with Myopic Followers? [156.5760265539888]
We study multi-player general-sum Markov games with one of the players designated as the leader and the other players regarded as followers.
For such a game, our goal is to find a Stackelberg-Nash equilibrium (SNE), which is a policy pair $(\pi^*, \nu^*)$.
We develop sample-efficient reinforcement learning (RL) algorithms for solving for an SNE in both online and offline settings.
arXiv Detail & Related papers (2021-12-27T05:41:14Z)
- Allocating Indivisible Goods to Strategic Agents: Pure Nash Equilibria and Fairness [16.187873844872637]
We consider the problem of fairly allocating a set of indivisible goods to a set of strategic agents with additive valuation functions.
Our main goal is to explore whether there exist mechanisms that have pure Nash equilibria for every instance.
We show that the corresponding allocations are not only EFX but also satisfy maximin share fairness, something that is not true for this algorithm in the non-strategic setting.
arXiv Detail & Related papers (2021-09-17T16:57:20Z)
- Strategic Liquidity Provision in Uniswap v3 [13.436603092715247]
A liquidity provider (LP) allocates liquidity to one or more closed intervals of the price of an asset.
We formalize the dynamic liquidity provision problem and focus on a general class of strategies for which we provide a neural network-based optimization framework.
arXiv Detail & Related papers (2021-06-22T19:48:02Z)