QLAMMP: A Q-Learning Agent for Optimizing Fees on Automated Market
Making Protocols
- URL: http://arxiv.org/abs/2211.14977v1
- Date: Mon, 28 Nov 2022 00:30:45 GMT
- Title: QLAMMP: A Q-Learning Agent for Optimizing Fees on Automated Market
Making Protocols
- Authors: Dev Churiwala, Bhaskar Krishnamachari
- Abstract summary: We develop a Q-Learning Agent for Market Making Protocols (QLAMMP) that learns the optimal fee rates and leverage coefficients for a given AMM protocol.
We show that QLAMMP is consistently able to outperform its static counterparts under all the simulated test conditions.
- Score: 5.672898304129217
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated Market Makers (AMMs) have cemented themselves as an integral part
of the decentralized finance (DeFi) space. AMMs are a type of exchange that
allows users to trade assets without the need for a centralized exchange. They
form the foundation for numerous decentralized exchanges (DEXs), which help
facilitate the quick and efficient exchange of on-chain tokens. All present-day
popular DEXs are static protocols, with fixed parameters controlling the fee
and the curvature - they suffer from invariance and cannot adapt to quickly
changing market conditions. This characteristic may cause traders to stay away
during high slippage conditions brought about by intractable market movements.
We propose an RL framework to optimize the fees collected on an AMM protocol.
In particular, we develop a Q-Learning Agent for Market Making Protocols
(QLAMMP) that learns the optimal fee rates and leverage coefficients for a
given AMM protocol and maximizes the expected fee collected under a range of
different market conditions. We show that QLAMMP is consistently able to
outperform its static counterparts under all the simulated test conditions.
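The abstract describes the approach only at a high level. For intuition, here is a minimal, self-contained sketch of the kind of loop it implies: tabular Q-learning over a small grid of candidate fee tiers, with market conditions discretized into a coarse state and collected fees as the reward. The fee grid, the state encoding, and the toy collected_fees model are assumptions made purely for illustration; they are not the QLAMMP authors' implementation, which also tunes a leverage coefficient and evaluates rewards under simulated AMM market conditions.

```python
# Illustrative sketch only: tabular Q-learning that picks an AMM fee tier.
# The fee grid, state discretization, and toy reward model are assumptions
# made for this example, not the QLAMMP authors' code.
import random
from collections import defaultdict

FEE_TIERS = [0.0005, 0.003, 0.01]        # candidate fee rates (actions); hypothetical grid
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1   # learning rate, discount factor, exploration rate

# Q-table: state -> estimated value of each fee tier.
Q = defaultdict(lambda: [0.0] * len(FEE_TIERS))

def market_state(volatility, volume):
    """Discretize observed market conditions into a small tabular state."""
    return (min(int(volatility * 10), 9), min(int(volume // 1_000), 9))

def collected_fees(fee, volatility, volume):
    """Toy stand-in for an AMM simulator: higher fees earn more per unit traded,
    but deter volume; high volatility also suppresses trading."""
    traded = volume * max(0.0, 1.0 - 25 * fee) * (1.0 - 0.5 * volatility)
    return fee * traded

def choose_fee(state):
    """Epsilon-greedy selection over the fee grid."""
    if random.random() < EPSILON:
        return random.randrange(len(FEE_TIERS))
    return max(range(len(FEE_TIERS)), key=lambda a: Q[state][a])

for step in range(5_000):
    volatility, volume = random.random(), random.uniform(0.0, 10_000.0)
    state = market_state(volatility, volume)
    action = choose_fee(state)
    reward = collected_fees(FEE_TIERS[action], volatility, volume)
    next_state = market_state(random.random(), random.uniform(0.0, 10_000.0))
    # Standard Q-learning update toward reward + discounted best next value.
    Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
```

The epsilon-greedy policy keeps sampling alternative fee tiers, so the table can keep adapting if the simulated market regime shifts; that adaptivity is what the abstract contrasts with static, fixed-fee protocols.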
Related papers
- From x*y=k to Uniswap Hooks; A Comparative Review of Decentralized Exchanges (DEX) [2.07180164747172]
This paper provides a comprehensive classification and comparative analyses of prominent DEX protocols, namely Uniswap, Curve, and Balancer.
The goals are to elucidate the strengths and limitations of different AMM models, highlight emerging concepts in DEX development, outline current challenges, and differentiate optimal models for specific applications. (A minimal constant-product swap sketch, for reference, appears after this list.)
arXiv Detail & Related papers (2024-10-14T05:10:56Z) - Quantifying Arbitrage in Automated Market Makers: An Empirical Study of Ethereum ZK Rollups [6.892626226074608]
This work systematically reviews arbitrage opportunities between Automated Market Makers (AMMs) on ZK rollups and Centralised Exchanges (CEXs).
We propose a theoretical framework to measure such arbitrage opportunities and derive a formula for the related Maximal Arbitrage Value (MAV).
Overall, the cumulative MAV on the USDC-ETH SyncSwap pool over the study period beginning in July 2023 amounts to $104.96k (0.24% of trading volume).
arXiv Detail & Related papers (2024-03-24T10:26:34Z) - Many learning agents interacting with an agent-based market model [0.0]
We consider the dynamics of learning optimal execution trading agents interacting with a reactive Agent-Based Model.
The model represents a market ecology with three trophic levels: optimal execution learning agents, minimally intelligent liquidity takers, and fast electronic liquidity providers.
We examine whether the inclusion of optimal execution agents that can learn is able to produce dynamics with the same complexity as empirical data.
arXiv Detail & Related papers (2023-03-13T18:15:52Z) - Uniswap Liquidity Provision: An Online Learning Approach [49.145538162253594]
Decentralized Exchanges (DEXs) are new types of marketplaces leveraging blockchain technology.
One such DEX, Uniswap v3, allows liquidity providers to allocate funds more efficiently by specifying an active price interval for their funds.
This introduces the problem of finding an optimal strategy for choosing price intervals.
We formalize this problem as an online learning problem with non-stochastic rewards.
arXiv Detail & Related papers (2023-02-01T17:21:40Z) - Predictive Crypto-Asset Automated Market Making Architecture for
Decentralized Finance using Deep Reinforcement Learning [0.0]
The study proposes a quote-driven predictive automated market maker (AMM) platform with on-chain custody and settlement functions.
The proposed architecture augments Uniswap V3, a cryptocurrency AMM protocol, with a novel market equilibrium pricing mechanism for reduced divergence and slippage loss.
arXiv Detail & Related papers (2022-09-28T01:13:22Z) - MA2QL: A Minimalist Approach to Fully Decentralized Multi-Agent
Reinforcement Learning [63.46052494151171]
We propose multi-agent alternate Q-learning (MA2QL), where agents take turns to update their Q-functions by Q-learning.
We prove that when each agent guarantees an $\varepsilon$-convergence at each turn, their joint policy converges to a Nash equilibrium.
Results show MA2QL consistently outperforms IQL, which verifies the effectiveness of MA2QL, despite such minimal changes.
arXiv Detail & Related papers (2022-09-17T04:54:32Z) - Deep Q-Learning Market Makers in a Multi-Agent Simulated Stock Market [58.720142291102135]
This paper focuses precisely on the study of these market makers' strategies from an agent-based perspective.
We propose the application of Reinforcement Learning (RL) for the creation of intelligent market makers in simulated stock markets.
arXiv Detail & Related papers (2021-12-08T14:55:21Z) - Weighted QMIX: Expanding Monotonic Value Function Factorisation for Deep
Multi-Agent Reinforcement Learning [66.94149388181343]
We present a new version of a popular $Q$-learning algorithm for MARL.
We show that it can recover the optimal policy even with access to $Q^*$.
We also demonstrate improved performance on predator-prey and challenging multi-agent StarCraft benchmark tasks.
arXiv Detail & Related papers (2020-06-18T18:34:50Z) - A Deep Reinforcement Learning Framework for Continuous Intraday Market
Bidding [69.37299910149981]
A key component for the successful integration of renewable energy sources is the use of energy storage.
We propose a novel modelling framework for the strategic participation of energy storage in the European continuous intraday market.
A distributed version of the fitted Q algorithm is chosen for solving this problem due to its sample efficiency.
Results indicate that the agent converges to a policy that achieves, on average, higher total revenues than the benchmark strategy.
arXiv Detail & Related papers (2020-04-13T13:50:13Z) - Monotonic Value Function Factorisation for Deep Multi-Agent
Reinforcement Learning [55.20040781688844]
QMIX is a novel value-based method that can train decentralised policies in a centralised end-to-end fashion.
We propose the StarCraft Multi-Agent Challenge (SMAC) as a new benchmark for deep multi-agent reinforcement learning.
arXiv Detail & Related papers (2020-03-19T16:51:51Z)
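As a point of reference for the constant-product (x*y=k) model that the first related paper above surveys, and for the slippage discussed in the abstract, the sketch below shows the basic swap rule such an AMM enforces. The reserve sizes and the 0.3% fee are arbitrary illustrative values, not figures taken from any of the listed papers.

```python
# Minimal constant-product (x*y = k) swap sketch; values are arbitrary illustrations.
def swap_out(reserve_in: float, reserve_out: float, amount_in: float, fee: float = 0.003) -> float:
    """Output amount that keeps reserve_in * reserve_out constant after a fee-adjusted input."""
    amount_in_after_fee = amount_in * (1 - fee)
    k = reserve_in * reserve_out
    new_reserve_in = reserve_in + amount_in_after_fee
    return reserve_out - k / new_reserve_in

x, y = 1_000_000.0, 500.0            # e.g. USDC and ETH reserves (hypothetical pool)
spot_price = x / y                   # quoted price of 1 ETH in USDC
out = swap_out(y, x, 1.0)            # sell 1 ETH into the pool
slippage = 1 - out / spot_price      # shortfall versus the quoted spot price
print(f"received {out:.2f} USDC, slippage {slippage:.4%}")
```

Here, selling 1 ETH returns slightly less than the quoted spot price would suggest; that shortfall is the slippage, and it grows with trade size relative to the pool's reserves.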