Adaptive Liquidity Provision in Uniswap V3 with Deep Reinforcement Learning
- URL: http://arxiv.org/abs/2309.10129v1
- Date: Mon, 18 Sep 2023 20:10:28 GMT
- Title: Adaptive Liquidity Provision in Uniswap V3 with Deep Reinforcement Learning
- Authors: Haochen Zhang and Xi Chen and Lin F. Yang
- Abstract summary: Decentralized exchanges (DEXs) are a cornerstone of decentralized finance (DeFi).
This paper introduces a deep reinforcement learning (DRL) solution designed to adaptively adjust price ranges.
Our approach also neutralizes price-change risks by hedging the liquidity position through a rebalancing portfolio.
- Score: 19.916721360624997
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized exchanges (DEXs) are a cornerstone of decentralized finance
(DeFi), allowing users to trade cryptocurrencies without the need for
third-party authorization. Investors are incentivized to deposit assets into
liquidity pools, against which users can trade directly, while paying fees to
liquidity providers (LPs). However, a number of unresolved issues related to
capital efficiency and market risk hinder DeFi's further development. Uniswap
V3, a leading and groundbreaking DEX project, addresses capital efficiency by
enabling LPs to concentrate their liquidity within specific price ranges for
deposited assets. Nevertheless, this approach exacerbates market risk, as LPs
earn trading fees only when asset prices are within these predetermined
brackets. To mitigate this issue, this paper introduces a deep reinforcement
learning (DRL) solution designed to adaptively adjust these price ranges,
maximizing profits and mitigating market risks. Our approach also neutralizes
price-change risks by hedging the liquidity position through a rebalancing
portfolio in a centralized futures exchange. The DRL policy aims to optimize
trading fees earned by LPs against associated costs, such as gas fees and
hedging expenses, which is referred to as loss-versus-rebalancing (LVR). Using
simulations with a profit-and-loss (PnL) benchmark, our method demonstrates
superior performance in ETH/USDC and ETH/USDT pools compared to existing
baselines. We believe that this strategy not only offers investors a valuable
asset management tool but also introduces a new incentive mechanism for DEX
designers.
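The abstract describes the mechanics only at a high level, so the following Python sketch is illustrative rather than the paper's implementation: it uses the public Uniswap V3 formulas for a concentrated-liquidity position's holdings and value, and assembles a per-step LP reward of the kind described (trading fees earned only while the price is in range, minus gas when the range is moved, plus the PnL of a futures hedge). Function names and parameter values are assumptions introduced here, not taken from the paper.

```python
import math

def position_holdings(L: float, price: float, p_low: float, p_high: float):
    """Token amounts held by a Uniswap V3 position with liquidity L on [p_low, p_high].

    Standard V3 formulas: outside the range the position is entirely in one token;
    inside the range it holds a mix that depends on the current price.
    """
    sp, sa, sb = math.sqrt(price), math.sqrt(p_low), math.sqrt(p_high)
    if price <= p_low:                       # entirely in the risky token (token0)
        return L * (1 / sa - 1 / sb), 0.0
    if price >= p_high:                      # entirely in the numeraire token (token1)
        return 0.0, L * (sb - sa)
    return L * (1 / sp - 1 / sb), L * (sp - sa)

def position_value(L, price, p_low, p_high):
    """Mark-to-market value of the position in units of token1 (e.g. USDC)."""
    amt0, amt1 = position_holdings(L, price, p_low, p_high)
    return amt0 * price + amt1

def step_reward(fees_earned, in_range, rebalanced, gas_cost, hedge_pnl):
    """Illustrative per-step LP reward: fees accrue only while the price is in range,
    gas is paid when the range is moved, and the futures hedge PnL offsets
    price-change (LVR-style) losses.  Not the paper's exact objective."""
    return (fees_earned if in_range else 0.0) - (gas_cost if rebalanced else 0.0) + hedge_pnl

# Example: value a position of liquidity 1000 on the range [1600, 2000] at price 1800.
print(position_value(L=1000.0, price=1800.0, p_low=1600.0, p_high=2000.0))
```

In a DRL formulation along these lines, the policy would observe pool and market features and output the next price range and hedge size, with a quantity like `step_reward` (or a risk-adjusted version of it) serving as the reward signal.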
Related papers
- When AI Meets Finance (StockAgent): Large Language Model-based Stock Trading in Simulated Real-world Environments [55.19252983108372]
We have developed a multi-agent AI system called StockAgent, driven by LLMs.
The StockAgent allows users to evaluate the impact of different external factors on investor trading.
It avoids the test set leakage issue present in existing trading simulation systems based on AI Agents.
arXiv Detail & Related papers (2024-07-15T06:49:30Z)
- Reinforcement Learning with Maskable Stock Representation for Portfolio Management in Customizable Stock Pools [34.97636568457075]
Portfolio management (PM) is a fundamental financial trading task that explores the optimal periodic reallocation of capital into different stocks to pursue long-term profits.
Existing reinforcement learning (RL) methods require retraining RL agents even after a tiny change to the stock pool, which leads to high computational cost and unstable performance.
We propose EarnMore to handle PM with customizable stock pools (CSPs) through one-shot training in a global stock pool.
arXiv Detail & Related papers (2023-11-17T09:16:59Z)
- ZeroSwap: Data-driven Optimal Market Making in DeFi [23.671367118750872]
Automated Market Makers (AMMs) are major centers of matching liquidity supply and demand in Decentralized Finance.
We propose the first optimal Bayesian and the first model-free data-driven algorithm to optimally track the external price of the asset.
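ZeroSwap's actual algorithms are not reproduced here; as a rough illustration of the idea of data-driven external-price tracking, the toy Bayesian filter below maintains a posterior over a grid of candidate external prices and updates it from the direction of each observed trade, on the premise that informed traders buy when the external price is above the quoted price. The grid, the noise-trader model, and `informed_frac` are assumptions made for this sketch, not values from the paper.

```python
import numpy as np

def update_belief(belief, grid, quoted_price, trade_is_buy, informed_frac=0.7):
    """One Bayesian update of a discrete belief over the external price.

    Likelihood model (assumed for illustration): with probability `informed_frac`
    the trader is informed and buys iff the external price exceeds the quoted
    price; otherwise the trade direction is a coin flip.
    """
    informed_buy = (grid > quoted_price).astype(float)
    p_buy = informed_frac * informed_buy + (1 - informed_frac) * 0.5
    likelihood = p_buy if trade_is_buy else 1.0 - p_buy
    posterior = belief * likelihood
    return posterior / posterior.sum()

grid = np.linspace(1500.0, 2100.0, 121)        # candidate external ETH prices
belief = np.full_like(grid, 1.0 / len(grid))   # uniform prior
for direction in [True, True, False, True]:    # observed trade directions
    belief = update_belief(belief, grid, quoted_price=1800.0, trade_is_buy=direction)
print("posterior mean external price:", float(np.dot(belief, grid)))
```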
arXiv Detail & Related papers (2023-10-13T21:28:19Z)
- Diffusion Variational Autoencoder for Tackling Stochasticity in Multi-Step Regression Stock Price Prediction [54.21695754082441]
Multi-step stock price prediction over a long-term horizon is crucial for forecasting its volatility.
Current solutions to multi-step stock price prediction are mostly designed for single-step, classification-based predictions.
We combine a deep hierarchical variational-autoencoder (VAE) and diffusion probabilistic techniques to do seq2seq stock prediction.
Our model is shown to outperform state-of-the-art solutions in terms of its prediction accuracy and variance.
arXiv Detail & Related papers (2023-08-18T16:21:15Z)
- Uniswap Liquidity Provision: An Online Learning Approach [49.145538162253594]
Decentralized Exchanges (DEXs) are new types of marketplaces leveraging blockchain technology.
One such DEX, Uniswap v3, allows liquidity providers to allocate funds more efficiently by specifying an active price interval for their funds.
This introduces the problem of finding an optimal strategy for choosing price intervals.
We formalize this problem as an online learning problem with non-stochastic rewards.
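The summary does not give the authors' algorithm, but a standard starting point for online learning with non-stochastic (adversarial) rewards over a finite menu of price intervals is the exponential-weights / Hedge scheme. The sketch below is that generic scheme, not the paper's construction; the interval set, learning rate, and full-information reward feedback are placeholders assumed here.

```python
import numpy as np

# Candidate price intervals the LP may choose from (placeholder values).
intervals = [(1700, 1900), (1750, 1850), (1600, 2000), (1780, 1820)]

def hedge_weights(reward_history, eta=0.5):
    """Exponential-weights distribution over intervals given past per-round rewards.

    reward_history: array of shape (rounds, n_intervals) holding each interval's
    (possibly adversarial) reward in every past round, assumed fully observed.
    """
    cumulative = reward_history.sum(axis=0)
    logits = eta * cumulative
    logits -= logits.max()                 # numerical stability
    w = np.exp(logits)
    return w / w.sum()

rng = np.random.default_rng(0)
history = rng.uniform(0.0, 1.0, size=(50, len(intervals)))   # stand-in reward sequence
probs = hedge_weights(history)
chosen = intervals[int(rng.choice(len(intervals), p=probs))]
print("sampling distribution:", np.round(probs, 3), "-> chosen interval:", chosen)
```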
arXiv Detail & Related papers (2023-02-01T17:21:40Z)
- Reinforcement learning for options on target volatility funds [0.0]
We deal with the funding costs arising from hedging the risky securities underlying a target volatility strategy (TVS).
We derive an analytical solution of the problem in the Black and Scholes (BS) scenario.
Then we use Reinforcement Learning (RL) techniques to determine the fund composition leading to the most conservative price under the local volatility (LV) model.
arXiv Detail & Related papers (2021-12-03T10:55:11Z)
- Strategic Liquidity Provision in Uniswap v3 [13.436603092715247]
A liquidity provider (LP) allocates liquidity to one or more closed intervals of the price of an asset.
We formalize the dynamic liquidity provision problem and focus on a general class of strategies for which we provide a neural network-based optimization framework.
arXiv Detail & Related papers (2021-06-22T19:48:02Z)
- Regulation conform DLT-operable payment adapter based on trustless - justified trust combined generalized state channels [77.34726150561087]
Economy of Things (EoT) will be based on software agents running on peer-to-peer trustless networks.
We give an overview of current solutions that differ in their fundamental values and technological possibilities.
We propose to combine the strengths of the crypto based, decentralized trustless elements with established and well regulated means of payment.
arXiv Detail & Related papers (2020-07-03T10:45:55Z)
- A Deep Reinforcement Learning Framework for Continuous Intraday Market Bidding [69.37299910149981]
A key component for the successful integration of renewable energy sources is the use of energy storage.
We propose a novel modelling framework for the strategic participation of energy storage in the European continuous intraday market.
A distributed version of the fitted Q iteration algorithm is chosen for solving this problem due to its sample efficiency.
Results indicate that the agent converges to a policy that achieves on average higher total revenues than the benchmark strategy.
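As a reminder of what fitted Q iteration does, the sketch below is the generic batch algorithm rather than the paper's distributed variant: it repeatedly regresses bootstrapped Q-targets computed from a fixed batch of transitions. The regressor, the discrete action set, and the synthetic batch are choices made here for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fitted_q_iteration(transitions, n_actions, gamma=0.95, n_iters=20):
    """Generic fitted Q iteration on a fixed batch of (state, action, reward, next_state).

    States are 1-D feature vectors; actions are integers in [0, n_actions).
    Returns a regressor approximating Q(state, action) from [state, one_hot(action)].
    """
    s, a, r, s_next = (np.asarray(x) for x in transitions)

    def featurize(states, actions):
        one_hot = np.eye(n_actions)[actions]
        return np.hstack([states, one_hot])

    q = None
    for _ in range(n_iters):
        if q is None:
            targets = r                                   # first pass: Q_0 = immediate reward
        else:
            # max over actions of the current Q estimate at the next state
            q_next = np.column_stack([
                q.predict(featurize(s_next, np.full(len(s_next), act)))
                for act in range(n_actions)
            ])
            targets = r + gamma * q_next.max(axis=1)
        q = RandomForestRegressor(n_estimators=50, random_state=0)
        q.fit(featurize(s, a), targets)
    return q

# Tiny synthetic batch just to show the call signature.
rng = np.random.default_rng(1)
batch = (rng.normal(size=(200, 4)), rng.integers(0, 3, size=200),
         rng.normal(size=200), rng.normal(size=(200, 4)))
q_fn = fitted_q_iteration(batch, n_actions=3)
```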
arXiv Detail & Related papers (2020-04-13T13:50:13Z)
- Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States [71.54651874063865]
Portfolio management (PM) aims to achieve investment goals such as maximal profits or minimal risks.
In this paper, we propose SARL, a novel State-Augmented RL framework for PM.
Our framework aims to address two unique challenges in financial PM: (1) data heterogeneity -- the collected information for each asset is usually diverse, noisy and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary.
arXiv Detail & Related papers (2020-02-09T08:10:03Z)