ZeroSwap: Data-driven Optimal Market Making in DeFi
- URL: http://arxiv.org/abs/2310.09413v3
- Date: Mon, 29 Apr 2024 15:08:17 GMT
- Title: ZeroSwap: Data-driven Optimal Market Making in DeFi
- Authors: Viraj Nadkarni, Jiachen Hu, Ranvir Rana, Chi Jin, Sanjeev Kulkarni, Pramod Viswanath
- Abstract summary: Automated Market Makers (AMMs) are major centers of matching liquidity supply and demand in Decentralized Finance.
We propose the first optimal Bayesian algorithm and the first model-free, data-driven algorithm to optimally track the external price of the asset.
- Score: 23.671367118750872
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated Market Makers (AMMs) are major centers of matching liquidity supply and demand in Decentralized Finance. Their functioning relies primarily on the presence of liquidity providers (LPs) incentivized to invest their assets into a liquidity pool. However, the prices at which a pooled asset is traded are often more stale than the prices on centralized and more liquid exchanges, so LPs suffer losses to arbitrage. This problem is addressed by adapting market prices to trader behavior, captured via the classical market microstructure model of Glosten and Milgrom. In this paper, we propose the first optimal Bayesian algorithm and the first model-free, data-driven algorithm to optimally track the external price of the asset. The notion of optimality that we use enforces a zero-profit condition on the prices of the market maker, hence the name ZeroSwap. This ensures that the market maker balances losses to informed traders with profits from noise traders. The key property of our approach is the ability to estimate the external market price without the need for price oracles or loss oracles. Our theoretical guarantees on the performance of both algorithms, ensuring the stability and convergence of their price recommendations, are of independent interest in the theory of reinforcement learning. We empirically demonstrate the robustness of our algorithms to changing market conditions.
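For intuition about the zero-profit condition mentioned above, the sketch below implements a textbook Glosten-Milgrom style Bayesian market maker with a two-point prior on the external price: the posted ask and bid are the conditional expectations of the price given that the next order is a buy or a sell, so the quotes break even in expectation against the mix of informed and noise traders. This is a minimal illustration under simplifying assumptions, not the paper's ZeroSwap algorithm; the constants (V_LOW, V_HIGH, ALPHA) and function names are hypothetical.

```python
import random

# Illustrative sketch only: a textbook Glosten-Milgrom Bayesian market maker with a
# two-point prior on the external price. This is NOT the paper's ZeroSwap algorithm;
# all constants and names below are hypothetical choices for the example.

V_LOW, V_HIGH = 90.0, 110.0  # assumed two-point support for the unknown external price V
ALPHA = 0.3                  # assumed fraction of informed traders (who know the true V)


def quotes(p_high):
    """Zero-profit quotes: ask = E[V | next order is a buy], bid = E[V | next order is a sell]."""
    p_buy_given_high = ALPHA + (1 - ALPHA) / 2   # informed traders buy, plus half of the noise traders
    p_buy_given_low = (1 - ALPHA) / 2            # only noise traders buy when V is low
    p_buy = p_high * p_buy_given_high + (1 - p_high) * p_buy_given_low
    post_buy = p_high * p_buy_given_high / p_buy                # P(V = V_HIGH | buy)
    post_sell = p_high * (1 - p_buy_given_high) / (1 - p_buy)   # P(V = V_HIGH | sell)
    ask = post_buy * V_HIGH + (1 - post_buy) * V_LOW
    bid = post_sell * V_HIGH + (1 - post_sell) * V_LOW
    return bid, ask, post_buy, post_sell


def simulate(true_v=V_HIGH, steps=200, seed=0):
    """Run a toy order flow; the quotes drift toward the true external price."""
    random.seed(seed)
    p_high = 0.5  # prior belief P(V = V_HIGH)
    for _ in range(steps):
        _, _, post_buy, post_sell = quotes(p_high)
        if random.random() < ALPHA:                       # informed trader
            order = "buy" if true_v == V_HIGH else "sell"
        else:                                             # noise trader
            order = random.choice(["buy", "sell"])
        p_high = post_buy if order == "buy" else post_sell  # Bayes update on order direction
    bid, ask, _, _ = quotes(p_high)
    return bid, ask


if __name__ == "__main__":
    print(simulate())  # both quotes approach V_HIGH as buy-side order flow accumulates
```

Under this update, an imbalance of buys pushes the belief (and hence both quotes) toward V_HIGH, illustrating how order flow alone can track the external price without a price oracle.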
Related papers
- Reinforcement Learning for Corporate Bond Trading: A Sell Side Perspective [0.0]
A corporate bond trader provides a quote by adding a spread over a prevalent market price.
For illiquid bonds, the market price is harder to observe, and traders often resort to available benchmark bond prices.
In this paper, we approach the estimation of an optimal bid-ask spread quoting strategy in a data-driven manner and show that it can be learned using Reinforcement Learning.
arXiv Detail & Related papers (2024-06-18T18:02:35Z) - Adaptive Liquidity Provision in Uniswap V3 with Deep Reinforcement Learning [19.916721360624997]
Decentralized exchanges (DEXs) are a cornerstone of decentralized finance (DeFi).
This paper introduces a deep reinforcement learning (DRL) solution designed to adaptively adjust price ranges.
Our approach also neutralizes price-change risks by hedging the liquidity position through a rebalancing portfolio.
arXiv Detail & Related papers (2023-09-18T20:10:28Z) - Diffusion Variational Autoencoder for Tackling Stochasticity in Multi-Step Regression Stock Price Prediction [54.21695754082441]
Multi-step stock price prediction over a long-term horizon is crucial for forecasting a stock's volatility.
Current solutions to multi-step stock price prediction are mostly designed for single-step, classification-based predictions.
We combine a deep hierarchical variational-autoencoder (VAE) and diffusion probabilistic techniques to do seq2seq stock prediction.
Our model is shown to outperform state-of-the-art solutions in terms of its prediction accuracy and variance.
arXiv Detail & Related papers (2023-08-18T16:21:15Z) - UAMM: Price-oracle based Automated Market Maker [42.32743590150279]
We propose a new approach known as UBET AMM, which calculates prices by considering external market prices and the impermanent loss of the liquidity pool.
We demonstrate that our approach eliminates arbitrage opportunities when external market prices are efficient.
arXiv Detail & Related papers (2023-08-11T20:17:22Z) - Deep Policy Gradient Methods in Commodity Markets [0.0]
Traders play an important role in stabilizing markets by providing liquidity and reducing volatility.
This thesis investigates the effectiveness of deep reinforcement learning methods in commodities trading.
arXiv Detail & Related papers (2023-06-14T11:50:23Z) - Equilibrium of Data Markets with Externality [5.383900608313559]
We model real-world data markets, where sellers post fixed prices and buyers are free to purchase from any set of sellers.
A key component here is the negative externality buyers induce on one another due to data purchases.
We prove that platforms intervening through a transaction cost can lead to a pure equilibrium with strong welfare guarantees.
arXiv Detail & Related papers (2023-02-16T00:57:49Z) - Uniswap Liquidity Provision: An Online Learning Approach [49.145538162253594]
Decentralized Exchanges (DEXs) are new types of marketplaces leveraging blockchain technology.
One such DEX, Uniswap v3, allows liquidity providers to allocate funds more efficiently by specifying an active price interval for their funds.
This introduces the problem of finding an optimal strategy for choosing price intervals.
We formalize this problem as an online learning problem with non-stochastic rewards.
arXiv Detail & Related papers (2023-02-01T17:21:40Z) - Quantum computational finance: martingale asset pricing for incomplete markets [69.73491758935712]
We show that a variety of quantum techniques can be applied to the pricing problem in finance.
We discuss three different methods that are distinct from previous works.
arXiv Detail & Related papers (2022-09-19T09:22:01Z) - A Deep Reinforcement Learning Framework for Continuous Intraday Market Bidding [69.37299910149981]
A key component for successfully integrating renewable energy sources is the use of energy storage.
We propose a novel modelling framework for the strategic participation of energy storage in the European continuous intraday market.
A distributed version of the fitted Q algorithm is chosen for solving this problem due to its sample efficiency.
Results indicate that the agent converges to a policy that achieves, on average, higher total revenues than the benchmark strategy.
arXiv Detail & Related papers (2020-04-13T13:50:13Z) - Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States [71.54651874063865]
Portfolio management (PM) aims to achieve investment goals such as maximal profits or minimal risks.
In this paper, we propose SARL, a novel State-Augmented RL framework for PM.
Our framework aims to address two unique challenges in financial PM: (1) data heterogeneity -- the collected information for each asset is usually diverse, noisy and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary.
arXiv Detail & Related papers (2020-02-09T08:10:03Z)