Modelling crypto markets by multi-agent reinforcement learning
- URL: http://arxiv.org/abs/2402.10803v1
- Date: Fri, 16 Feb 2024 16:28:58 GMT
- Title: Modelling crypto markets by multi-agent reinforcement learning
- Authors: Johann Lussange, Stefano Vrizzi, Stefano Palminteri, Boris Gutkin
- Abstract summary: This study introduces a multi-agent reinforcement learning (MARL) model simulating crypto markets.
It is calibrated to Binance's daily closing prices of 153 cryptocurrencies that were continuously traded between 2018 and 2022.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Building on a previous foundation work (Lussange et al. 2020), this study
introduces a multi-agent reinforcement learning (MARL) model simulating crypto
markets, which is calibrated to Binance's daily closing prices of 153
cryptocurrencies that were continuously traded between 2018 and 2022. Unlike
previous agent-based models (ABM) or multi-agent systems (MAS) which relied on
zero-intelligence agents or single autonomous agent methodologies, our approach
relies on endowing agents with reinforcement learning (RL) techniques in order
to model crypto markets. This integration is designed to emulate, with a
bottom-up approach to complexity inference, both individual and collective
agents, ensuring robustness in the recent volatile conditions of such markets
and during the COVID-19 era. A key feature of our model is that its autonomous
agents perform asset price valuation based on two sources of information: the
market prices themselves, and an approximation of the crypto assets'
fundamental values beyond those market prices. Our MAS
calibration against real market data allows for an accurate emulation of crypto
market microstructure and for probing key market behaviors, in both the bearish
and bullish regimes of that particular time period.
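The two-source valuation described in the abstract can be sketched as a simple convex blend; the function name and the weighting rule below are illustrative assumptions, not the paper's actual agent design.

```python
def agent_valuation(market_price, fundamental_estimate, weight):
    """Blend the two information sources the abstract describes: the
    observed market price and the agent's own approximation of the
    asset's fundamental value. `weight` in [0, 1] measures how much
    the agent trusts its fundamental estimate (illustrative rule)."""
    if not 0.0 <= weight <= 1.0:
        raise ValueError("weight must lie in [0, 1]")
    return (1.0 - weight) * market_price + weight * fundamental_estimate

# A chartist-leaning agent (low weight) tracks the market price,
# while a fundamentalist-leaning agent (high weight) tracks its estimate.
chartist_value = agent_valuation(100.0, 80.0, 0.1)        # close to 100
fundamentalist_value = agent_valuation(100.0, 80.0, 0.9)  # close to 80
```

Heterogeneous weights across agents are one common way agent-based models generate a chartist/fundamentalist spectrum.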
Related papers
- Cryptocurrency Price Forecasting Using XGBoost Regressor and Technical Indicators [2.038893829552158]
This study introduces a machine learning approach to predict cryptocurrency prices.
We make use of important technical indicators such as the Exponential Moving Average (EMA) and Moving Average Convergence Divergence (MACD) to build the features fed to the XGBoost regressor model.
We evaluate the model's performance through various simulations, showing promising results.
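The two indicators named above have standard closed-form recursions; a minimal sketch in pure Python (with the XGBoost regressor itself omitted) might look like:

```python
def ema(prices, n):
    """Exponential moving average with the standard smoothing factor
    alpha = 2 / (n + 1), seeded with the first price."""
    alpha = 2.0 / (n + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1.0 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26, signal=9):
    """MACD line = EMA(fast) - EMA(slow); signal line = EMA of the
    MACD line. These series would then be used as features for a
    regressor such as XGBoost (regressor omitted here)."""
    fast_ema, slow_ema = ema(prices, fast), ema(prices, slow)
    macd_line = [f - s for f, s in zip(fast_ema, slow_ema)]
    return macd_line, ema(macd_line, signal)
```

For a steadily rising price series the fast EMA stays above the slow EMA, so the MACD line ends positive, which is the momentum signal such features encode.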
arXiv Detail & Related papers (2024-07-16T14:41:27Z) - Modelling Opaque Bilateral Market Dynamics in Financial Trading: Insights from a Multi-Agent Simulation Study [15.379345372327375]
This paper aims to represent the opaque bilateral market for Australian government bond trading.
The uniqueness of the bilateral market, characterized by negotiated transactions and a limited number of agents, yields valuable insights for agent-based modelling and quantitative finance.
We explore the implications of market rigidity on market structure and consider the element of stability, in market design.
arXiv Detail & Related papers (2024-05-05T08:42:20Z) - A Network Simulation of OTC Markets with Multiple Agents [3.8944986367855963]
We present a novel approach to simulating an over-the-counter (OTC) financial market in which trades are intermediated solely by market makers.
We show that our network-based model can lend insights into the effect of market-structure on price-action.
arXiv Detail & Related papers (2024-05-03T20:45:00Z) - Joint Latent Topic Discovery and Expectation Modeling for Financial Markets [45.758436505779386]
We present a groundbreaking framework for financial market analysis.
This approach is the first to jointly model investor expectations and automatically mine latent stock relationships.
Our model consistently achieves an annual return exceeding 10%.
arXiv Detail & Related papers (2023-06-01T01:36:51Z) - Regime-based Implied Stochastic Volatility Model for Crypto Option Pricing [0.0]
Existing methodologies fail to cope with the volatile nature of emerging Digital Assets (DAs).
We combine recent advances in market regime (MR) clustering with the Implied Stochastic Volatility Model (ISVM).
ISVM can incorporate investor expectations in each of the sentiment-driven periods by using implied volatility (IV) data.
We demonstrate that MR-ISVM helps overcome the burden of complex adaptation to jumps in the higher-order characteristics of option pricing models.
arXiv Detail & Related papers (2022-08-15T15:31:42Z) - Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics in Limit-Order Book Markets [84.90242084523565]
Traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics.
By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention.
Using predictive distributions to analyze errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives.
arXiv Detail & Related papers (2022-03-07T18:59:54Z) - Finding General Equilibria in Many-Agent Economic Simulations Using Deep Reinforcement Learning [72.23843557783533]
We show that deep reinforcement learning can discover stable solutions that are epsilon-Nash equilibria for a meta-game over agent types.
Our approach is more flexible and does not need unrealistic assumptions, e.g., market clearing.
We demonstrate our approach in real-business-cycle models, a representative family of DGE models, with 100 worker-consumers, 10 firms, and a government that taxes and redistributes.
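The epsilon-Nash notion used above has a direct check in finite games: a joint strategy profile is an epsilon-Nash equilibrium if no agent can gain more than epsilon by deviating unilaterally. A sketch under the assumption of finite strategy sets (the paper works with learned policies, not tables):

```python
def is_epsilon_nash(payoff, profile, strategy_sets, epsilon):
    """payoff(i, joint) -> agent i's payoff under the joint strategy
    tuple `joint`. Returns True if, for every agent, no unilateral
    deviation improves its payoff by more than epsilon."""
    for i, own_strategies in enumerate(strategy_sets):
        base = payoff(i, profile)
        for s in own_strategies:
            deviant = profile[:i] + (s,) + profile[i + 1:]
            if payoff(i, deviant) > base + epsilon:
                return False
    return True

# Illustrative 2-agent game (prisoner's dilemma; 0 = cooperate, 1 = defect).
TABLE = {(0, 0): (3, 3), (0, 1): (0, 5), (1, 0): (5, 0), (1, 1): (1, 1)}
def pd_payoff(i, joint):
    return TABLE[joint][i]
```

With epsilon = 0 only mutual defection passes the check; with epsilon = 2 mutual cooperation also passes, since the best deviation gains exactly 2.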
arXiv Detail & Related papers (2022-01-03T17:00:17Z) - Deep Q-Learning Market Makers in a Multi-Agent Simulated Stock Market [58.720142291102135]
This paper focuses on the study of these market makers' strategies from an agent-based perspective.
We propose the application of Reinforcement Learning (RL) for the creation of intelligent market makers in simulated stock markets.
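The deep Q-learning behind such market-maker agents reduces, in tabular form, to the standard Bellman update; the state and action names below (inventory buckets, quoted spreads) are illustrative assumptions, and the paper uses neural networks rather than a lookup table:

```python
ACTIONS = ("tight_spread", "wide_spread")  # illustrative quoting actions

def q_update(q, state, action, reward, next_state, alpha=0.1, gamma=0.99):
    """One tabular Q-learning step:
    Q(s, a) += alpha * (r + gamma * max_a' Q(s', a') - Q(s, a))."""
    best_next = max(q.get((next_state, a), 0.0) for a in ACTIONS)
    old = q.get((state, action), 0.0)
    q[(state, action)] = old + alpha * (reward + gamma * best_next - old)

q = {}
# Reward 1.0 for quoting a tight spread while holding flat inventory.
q_update(q, "flat_inventory", "tight_spread", 1.0, "long_inventory")
```

Replacing the dictionary `q` with a neural network that maps states to action values is exactly the step from this tabular sketch to the deep Q-learning of the paper.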
arXiv Detail & Related papers (2021-12-08T14:55:21Z) - Bitcoin Transaction Strategy Construction Based on Deep Reinforcement Learning [8.431365407963629]
This study proposes a framework for automatic high-frequency bitcoin transactions based on a deep reinforcement learning algorithm, proximal policy optimization (PPO).
The proposed framework can earn excess returns through both volatile and surging periods, which opens the door to research on building single-cryptocurrency trading strategies based on deep learning.
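PPO, the algorithm named above, optimizes a clipped surrogate objective; its per-sample form can be sketched as follows (a standard textbook formulation, not code from the paper):

```python
def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO's clipped surrogate for one sample:
    min(ratio * A, clip(ratio, 1 - eps, 1 + eps) * A),
    where ratio = pi_new(a|s) / pi_old(a|s) and A is the advantage.
    Clipping removes the incentive to move the policy too far in
    a single update."""
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped * advantage)
```

The min with the clipped term caps the gain from increasing the probability of a favorable action (positive advantage) and symmetrically caps it for unfavorable ones, which is what makes PPO stable enough for noisy financial rewards.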
arXiv Detail & Related papers (2021-09-30T01:24:03Z) - OSOUM Framework for Trading Data Research [79.0383470835073]
We supply, to the best of our knowledge, the first open source simulation platform, Open SOUrce Market Simulator (OSOUM) to analyze trading markets and specifically data markets.
We describe and implement a specific data market model, consisting of two types of agents: sellers who own various datasets available for acquisition, and buyers searching for relevant and beneficial datasets for purchase.
Although commercial frameworks, intended for handling data markets, already exist, we provide a free and extensive end-to-end research tool for simulating possible behavior for both buyers and sellers participating in (data) markets.
arXiv Detail & Related papers (2021-02-18T09:20:26Z) - Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States [71.54651874063865]
Portfolio management (PM) aims to achieve investment goals such as maximal profits or minimal risks.
In this paper, we propose SARL, a novel State-Augmented RL framework for PM.
Our framework aims to address two unique challenges in financial PM: (1) data heterogeneity -- the collected information for each asset is usually diverse, noisy and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary.
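The state augmentation SARL performs can be sketched in miniature: the price-based state is extended with an external movement prediction (the news/prediction encoder is omitted, and all names here are illustrative assumptions, not the paper's API):

```python
def augment_state(price_state, predicted_movement):
    """State augmentation in the spirit of SARL (sketch): extend the
    raw price-based state with an external asset-movement prediction,
    e.g. the output of a news encoder (encoder omitted here)."""
    return tuple(price_state) + (float(predicted_movement),)

# The RL policy then conditions on the augmented state rather than
# on raw prices alone.
state = augment_state((0.01, -0.02, 0.03), predicted_movement=1.0)
```

Keeping the prediction as one extra state dimension is what lets noisy, heterogeneous side information enter the policy without changing the RL algorithm itself.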
arXiv Detail & Related papers (2020-02-09T08:10:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences of its use.