Towards a fully RL-based Market Simulator
- URL: http://arxiv.org/abs/2110.06829v1
- Date: Wed, 13 Oct 2021 16:14:19 GMT
- Title: Towards a fully RL-based Market Simulator
- Authors: Leo Ardon, Nelson Vadori, Thomas Spooner, Mengda Xu, Jared Vann,
Sumitra Ganesh
- Abstract summary: We present a new financial framework where two families of RL-based agents learn simultaneously to satisfy their objectives.
This is a step towards a fully RL-based market simulator replicating complex market conditions.
- Score: 4.648677931378919
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a new financial framework where two families of RL-based agents
representing the Liquidity Providers and Liquidity Takers learn simultaneously
to satisfy their objectives. Thanks to a parametrized reward formulation and the
use of Deep RL, each group learns a shared policy able to generalize and
interpolate over a wide range of behaviors. This is a step towards a fully
RL-based market simulator replicating complex market conditions particularly
suited to study the dynamics of the financial market under various scenarios.
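The abstract's core idea — a parametrized reward so one shared policy covers a range of behaviors — can be illustrated with a small sketch. The exact reward used in the paper is not given here; the function names (`lp_reward`, `make_observation`) and the inventory-penalty form are illustrative assumptions only.

```python
import numpy as np

def lp_reward(pnl: float, inventory: float, risk_aversion: float) -> float:
    """Hypothetical per-step reward for a liquidity provider: PnL earned
    from the spread minus an inventory penalty scaled by a risk-aversion
    parameter (an assumed form, not the paper's exact formulation)."""
    return pnl - risk_aversion * abs(inventory)

def make_observation(market_features: np.ndarray, risk_aversion: float) -> np.ndarray:
    """Condition the shared policy on the reward parameter by appending it
    to the market state, so a single network can interpolate across the
    behaviors induced by different parameter values."""
    return np.concatenate([market_features, [risk_aversion]])
```

With `risk_aversion = 0` the agent is rewarded purely on PnL; larger values make it increasingly inventory-averse, and sampling the parameter per episode lets one policy generalize over that whole family.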
Related papers
- Deep Reinforcement Learning Agents for Strategic Production Policies in Microeconomic Market Simulations [1.6499388997661122]
We propose a DRL-based approach to obtain an effective policy in competitive markets with multiple producers.
Our framework enables agents to learn adaptive production policies across several simulations, consistently outperforming static and random strategies.
The results show that agents trained with DRL can strategically adjust production levels to maximize long-term profitability.
arXiv Detail & Related papers (2024-10-27T18:38:05Z)
- Optimizing Portfolio with Two-Sided Transactions and Lending: A Reinforcement Learning Framework [0.0]
This study presents a Reinforcement Learning-based portfolio management model tailored for high-risk environments.
We implement the model using a Soft Actor-Critic (SAC) agent with a convolutional neural network and multi-head attention.
Tested over two 16-month periods of varying market volatility, the model significantly outperformed benchmarks.
arXiv Detail & Related papers (2024-08-09T23:36:58Z)
- Developing A Multi-Agent and Self-Adaptive Framework with Deep Reinforcement Learning for Dynamic Portfolio Risk Management [1.2016264781280588]
A multi-agent reinforcement learning (RL) approach is proposed to balance the trade-off between the overall portfolio returns and their potential risks.
The obtained empirical results clearly reveal the potential strengths of our proposed MASA framework.
arXiv Detail & Related papers (2024-02-01T11:31:26Z)
- Towards Multi-Agent Reinforcement Learning driven Over-The-Counter Market Simulations [16.48389671789281]
We study a game between liquidity provider and liquidity taker agents interacting in an over-the-counter market.
By playing against each other, our deep-reinforcement-learning-driven agents learn emergent behaviors.
We show convergence rates for our multi-agent policy gradient algorithm under a transitivity assumption.
arXiv Detail & Related papers (2022-10-13T17:06:08Z)
- Multi-Asset Spot and Option Market Simulation [52.77024349608834]
We construct realistic spot and equity option market simulators for a single underlying on the basis of normalizing flows.
We leverage the conditional invertibility property of normalizing flows and introduce a scalable method to calibrate the joint distribution of a set of independent simulators.
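The "conditional invertibility" this entry relies on is the defining property of coupling-based normalizing flows: the transform and its inverse are both available in closed form. A minimal RealNVP-style layer sketches the mechanism; the fixed linear conditioner here is a toy stand-in (a real flow uses neural networks), and the class name is an assumption, not the paper's code.

```python
import numpy as np

class AffineCoupling:
    """Minimal affine coupling layer: the first half of the coordinates
    passes through unchanged and parametrizes an affine map of the second
    half, so the inverse is computable in closed form."""
    def __init__(self, dim: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.half = dim // 2
        # Toy "conditioner": fixed random linear maps instead of neural nets.
        self.W_s = 0.1 * rng.standard_normal((self.half, dim - self.half))
        self.W_t = 0.1 * rng.standard_normal((self.half, dim - self.half))

    def forward(self, x: np.ndarray) -> np.ndarray:
        x1, x2 = x[:self.half], x[self.half:]
        s, t = x1 @ self.W_s, x1 @ self.W_t   # scale/shift depend only on x1
        return np.concatenate([x1, x2 * np.exp(s) + t])

    def inverse(self, y: np.ndarray) -> np.ndarray:
        y1, y2 = y[:self.half], y[self.half:]
        s, t = y1 @ self.W_s, y1 @ self.W_t   # recompute from the untouched half
        return np.concatenate([y1, (y2 - t) * np.exp(-s)])
```

Because the untouched half determines the scale and shift, the inverse recovers the input exactly; stacking such layers (with the halves alternating) yields the expressive, invertible simulators the entry describes.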
arXiv Detail & Related papers (2021-12-13T17:34:28Z)
- FinRL-Meta: A Universe of Near-Real Market Environments for Data-Driven Deep Reinforcement Learning in Quantitative Finance [58.77314662664463]
FinRL-Meta builds a universe of market environments for data-driven financial reinforcement learning.
First, FinRL-Meta separates financial data processing from the design pipeline of DRL-based strategy.
Second, FinRL-Meta provides hundreds of market environments for various trading tasks.
arXiv Detail & Related papers (2021-12-13T16:03:37Z)
- Deep Q-Learning Market Makers in a Multi-Agent Simulated Stock Market [58.720142291102135]
This paper focuses on the study of these market makers' strategies from an agent-based perspective.
We propose the application of Reinforcement Learning (RL) for the creation of intelligent market makers in simulated stock markets.
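The shape of a Q-learning market maker can be sketched with a toy tabular example: the state is an inventory bucket, the action is a spread choice, and wider spreads earn more per fill but fill less often. The environment dynamics below are invented for illustration and bear no relation to the paper's simulator.

```python
import numpy as np

rng = np.random.default_rng(1)
n_states, n_actions = 3, 4                    # inventory bucket -> spread choice
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.1, 0.95, 0.1            # learning rate, discount, exploration

def step(state: int, action: int):
    """Toy environment: wider quoted spreads pay more per fill but are
    hit less often; inventory moves exogenously in this sketch."""
    spread = action + 1
    filled = rng.random() < 1.0 / spread      # fill probability falls with spread
    reward = float(spread) if filled else 0.0
    next_state = int(rng.integers(n_states))
    return reward, next_state

state = 0
for _ in range(5000):
    # epsilon-greedy action selection over the tabular Q-values
    action = int(rng.integers(n_actions)) if rng.random() < eps else int(Q[state].argmax())
    reward, nxt = step(state, action)
    # standard one-step Q-learning update
    Q[state, action] += alpha * (reward + gamma * Q[nxt].max() - Q[state, action])
    state = nxt
```

The paper's agents replace the table with a deep Q-network and the toy fill model with an agent-based order book, but the update rule is the same.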
arXiv Detail & Related papers (2021-12-08T14:55:21Z)
- Scenic4RL: Programmatic Modeling and Generation of Reinforcement Learning Environments [89.04823188871906]
Generation of diverse realistic scenarios is challenging for real-time strategy (RTS) environments.
Most of the existing simulators rely on randomly generating the environments.
We introduce the benefits of adopting an existing formal scenario specification language, SCENIC, to assist researchers.
arXiv Detail & Related papers (2021-06-18T21:49:46Z)
- Distributed Reinforcement Learning for Cooperative Multi-Robot Object Manipulation [53.262360083572005]
We consider solving a cooperative multi-robot object manipulation task using reinforcement learning (RL).
We propose two distributed multi-agent RL approaches: distributed approximate RL (DA-RL) and game-theoretic RL (GT-RL).
Although we focus on a small system of two agents in this paper, both DA-RL and GT-RL apply to general multi-agent systems, and are expected to scale well to large systems.
arXiv Detail & Related papers (2020-03-21T00:43:54Z)
- Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States [71.54651874063865]
Portfolio management (PM) aims to achieve investment goals such as maximal profits or minimal risks.
In this paper, we propose SARL, a novel State-Augmented RL framework for PM.
Our framework aims to address two unique challenges in financial PM: (1) data heterogeneity -- the collected information for each asset is usually diverse, noisy and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary.
arXiv Detail & Related papers (2020-02-09T08:10:03Z)
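Mechanically, the state augmentation SARL describes amounts to attaching an externally predicted movement signal to each asset's raw price features before the policy sees them. A one-function sketch, with shapes and the function name chosen here for illustration:

```python
import numpy as np

def augment_states(price_features: np.ndarray, predictions: np.ndarray) -> np.ndarray:
    """Append a per-asset movement prediction (e.g. from a news model) as an
    extra feature column, producing the augmented state the policy consumes.

    price_features: (n_assets, n_features) raw market observations
    predictions:    (n_assets,) predicted movement per asset
    """
    return np.hstack([price_features, predictions[:, None]])
```

The policy then trains on the `(n_assets, n_features + 1)` augmented state, letting it exploit the prediction without the predictor being part of the RL loop.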
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.