A simple learning agent interacting with an agent-based market model
- URL: http://arxiv.org/abs/2208.10434v4
- Date: Sat, 11 Nov 2023 18:23:26 GMT
- Title: A simple learning agent interacting with an agent-based market model
- Authors: Matthew Dicks, Andrew Paskaramoorthy, Tim Gebbie
- Abstract summary: We consider the learning dynamics of a single reinforcement learning optimal execution trading agent when it interacts with an agent-based financial market model.
We find that the moments of the model are robust to the impact of the learning agent, except for the Hurst exponent.
The introduction of the learning agent preserves the shape of the price impact curves but can reduce the trade-sign autocorrelations when its trading volume increases.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the learning dynamics of a single reinforcement learning optimal execution trading agent when it interacts with an event-driven agent-based financial market model. Trading takes place asynchronously through a matching engine in event time. The optimal execution agent is considered at different levels of initial order size and with differently sized state spaces. The resulting impact on the agent-based model and market is assessed using a calibration approach that explores changes in the empirical stylised facts and price impact curves. Convergence, volume-trajectory and action-trace plots are used to visualise the learning dynamics. The smaller state-space agents converged on the set of states they visited much faster than the larger state-space agents, and they were able to start learning to trade intuitively using the spread and volume states. We find that the moments of the model are robust to the impact of the learning agent, except for the Hurst exponent, which was lowered by the introduction of strategic order splitting. The introduction of the learning agent preserves the shape of the price impact curves but can reduce the trade-sign autocorrelations when its trading volume increases.
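The kind of tabular learning agent described in the abstract can be illustrated with a minimal Q-learning sketch. The state discretisation (spread and volume buckets), the action set (scaling a baseline child-order size), and the reward are illustrative assumptions for exposition only, not the paper's actual configuration or environment.

```python
import random

# Minimal sketch of a tabular Q-learning execution agent, assuming a
# coarse state space over spread and best-quote volume buckets and
# actions that scale a baseline child-order size. All parameters here
# are hypothetical, not taken from the paper.
ALPHA, GAMMA, EPSILON = 0.1, 0.95, 0.1
N_SPREAD, N_VOLUME = 3, 3           # coarse state-space grid
ACTIONS = [0.5, 1.0, 1.5]           # fraction of the baseline child order

Q = {}  # (spread_bucket, volume_bucket) -> list of action values

def get_q(state):
    return Q.setdefault(state, [0.0] * len(ACTIONS))

def choose_action(state):
    # epsilon-greedy policy over the tabular action values
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    q = get_q(state)
    return q.index(max(q))

def update(state, action, reward, next_state):
    # standard one-step Q-learning update
    q = get_q(state)
    target = reward + GAMMA * max(get_q(next_state))
    q[action] += ALPHA * (target - q[action])

# Toy environment loop: the reward penalises trading aggressively when
# the spread is wide, a crude stand-in for implementation shortfall.
random.seed(0)
for episode in range(200):
    state = (random.randrange(N_SPREAD), random.randrange(N_VOLUME))
    for _ in range(10):
        a = choose_action(state)
        spread, vol = state
        reward = -ACTIONS[a] * spread   # wide spread + big order = costly
        next_state = (random.randrange(N_SPREAD), random.randrange(N_VOLUME))
        update(state, a, reward, next_state)
        state = next_state

print(len(Q))  # number of distinct states visited during learning
```

Tracking `len(Q)` over training is one way to reproduce the convergence diagnostics mentioned above: a smaller grid saturates its visited-state count much sooner than a larger one.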
Related papers
- MOT: A Mixture of Actors Reinforcement Learning Method by Optimal Transport for Algorithmic Trading
We propose MOT, which designs multiple actors with disentangled representation learning to model the different patterns of the market.
Experimental results on real futures market data demonstrate that MOT exhibits excellent profit capabilities while balancing risks.
arXiv Detail & Related papers (2024-06-03T01:42:52Z)
- An Auction-based Marketplace for Model Trading in Federated Learning
Federated learning (FL) is increasingly recognized for its efficacy in training models using locally distributed data.
We frame FL as a marketplace of models, where clients act as both buyers and sellers.
We propose an auction-based solution to ensure proper pricing based on performance gain.
arXiv Detail & Related papers (2024-02-02T07:25:53Z)
- MERMAIDE: Learning to Align Learners using Model-Based Meta-Learning
We study how a principal can efficiently and effectively intervene on the rewards of a previously unseen learning agent in order to induce desirable outcomes.
This is relevant to many real-world settings like auctions or taxation, where the principal may know neither the learning behavior nor the rewards of real people.
We introduce MERMAIDE, a model-based meta-learning framework to train a principal that can quickly adapt to out-of-distribution agents.
arXiv Detail & Related papers (2023-04-10T15:44:50Z)
- Many learning agents interacting with an agent-based market model
We consider the dynamics of learning optimal execution trading agents interacting with a reactive Agent-Based Model.
The model represents a market ecology with three trophic levels: optimal execution learning agents, minimally intelligent liquidity takers, and fast electronic liquidity providers.
We examine whether the inclusion of optimal execution agents that can learn is able to produce dynamics with the same complexity as empirical data.
arXiv Detail & Related papers (2023-03-13T18:15:52Z)
- Decentralized scheduling through an adaptive, trading-based multi-agent system
In multi-agent reinforcement learning systems, the actions of one agent can have a negative impact on the rewards of other agents.
This work applies a trading approach to a simulated scheduling environment, where the agents are responsible for the assignment of incoming jobs to compute cores.
The agents can trade the usage right of computational cores to process high-priority, high-reward jobs faster than low-priority, low-reward jobs.
arXiv Detail & Related papers (2022-07-05T13:50:18Z)
- Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z)
- Finding General Equilibria in Many-Agent Economic Simulations Using Deep Reinforcement Learning
We show that deep reinforcement learning can discover stable solutions that are epsilon-Nash equilibria for a meta-game over agent types.
Our approach is more flexible and does not need unrealistic assumptions, e.g., market clearing.
We demonstrate our approach in real-business-cycle models, a representative family of DGE models, with 100 worker-consumers, 10 firms, and a government that taxes and redistributes.
arXiv Detail & Related papers (2022-01-03T17:00:17Z)
- Reinforcement Learning for Systematic FX Trading
We conduct a detailed experiment on major cash pairs, accurately accounting for transaction and funding costs.
These sources of profit and loss, including the price trends that occur in the currency markets, are made available to our recurrent reinforcement learner.
This is despite forcing the model to trade at the close of the trading day (5pm EST), when trading costs are statistically at their highest.
arXiv Detail & Related papers (2021-10-10T09:44:29Z)
- Multi-Agent Imitation Learning with Copulas
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
arXiv Detail & Related papers (2021-07-10T03:49:41Z)
- What can I do here? A Theory of Affordances in Reinforcement Learning
We develop a theory of affordances for agents who learn and plan in Markov Decision Processes.
Affordances play a dual role in this case, by reducing the number of actions available in any given situation.
We propose an approach to learn affordances and use it to estimate transition models that are simpler and generalize better.
arXiv Detail & Related papers (2020-06-26T16:34:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.