Many learning agents interacting with an agent-based market model
- URL: http://arxiv.org/abs/2303.07393v3
- Date: Sun, 19 Nov 2023 17:07:39 GMT
- Title: Many learning agents interacting with an agent-based market model
- Authors: Matthew Dicks, Andrew Paskaramoorthy, Tim Gebbie
- Abstract summary: We consider the dynamics of learning optimal execution trading agents interacting with a reactive Agent-Based Model.
The model represents a market ecology with three trophic levels: optimal execution learning agents, minimally intelligent liquidity takers, and fast electronic liquidity providers.
We examine whether the inclusion of optimal execution agents that can learn is able to produce dynamics with the same complexity as empirical data.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: We consider the dynamics and the interactions of multiple reinforcement
learning optimal execution trading agents interacting with a reactive
Agent-Based Model (ABM) of a financial market in event time. The model
represents a market ecology with three trophic levels: optimal
execution learning agents, minimally intelligent liquidity takers, and fast
electronic liquidity providers. The optimal execution agent classes include
buying and selling agents that can either use a combination of limit orders and
market orders, or only trade using market orders. The reward function
explicitly balances trade execution slippage against the penalty of not
executing the order timeously. This work demonstrates how multiple competing
learning agents impact a minimally intelligent market simulation as functions
of the number of agents, the size of agents' initial orders, and the state
spaces used for learning. We use phase space plots to examine the dynamics of
the ABM, when various specifications of learning agents are included. Further,
we examine whether the inclusion of optimal execution agents that can learn is
able to produce dynamics with the same complexity as empirical data. We find
that the inclusion of optimal execution agents changes the stylised facts
produced by the ABM to conform more closely with empirical data, and that such
agents are a necessary inclusion for ABMs investigating market micro-structure.
However, adding execution agents to chartist-fundamentalist-noise ABMs is insufficient to
recover the complexity observed in empirical data.
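The reward structure described in the abstract, which balances execution slippage against a penalty for not executing timeously, can be sketched as follows. The function name, the linear penalty form, and the weight `penalty_weight` are illustrative assumptions, not the paper's exact specification.

```python
# Hypothetical sketch of an optimal-execution reward that trades off
# slippage against a penalty on inventory left unexecuted. The linear
# forms and `penalty_weight` are assumptions for illustration only.

def execution_reward(arrival_price: float,
                     fill_price: float,
                     fill_volume: float,
                     remaining_volume: float,
                     is_buy: bool,
                     penalty_weight: float = 1.0) -> float:
    """Reward = negative slippage on the filled volume, minus a penalty
    proportional to the volume still outstanding at this step."""
    # Slippage per share is positive when execution is worse than arrival:
    # paying more when buying, receiving less when selling.
    side = 1.0 if is_buy else -1.0
    slippage = side * (fill_price - arrival_price) * fill_volume
    # Penalty for failing to execute the order timeously.
    penalty = penalty_weight * remaining_volume
    return -slippage - penalty

# Example: a buyer fills 100 shares at 10.02 against an arrival price
# of 10.00, with 50 shares still outstanding.
r = execution_reward(10.00, 10.02, 100, 50, is_buy=True)  # -> -52.0
```

The sign convention means a learning agent maximising this reward is pushed simultaneously toward cheap fills and toward completing the parent order before the horizon.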
Related papers
- Watch Every Step! LLM Agent Learning via Iterative Step-Level Process Refinement
The Iterative Step-Level Process Refinement (IPR) framework provides detailed step-by-step guidance to enhance agent training.
Our experiments on three complex agent tasks demonstrate that our framework outperforms a variety of strong baselines.
arXiv Detail & Related papers (2024-06-17T03:29:13Z)
- MOT: A Mixture of Actors Reinforcement Learning Method by Optimal Transport for Algorithmic Trading
We propose MOT, which designs multiple actors with disentangled representation learning to model the different patterns of the market.
Experimental results on real futures market data demonstrate that MOT exhibits excellent profit capabilities while balancing risks.
arXiv Detail & Related papers (2024-06-03T01:42:52Z)
- EconAgent: Large Language Model-Empowered Agents for Simulating Macroeconomic Activities
We introduce EconAgent, a large language model-empowered agent with human-like characteristics for macroeconomic simulation.
We first construct a simulation environment that incorporates various market dynamics driven by agents' decisions.
Through the perception module, we create heterogeneous agents with distinct decision-making mechanisms.
arXiv Detail & Related papers (2023-10-16T14:19:40Z)
- Learning Multi-Agent Intention-Aware Communication for Optimal Multi-Order Execution in Finance
We first present a multi-agent RL (MARL) method for multi-order execution considering practical constraints.
We propose a learnable multi-round communication protocol through which agents communicate their intended actions to one another.
Experiments on the data from two real-world markets have illustrated superior performance with significantly better collaboration effectiveness.
arXiv Detail & Related papers (2023-07-06T16:45:40Z)
- MERMAIDE: Learning to Align Learners using Model-Based Meta-Learning
We study how a principal can efficiently and effectively intervene on the rewards of a previously unseen learning agent in order to induce desirable outcomes.
This is relevant to many real-world settings like auctions or taxation, where the principal may not know the learning behavior nor the rewards of real people.
We introduce MERMAIDE, a model-based meta-learning framework to train a principal that can quickly adapt to out-of-distribution agents.
arXiv Detail & Related papers (2023-04-10T15:44:50Z)
- A simple learning agent interacting with an agent-based market model
We consider the learning dynamics of a single reinforcement learning optimal execution trading agent when it interacts with an agent-based financial market model.
We find that the moments of the model are robust to the impact of the learning agent, except for the Hurst exponent.
The introduction of the learning agent preserves the shape of the price impact curves but can reduce the trade-sign auto-correlations when their trading volumes increase.
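The trade-sign auto-correlation mentioned here can be measured directly from a sequence of signed trades (+1 for buyer-initiated, -1 for seller-initiated). This sketch of the standard sample auto-correlation is a generic illustration of the stylised fact, not the papers' own code.

```python
import numpy as np

def trade_sign_autocorrelation(signs, lag: int) -> float:
    """Sample auto-correlation of a +1/-1 trade-sign series at a given lag.
    A slowly decaying positive value is the long-memory order-flow
    stylised fact discussed in the market-microstructure literature."""
    s = np.asarray(signs, dtype=float)
    s = s - s.mean()          # centre the series
    denom = np.dot(s, s)      # variance times series length
    if denom == 0.0:
        return 0.0            # constant series: no fluctuations to correlate
    return float(np.dot(s[:-lag], s[lag:]) / denom)

# Example: a perfectly alternating series is strongly anti-correlated at lag 1.
acf1 = trade_sign_autocorrelation([1, -1, 1, -1, 1, -1, 1, -1], lag=1)
```

In empirical data this auto-correlation decays slowly over many lags; a reduction of it when learning agents trade larger volumes is the effect the summary above reports.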
arXiv Detail & Related papers (2022-08-22T16:42:06Z)
- A Modular Framework for Reinforcement Learning Optimal Execution
We develop a modular framework for the application of Reinforcement Learning to the problem of Optimal Trade Execution.
The framework is designed with flexibility in mind, in order to ease the implementation of different simulation setups.
arXiv Detail & Related papers (2022-08-11T09:40:42Z)
- Finding General Equilibria in Many-Agent Economic Simulations Using Deep Reinforcement Learning
We show that deep reinforcement learning can discover stable solutions that are epsilon-Nash equilibria for a meta-game over agent types.
Our approach is more flexible and does not need unrealistic assumptions, e.g., market clearing.
We demonstrate our approach in real-business-cycle models, a representative family of DGE models, with 100 worker-consumers, 10 firms, and a government that taxes and redistributes.
arXiv Detail & Related papers (2022-01-03T17:00:17Z)
- Multi-Agent Imitation Learning with Copulas
Multi-agent imitation learning aims to train multiple agents to perform tasks from demonstrations by learning a mapping between observations and actions.
In this paper, we propose to use copula, a powerful statistical tool for capturing dependence among random variables, to explicitly model the correlation and coordination in multi-agent systems.
Our proposed model is able to separately learn marginals that capture the local behavioral patterns of each individual agent, as well as a copula function that solely and fully captures the dependence structure among agents.
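The marginal-plus-copula decomposition described above can be illustrated with a Gaussian copula: correlated standard normals are mapped to uniforms via the normal CDF, then pushed through each agent's inverse marginal CDF. This is a generic sketch of the statistical tool, not the paper's model; the function and parameter names are assumptions.

```python
import numpy as np
from scipy import stats

def sample_gaussian_copula(corr, marginal_ppfs, n_samples, seed=0):
    """Draw joint samples whose dependence structure is a Gaussian copula
    with correlation matrix `corr`, and whose marginals are given by the
    inverse-CDF (ppf) functions in `marginal_ppfs`."""
    rng = np.random.default_rng(seed)
    dim = len(marginal_ppfs)
    # 1. Correlated standard normals carry the dependence among agents.
    z = rng.multivariate_normal(np.zeros(dim), corr, size=n_samples)
    # 2. The normal CDF maps each coordinate to a uniform on [0, 1].
    u = stats.norm.cdf(z)
    # 3. Each inverse marginal CDF imposes that agent's local behaviour.
    return np.column_stack([ppf(u[:, i])
                            for i, ppf in enumerate(marginal_ppfs)])

# Example: two agents with exponential and uniform marginals,
# coupled through a Gaussian copula with correlation 0.8.
corr = np.array([[1.0, 0.8], [0.8, 1.0]])
samples = sample_gaussian_copula(corr,
                                 [stats.expon.ppf, stats.uniform.ppf],
                                 n_samples=1000)
```

The separation is exactly the one the summary describes: the `marginal_ppfs` capture each agent's local behavioural pattern, while `corr` alone captures the coordination among agents.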
arXiv Detail & Related papers (2021-07-10T03:49:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.