Market Making with Deep Reinforcement Learning from Limit Order Books
- URL: http://arxiv.org/abs/2305.15821v1
- Date: Thu, 25 May 2023 08:05:19 GMT
- Title: Market Making with Deep Reinforcement Learning from Limit Order Books
- Authors: Hong Guo, Jianwu Lin and Fanlin Huang
- Abstract summary: This paper proposes an RL agent for market making with limit order book (LOB) data.
We leverage a neural network with convolutional filters and attention mechanism (Attn-LOB) for feature extraction.
We design a new continuous action space and a hybrid reward function for the MM task.
- Score: 2.569647910019739
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Market making (MM) is an important research topic in quantitative finance,
in which an agent continuously optimizes ask and bid quotes to provide
liquidity and make profits. The limit order book (LOB) contains information on
all active limit orders and is an essential basis for decision-making. Modeling
the evolving, high-dimensional, low signal-to-noise-ratio LOB data is a
critical challenge. Traditional MM strategies rely on strong assumptions about,
e.g., the price process and the order arrival process. Previous reinforcement
learning (RL) work relies on handcrafted market features, which are
insufficient to represent the market. This paper proposes an RL agent for
market making with LOB data. We leverage a neural network with convolutional
filters and an attention mechanism (Attn-LOB) for feature extraction from the
LOB. We design a new continuous action space and a hybrid reward function for
the MM task. Finally, we conduct comprehensive experiments on latency and
interpretability, showing that our agent has good applicability.
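The hybrid reward function is not spelled out in this summary; a common form in the MM literature combines the per-step PnL with an inventory-risk penalty. A minimal sketch under that assumption (the quadratic penalty and the weight `eta` are illustrative, not taken from the paper):

```python
# Hedged sketch of a hybrid market-making reward: step PnL minus an
# inventory penalty. The exact functional form used by the paper is not
# given in the abstract; the quadratic penalty and weight `eta` are
# assumptions for illustration.

def hybrid_reward(pnl_prev, pnl_curr, inventory, eta=0.01):
    """Reward = PnL earned this step minus a quadratic inventory penalty."""
    step_pnl = pnl_curr - pnl_prev          # profit realized this step
    risk_penalty = eta * inventory ** 2     # discourages holding inventory
    return step_pnl - risk_penalty

# Example: the agent earned 0.5 this step while holding 10 units,
# so the penalty (0.01 * 100 = 1.0) outweighs the profit.
r = hybrid_reward(pnl_prev=100.0, pnl_curr=100.5, inventory=10)
```

Shaping the reward this way trades raw profit against inventory risk, which is the central tension any MM agent must balance.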
Related papers
- MobiLlama: Towards Accurate and Lightweight Fully Transparent GPT [87.4910758026772]
"Bigger the better" has been the predominant trend in recent Large Language Models (LLMs) development.
This paper explores the "less is more" paradigm by addressing the challenge of designing accurate yet efficient Small Language Models (SLMs) for resource constrained devices.
arXiv Detail & Related papers (2024-02-26T18:59:03Z) - FinMem: A Performance-Enhanced LLM Trading Agent with Layered Memory and
Character Design [11.913409501633616]
FinMem is a novel LLM-based agent framework devised for financial decision-making.
FinMem's memory module aligns closely with the cognitive structure of human traders, offering robust interpretability.
This framework enables the agent to self-evolve its professional knowledge, react agilely to new investment cues, and continuously refine trading decisions.
arXiv Detail & Related papers (2023-11-23T00:24:40Z) - Many learning agents interacting with an agent-based market model [0.0]
We consider the dynamics of learning optimal execution trading agents interacting with a reactive Agent-Based Model.
The model represents a market ecology with three trophic levels: optimal execution learning agents, minimally intelligent liquidity takers, and fast electronic liquidity providers.
We examine whether the inclusion of optimal execution agents that can learn is able to produce dynamics with the same complexity as empirical data.
arXiv Detail & Related papers (2023-03-13T18:15:52Z) - Neural Stochastic Agent-Based Limit Order Book Simulation: A Hybrid
Methodology [6.09170287691728]
Modern financial exchanges use an electronic limit order book (LOB) to store bid and ask orders for a specific financial asset.
We propose a novel hybrid LOB simulation paradigm characterised by: (1) representing the aggregation of market events' logic by a neural background trader that is pre-trained on historical LOB data through a neural point model; and (2) embedding the background trader in a multi-agent simulation with other trading agents.
We show that the stylised facts remain and we demonstrate order flow impact and financial herding behaviours that are in accordance with empirical observations of real markets.
arXiv Detail & Related papers (2023-02-28T20:53:39Z) - DSLOB: A Synthetic Limit Order Book Dataset for Benchmarking Forecasting
Algorithms under Distributional Shift [16.326002979578686]
In electronic trading markets, limit order books (LOBs) provide information about pending buy/sell orders at various price levels for a given security.
Recently, there has been a growing interest in using LOB data for resolving downstream machine learning tasks.
arXiv Detail & Related papers (2022-11-17T06:33:27Z) - Deep Q-Learning Market Makers in a Multi-Agent Simulated Stock Market [58.720142291102135]
This paper focuses on the study of these market makers' strategies from an agent-based perspective.
We propose the application of Reinforcement Learning (RL) for the creation of intelligent market makers in simulated stock markets.
arXiv Detail & Related papers (2021-12-08T14:55:21Z) - Automated Machine Learning, Bounded Rationality, and Rational
Metareasoning [62.997667081978825]
We will look at automated machine learning (AutoML) and related problems from the perspective of bounded rationality.
Taking actions under bounded resources requires an agent to reflect on how to use these resources in an optimal way.
arXiv Detail & Related papers (2021-09-10T09:10:20Z) - The LOB Recreation Model: Predicting the Limit Order Book from TAQ
History Using an Ordinary Differential Equation Recurrent Neural Network [9.686252465354274]
We present the LOB recreation model, a first attempt from a deep learning perspective to recreate the top five price levels of the public limit order book (LOB) for small-tick stocks.
By the paradigm of transfer learning, the source model trained on one stock can be fine-tuned to enable application to other financial assets of the same class.
arXiv Detail & Related papers (2021-03-02T12:07:43Z) - Breaking the Curse of Many Agents: Provable Mean Embedding Q-Iteration
for Mean-Field Reinforcement Learning [135.64775986546505]
We exploit the symmetry of agents in multi-agent reinforcement learning (MARL).
We propose the MF-FQI algorithm, which solves mean-field MARL, and establish a non-asymptotic analysis for it.
We highlight that MF-FQI algorithm enjoys a "blessing of many agents" property in the sense that a larger number of observed agents improves the performance of MF-FQI algorithm.
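The mean-field idea behind this scalability can be illustrated with a toy computation: each agent conditions on the empirical average of the population's states rather than on the full joint state, so the effective input size is fixed regardless of the number of agents. A minimal sketch (the function name is illustrative, not from the paper):

```python
# Toy illustration of the mean-field idea underlying MF-FQI: instead of
# conditioning on every other agent's state, each agent observes only the
# empirical mean of the population. Names here are illustrative.

def mean_field_state(agent_states):
    """Empirical mean of per-agent state vectors."""
    n = len(agent_states)
    dim = len(agent_states[0])
    return [sum(s[d] for s in agent_states) / n for d in range(dim)]

# Three agents with 2-dimensional states collapse to one 2-dim summary.
states = [[1.0, 0.0], [3.0, 2.0], [2.0, 4.0]]
mf = mean_field_state(states)
```

Because the summary statistic averages over agents, adding more observed agents sharpens the estimate of the population distribution, which is one intuition for the "blessing of many agents".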
arXiv Detail & Related papers (2020-06-21T21:45:50Z) - A Deep Reinforcement Learning Framework for Continuous Intraday Market
Bidding [69.37299910149981]
A key component for the successful renewable energy sources integration is the usage of energy storage.
We propose a novel modelling framework for the strategic participation of energy storage in the European continuous intraday market.
A distributed version of the fitted Q iteration algorithm is chosen to solve this problem due to its sample efficiency.
Results indicate that the agent converges to a policy that achieves, on average, higher total revenues than the benchmark strategy.
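Fitted Q iteration, mentioned above, repeatedly regresses Bellman targets computed from a fixed batch of transitions. A minimal tabular sketch on an assumed toy MDP (the paper's distributed variant and intraday-market environment are not reproduced here; with a tabular function class, the "fit" step reduces to overwriting the table):

```python
# Minimal tabular fitted-Q-iteration sketch over a fixed batch of
# (state, action, reward, next_state) transitions. The two-state toy MDP
# below is an assumption for illustration only.
from collections import defaultdict

def fitted_q_iteration(batch, actions, gamma=0.9, n_iters=50):
    Q = defaultdict(float)  # Q[(state, action)], initialized to 0.0
    for _ in range(n_iters):
        targets = {}
        for s, a, r, s_next in batch:
            # Bellman target computed from the current Q estimate.
            best_next = max(Q[(s_next, a2)] for a2 in actions)
            targets[(s, a)] = r + gamma * best_next
        # "Fitting" a tabular function class = copying in the targets.
        Q.update(targets)
    return dict(Q)

# Toy MDP: action 1 in state 0 reaches state 1, where reward 1.0 recurs.
batch = [(0, 0, 0.0, 0), (0, 1, 0.0, 1), (1, 0, 1.0, 1)]
Q = fitted_q_iteration(batch, actions=[0, 1])
```

On this toy batch, `Q[(1, 0)]` approaches the discounted return 1/(1 - 0.9) = 10, and the agent correctly prefers action 1 in state 0.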
arXiv Detail & Related papers (2020-04-13T13:50:13Z) - Reinforcement-Learning based Portfolio Management with Augmented Asset
Movement Prediction States [71.54651874063865]
Portfolio management (PM) aims to achieve investment goals such as maximal profits or minimal risks.
In this paper, we propose SARL, a novel State-Augmented RL framework for PM.
Our framework aims to address two unique challenges in financial PM: (1) heterogeneous data -- the collected information for each asset is usually diverse, noisy and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary.
arXiv Detail & Related papers (2020-02-09T08:10:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.