DeepScalper: A Risk-Aware Reinforcement Learning Framework to Capture
Fleeting Intraday Trading Opportunities
- URL: http://arxiv.org/abs/2201.09058v3
- Date: Sun, 21 Aug 2022 05:11:01 GMT
- Title: DeepScalper: A Risk-Aware Reinforcement Learning Framework to Capture
Fleeting Intraday Trading Opportunities
- Authors: Shuo Sun, Wanqi Xue, Rundong Wang, Xu He, Junlei Zhu, Jian Li, Bo An
- Abstract summary: We propose DeepScalper, a deep reinforcement learning framework for intraday trading.
We show that DeepScalper significantly outperforms many state-of-the-art baselines in terms of four financial criteria.
- Score: 33.28409845878758
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reinforcement learning (RL) techniques have shown great success in many
challenging quantitative trading tasks, such as portfolio management and
algorithmic trading. In particular, intraday trading is one of the most profitable
and risky tasks, because intraday market behaviors reflect billions of dollars of
rapidly moving capital. However, the vast majority of existing RL methods focus on
relatively low-frequency trading scenarios
(e.g., day-level) and fail to capture the fleeting intraday investment
opportunities due to two major challenges: 1) how to effectively train
profitable RL agents for intraday investment decision-making, which involves a
high-dimensional, fine-grained action space; 2) how to learn meaningful
multi-modal market representations that capture the intraday behaviors of
the financial market at the tick level. Motivated by the efficient workflow of
professional human intraday traders, we propose DeepScalper, a deep
reinforcement learning framework for intraday trading to tackle the above
challenges. Specifically, DeepScalper includes four components: 1) a dueling
Q-network with action branching to deal with the large action space of intraday
trading for efficient RL optimization; 2) a novel reward function with a
hindsight bonus that encourages RL agents to make trading decisions over the
long-term horizon of the entire trading day; 3) an encoder-decoder architecture
to learn multi-modal temporal market embeddings that incorporate both
macro-level and micro-level market information; 4) a risk-aware auxiliary task
to strike a balance between maximizing profit and minimizing risk.
Through extensive experiments on three years of real-world market data covering
six financial futures, we demonstrate that DeepScalper significantly
outperforms many state-of-the-art baselines in terms of four financial
criteria.
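The first two components lend themselves to a compact illustration. The following PyTorch sketch shows a dueling Q-network with action branching (a shared state-value head plus one advantage head per action dimension) and a hindsight-augmented reward in the spirit described above; the branch sizes, hidden width, and bonus weight are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class BranchingDuelingQNet(nn.Module):
    """Dueling Q-network with action branching (sketch).

    A shared trunk feeds one state-value head V(s) and one advantage head
    per action branch (e.g., order price level and order volume level), so
    the joint intraday action space factorizes across branches instead of
    growing combinatorially.
    """

    def __init__(self, state_dim: int, branch_sizes=(11, 11), hidden: int = 128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.value = nn.Linear(hidden, 1)  # V(s)
        self.advantages = nn.ModuleList(
            [nn.Linear(hidden, n) for n in branch_sizes]  # A_d(s, a_d)
        )

    def forward(self, state: torch.Tensor):
        h = self.trunk(state)
        v = self.value(h)
        # Dueling aggregation per branch: Q_d(s, a_d) = V(s) + A_d(s, a_d) - mean_a A_d(s, a)
        return [v + adv(h) - adv(h).mean(dim=-1, keepdim=True)
                for adv in self.advantages]


def hindsight_reward(r_t: float, price_t: float, price_t_plus_h: float,
                     w: float = 0.1) -> float:
    """Immediate reward plus a weighted hindsight bonus based on the price
    h steps in the future, nudging the agent toward decisions that pay off
    over the rest of the trading day (w and h are illustrative choices)."""
    return r_t + w * (price_t_plus_h - price_t)
```

Factorizing the action space across branches keeps the number of Q-outputs additive in the price and volume levels rather than multiplicative, which is what makes the fine-grained intraday action space tractable for value-based RL.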
Related papers
- Optimizing Portfolio with Two-Sided Transactions and Lending: A Reinforcement Learning Framework [0.0]
This study presents a Reinforcement Learning-based portfolio management model tailored for high-risk environments.
We implement the model using a Soft Actor-Critic (SAC) agent whose feature extractor combines a Convolutional Neural Network with Multi-Head Attention.
Tested over two 16-month periods of varying market volatility, the model significantly outperformed benchmarks.
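As a rough illustration of that architecture choice, the hypothetical PyTorch module below stacks a 1-D convolution over a window of market features and a multi-head self-attention layer to produce the state embedding an SAC actor and critic would consume; all layer names and sizes are assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn


class ConvAttentionEncoder(nn.Module):
    """CNN + multi-head attention state encoder (sketch).

    Input: a window of market features with shape (batch, window, n_features).
    A 1-D convolution extracts local temporal patterns, multi-head
    self-attention mixes information across the window, and mean pooling
    yields a fixed-size embedding for the SAC actor and critics.
    """

    def __init__(self, n_features: int, embed_dim: int = 64, n_heads: int = 4):
        super().__init__()
        self.conv = nn.Conv1d(n_features, embed_dim, kernel_size=3, padding=1)
        self.attn = nn.MultiheadAttention(embed_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(embed_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, window, embed_dim)
        a, _ = self.attn(h, h, h)                          # self-attention over time
        return self.norm(h + a).mean(dim=1)                # pooled state embedding
```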
arXiv Detail & Related papers (2024-08-09T23:36:58Z)
- When AI Meets Finance (StockAgent): Large Language Model-based Stock Trading in Simulated Real-world Environments [55.19252983108372]
We have developed a multi-agent AI system called StockAgent, driven by LLMs.
The StockAgent allows users to evaluate the impact of different external factors on investor trading.
It avoids the test set leakage issue present in existing trading simulation systems based on AI Agents.
arXiv Detail & Related papers (2024-07-15T06:49:30Z) - MacroHFT: Memory Augmented Context-aware Reinforcement Learning On High Frequency Trading [20.3106468936159]
Reinforcement learning (RL) has become another appealing approach for high-frequency trading (HFT).
We propose a novel Memory Augmented Context-aware Reinforcement learning method On HFT, a.k.a. MacroHFT.
We show that MacroHFT can achieve state-of-the-art performance on minute-level trading tasks.
arXiv Detail & Related papers (2024-06-20T17:48:24Z)
- IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making [33.23156884634365]
Reinforcement Learning technology has achieved remarkable success in quantitative trading.
Most existing RL-based market making methods focus on optimizing single-price level strategies.
We propose Imitative Market Maker (IMM), a novel RL framework leveraging both knowledge from suboptimal signal-based experts and direct policy interactions.
arXiv Detail & Related papers (2023-08-17T11:04:09Z)
- Mastering Pair Trading with Risk-Aware Recurrent Reinforcement Learning [10.566829415146426]
CREDIT is a risk-aware agent capable of learning to exploit long-term trading opportunities in pair trading, much like a human expert.
CREDIT is the first to apply a bidirectional GRU together with a temporal attention mechanism to fully capture the temporal correlations embedded in the states.
This helps the agent master pair trading with a robust trading preference that avoids risky trades which may yield high returns but also large losses.
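As a rough sketch of the encoder that sentence describes, the module below runs a bidirectional GRU over a window of state features and pools the hidden states with a learned temporal attention; the dimensions and pooling choice are assumptions, not CREDIT's actual configuration.

```python
import torch
import torch.nn as nn


class BiGRUTemporalAttention(nn.Module):
    """Bidirectional GRU with temporal attention pooling (sketch).

    The GRU reads a window of state features in both directions; a learned
    scoring layer weights each time step and the weighted sum becomes the
    final representation, so decisions can weigh the whole history.
    """

    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.gru = nn.GRU(n_features, hidden, batch_first=True, bidirectional=True)
        self.score = nn.Linear(2 * hidden, 1)  # one attention score per time step

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, _ = self.gru(x)                             # (batch, window, 2*hidden)
        weights = torch.softmax(self.score(h), dim=1)  # attention over time steps
        return (weights * h).sum(dim=1)                # (batch, 2*hidden)
```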
arXiv Detail & Related papers (2023-04-01T18:12:37Z)
- Factor Investing with a Deep Multi-Factor Model [123.52358449455231]
We develop a novel deep multi-factor model that adopts industry neutralization and market neutralization modules with clear financial insights.
Tests on real-world stock market data demonstrate the effectiveness of our deep multi-factor model.
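Industry and market neutralization are standard cross-sectional operations; as a generic illustration of what such modules compute (not the paper's specific design), the snippet below demeans a factor within each industry and then regresses out its market-beta component.

```python
import numpy as np


def neutralize(factor: np.ndarray, industry: np.ndarray, market: np.ndarray) -> np.ndarray:
    """Demean a factor within each industry, then regress out the market factor.

    factor:   (n_stocks,) raw factor exposures
    industry: (n_stocks,) integer industry labels
    market:   (n_stocks,) market-factor exposures (e.g., betas)
    """
    out = factor.astype(float).copy()
    # Industry neutralization: subtract each industry's cross-sectional mean.
    for g in np.unique(industry):
        mask = industry == g
        out[mask] -= out[mask].mean()
    # Market neutralization: remove the component explained by the market factor.
    beta = np.dot(out, market) / np.dot(market, market)
    return out - beta * market
```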
arXiv Detail & Related papers (2022-10-22T14:47:11Z)
- Quantitative Stock Investment by Routing Uncertainty-Aware Trading Experts: A Multi-Task Learning Approach [29.706515133374193]
We show that existing deep learning methods are sensitive to random seeds and network routers.
We propose a novel two-stage mixture-of-experts (MoE) framework for quantitative investment to mimic the efficient bottom-up trading strategy design workflow of successful trading firms.
AlphaMix significantly outperforms many state-of-the-art baselines in terms of four financial criteria.
arXiv Detail & Related papers (2022-06-07T08:58:00Z)
- Deep Q-Learning Market Makers in a Multi-Agent Simulated Stock Market [58.720142291102135]
This paper focuses precisely on the study of these market makers' strategies from an agent-based perspective.
We propose the application of Reinforcement Learning (RL) for the creation of intelligent market makers in simulated stock markets.
arXiv Detail & Related papers (2021-12-08T14:55:21Z)
- A Deep Reinforcement Learning Framework for Continuous Intraday Market Bidding [69.37299910149981]
A key component for the successful integration of renewable energy sources is the use of energy storage.
We propose a novel modelling framework for the strategic participation of energy storage in the European continuous intraday market.
A distributed version of the fitted Q algorithm is chosen to solve this problem due to its sample efficiency.
Results indicate that the agent converges to a policy that achieves, on average, higher total revenues than the benchmark strategy.
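Fitted Q iteration is a well-known batch RL algorithm; the loop below sketches its basic (non-distributed) form, regressing a Q-function onto bootstrapped targets computed from a fixed batch of transitions. The regressor choice and function names are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor


def fitted_q_iteration(transitions, n_actions, n_iters=50, gamma=0.99):
    """Basic fitted Q iteration on a fixed batch of (s, a, r, s') tuples (sketch).

    Each iteration regresses Q(s, a) onto r + gamma * max_a' Q(s', a'),
    using the previous iteration's model to compute bootstrapped targets.
    """
    states = np.array([t[0] for t in transitions])
    actions = np.array([t[1] for t in transitions]).reshape(-1, 1)
    rewards = np.array([t[2] for t in transitions])
    next_states = np.array([t[3] for t in transitions])

    model = None
    for _ in range(n_iters):
        if model is None:
            targets = rewards  # first iteration: Q ~ immediate reward
        else:
            # Evaluate every candidate action in the next state and take the max.
            q_next = np.column_stack([
                model.predict(np.hstack([next_states,
                                         np.full((len(next_states), 1), a)]))
                for a in range(n_actions)
            ])
            targets = rewards + gamma * q_next.max(axis=1)
        model = ExtraTreesRegressor(n_estimators=50)
        model.fit(np.hstack([states, actions]), targets)
    return model
```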
arXiv Detail & Related papers (2020-04-13T13:50:13Z)
- Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States [71.54651874063865]
Portfolio management (PM) aims to achieve investment goals such as maximal profits or minimal risks.
In this paper, we propose SARL, a novel State-Augmented RL framework for PM.
Our framework aims to address two unique challenges in financial PM: (1) data heterogeneity -- the collected information for each asset is usually diverse, noisy and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary.
arXiv Detail & Related papers (2020-02-09T08:10:03Z)