Safe-FinRL: A Low Bias and Variance Deep Reinforcement Learning
Implementation for High-Freq Stock Trading
- URL: http://arxiv.org/abs/2206.05910v1
- Date: Mon, 13 Jun 2022 05:40:03 GMT
- Title: Safe-FinRL: A Low Bias and Variance Deep Reinforcement Learning
Implementation for High-Freq Stock Trading
- Authors: Zitao Song, Xuyang Jin, Chenliang Li
- Abstract summary: We propose Safe-FinRL, a novel DRL-based high-frequency stock trading strategy enhanced by a near-stationary financial environment.
Our main contributions are twofold: first, we separate the long financial time series into near-stationary short environments.
Second, we implement Trace-SAC in the near-stationary financial environment by incorporating the general Retrace operator into Soft Actor-Critic.
- Score: 26.217805781416764
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, many practitioners in quantitative finance have attempted to
use Deep Reinforcement Learning (DRL) to build better quantitative trading (QT)
strategies. Nevertheless, many existing studies fail to address several serious
challenges, such as the non-stationary financial environment and the
bias-variance trade-off that arises when applying DRL in the real financial
market. In this work, we propose Safe-FinRL, a novel DRL-based high-frequency
stock trading strategy enhanced by a near-stationary financial environment and
low-bias, low-variance value estimation. Our main contributions are twofold:
first, we separate the long financial time series into near-stationary short
environments; second, we implement Trace-SAC in the near-stationary financial
environment by incorporating the general Retrace operator into Soft
Actor-Critic. Extensive experiments on the cryptocurrency market demonstrate
that Safe-FinRL provides stable value estimation and steady policy improvement,
and significantly reduces bias and variance in the near-stationary financial
environment.
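For readers unfamiliar with the Retrace operator: it corrects off-policy returns with truncated importance weights, which is what keeps both bias and variance low when the critic learns from replayed trajectories. The numpy sketch below shows how Retrace-corrected Q-targets could be computed over one near-stationary segment; the soft-value input and the function shape are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def retrace_targets(q, v_next, rewards, pi_probs, mu_probs,
                    gamma=0.99, lam=0.95):
    """Compute Retrace-corrected Q-targets for one trajectory segment.

    q        : Q(s_t, a_t) under the current critic, shape (T,)
    v_next   : soft state value V(s_{t+1}) = E_pi[Q - alpha * log pi], shape (T,)
    rewards  : r_t, shape (T,)
    pi_probs : pi(a_t | s_t) under the current (target) policy, shape (T,)
    mu_probs : mu(a_t | s_t) under the behaviour policy that logged the data
    """
    T = len(rewards)
    # Truncated importance weights c_t = lam * min(1, pi/mu) keep variance low.
    c = lam * np.minimum(1.0, pi_probs / mu_probs)
    # One-step TD errors delta_t = r_t + gamma * V(s_{t+1}) - Q(s_t, a_t).
    delta = rewards + gamma * v_next - q
    targets = np.empty(T)
    acc = 0.0
    # Accumulate corrections backwards: G_t = delta_t + gamma * c_{t+1} * G_{t+1}.
    for t in reversed(range(T)):
        acc = delta[t] + gamma * (c[t + 1] if t + 1 < T else 0.0) * acc
        targets[t] = q[t] + acc
    return targets
```

The critic would then regress Q(s_t, a_t) toward these targets, while the actor keeps the usual entropy-regularized SAC update; each segment corresponds to one near-stationary slice of the long price series.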
Related papers
- Deep Reinforcement Learning Strategies in Finance: Insights into Asset Holding, Trading Behavior, and Purchase Diversity [0.0]
This paper investigates the tendencies of deep reinforcement learning (DRL) algorithms toward holding or trading financial assets, as well as their purchase diversity.
Our findings reveal that each DRL algorithm exhibits unique trading patterns and strategies, with A2C emerging as the top performer in terms of cumulative rewards.
arXiv Detail & Related papers (2024-06-29T20:56:58Z)
- Deep Hedging with Market Impact [0.20482269513546458]
We propose a novel general market-impact dynamic hedging model based on Deep Reinforcement Learning (DRL).
The optimal policy obtained from the DRL model is analysed using several option hedging simulations and compared to commonly used procedures such as delta hedging.
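Delta hedging, the baseline procedure mentioned above, rebalances the underlying position to offset the option's first-order price sensitivity. A minimal sketch using the standard Black-Scholes delta (the market-impact dynamics studied in the paper are not modelled here):

```python
from math import log, sqrt
from statistics import NormalDist

def bs_call_delta(spot, strike, rate, vol, tau):
    """Black-Scholes delta of a European call; tau is time to expiry in years."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * tau) / (vol * sqrt(tau))
    return NormalDist().cdf(d1)

# Rebalance: hold `delta` units of the underlying against each short call.
delta = bs_call_delta(spot=100.0, strike=105.0, rate=0.01, vol=0.2, tau=0.5)
print(f"hedge ratio: {delta:.4f} shares per short call")
```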
arXiv Detail & Related papers (2024-02-20T19:08:24Z)
- Diffusion Variational Autoencoder for Tackling Stochasticity in Multi-Step Regression Stock Price Prediction [54.21695754082441]
Multi-step stock price prediction over a long-term horizon is crucial for forecasting its volatility.
Current solutions to multi-step stock price prediction are mostly designed for single-step, classification-based predictions.
We combine a deep hierarchical variational autoencoder (VAE) and diffusion probabilistic techniques to perform sequence-to-sequence (seq2seq) stock prediction.
Our model is shown to outperform state-of-the-art solutions in terms of its prediction accuracy and variance.
arXiv Detail & Related papers (2023-08-18T16:21:15Z)
- Factor Investing with a Deep Multi-Factor Model [123.52358449455231]
We develop a novel deep multi-factor model that adopts industry neutralization and market neutralization modules with clear financial insights.
Tests on real-world stock market data demonstrate the effectiveness of our deep multi-factor model.
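Industry and market neutralization typically mean removing the component of a raw factor score explained by industry membership and by market beta, leaving a "pure" stock-selection signal. A hedged numpy sketch of the classical linear version (the paper's modules are learned, so this is illustrative only):

```python
import numpy as np

def neutralize(scores, industries, market_beta):
    """Residualize factor scores against industry dummies and market beta.

    scores      : raw factor scores, shape (N,)
    industries  : integer industry id per stock, shape (N,)
    market_beta : estimated beta per stock, shape (N,)
    """
    dummies = (industries[:, None] == np.unique(industries)[None, :]).astype(float)
    X = np.column_stack([dummies, market_beta])         # exposure matrix
    coef, *_ = np.linalg.lstsq(X, scores, rcond=None)   # OLS fit
    return scores - X @ coef                            # neutralized residual

rng = np.random.default_rng(0)
resid = neutralize(rng.normal(size=8),
                   industries=np.array([0, 0, 1, 1, 2, 2, 2, 1]),
                   market_beta=rng.normal(1.0, 0.2, size=8))
print(resid.round(3))
```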
arXiv Detail & Related papers (2022-10-22T14:47:11Z)
- Quantitative Stock Investment by Routing Uncertainty-Aware Trading Experts: A Multi-Task Learning Approach [29.706515133374193]
We show that existing deep learning methods are sensitive to random seeds and network routers.
We propose a novel two-stage mixture-of-experts (MoE) framework for quantitative investment to mimic the efficient bottom-up trading strategy design workflow of successful trading firms.
AlphaMix significantly outperforms many state-of-the-art baselines in terms of four financial criteria.
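A mixture-of-experts layer of the kind described above routes each input to a weighted combination of expert predictions via a learned gate. A minimal numpy sketch of the routing step (the two-stage training and uncertainty weighting of the paper are not reproduced):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def moe_predict(x, expert_weights, gate_weights):
    """Route one feature vector through a soft mixture of linear experts.

    x              : input features, shape (D,)
    expert_weights : one linear expert per row, shape (K, D)
    gate_weights   : gating network parameters, shape (K, D)
    """
    gate = softmax(gate_weights @ x)        # routing probabilities over K experts
    expert_outputs = expert_weights @ x     # each expert's scalar prediction
    return gate @ expert_outputs            # gate-weighted ensemble prediction

rng = np.random.default_rng(1)
x = rng.normal(size=16)
print(moe_predict(x, rng.normal(size=(4, 16)), rng.normal(size=(4, 16))))
```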
arXiv Detail & Related papers (2022-06-07T08:58:00Z)
- Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics in Limit-Order Book Markets [84.90242084523565]
Traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics.
By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention.
Using predictive distributions to analyze errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives.
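A bilinear layer maps a (features x time) window with separate weight matrices on each mode, which lets such models mix feature and temporal interactions cheaply; the Bayesian treatment places distributions over those weights. A simplified numpy sketch of a deterministic forward pass with temporal attention (an illustrative assumption, not the paper's exact architecture):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def bilinear_temporal_attention(X, W1, W_att, W2):
    """Bilinear map of a (D, T) order-book window with attention over time.

    X     : features x time snapshot of the limit-order book, shape (D, T)
    W1    : feature-mode weights, shape (D_out, D)
    W_att : temporal attention weights, shape (T, T)
    W2    : time-mode weights, shape (T, T_out)
    """
    att = softmax(X @ W_att)      # per-feature attention over time steps, (D, T)
    return W1 @ (X * att) @ W2    # attended bilinear projection, (D_out, T_out)
```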
arXiv Detail & Related papers (2022-03-07T18:59:54Z)
- DeepScalper: A Risk-Aware Reinforcement Learning Framework to Capture Fleeting Intraday Trading Opportunities [33.28409845878758]
We propose DeepScalper, a deep reinforcement learning framework for intraday trading.
We show that DeepScalper significantly outperforms many state-of-the-art baselines in terms of four financial criteria.
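"Risk-aware" framing in intraday RL generally amounts to shaping the per-step reward so that realized profit is traded off against a volatility penalty. A short sketch under that assumption (the paper's exact reward, including its hindsight bonus, is not reproduced):

```python
import numpy as np

def risk_aware_reward(pnl_history, risk_aversion=0.1, window=30):
    """Last-step profit minus a rolling-volatility penalty (illustrative only)."""
    recent = np.asarray(pnl_history[-window:])
    return recent[-1] - risk_aversion * recent.std()
```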
arXiv Detail & Related papers (2021-12-15T15:24:02Z)
- FinRL-Meta: A Universe of Near-Real Market Environments for Data-Driven Deep Reinforcement Learning in Quantitative Finance [58.77314662664463]
FinRL-Meta builds a universe of market environments for data-driven financial reinforcement learning.
First, FinRL-Meta separates financial data processing from the design pipeline of DRL-based strategies.
Second, FinRL-Meta provides hundreds of market environments for various trading tasks.
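The separation described here, data processing on one side and a gym-style trading environment on the other, can be pictured with a minimal environment skeleton; the class and method names below are illustrative, not FinRL-Meta's API.

```python
import numpy as np

class TradingEnv:
    """Minimal gym-style environment over a pre-processed feature array."""

    def __init__(self, features, prices):
        self.features, self.prices = features, prices  # output of the data layer
        self.t, self.position = 0, 0.0

    def reset(self):
        self.t, self.position = 0, 0.0
        return self.features[self.t]

    def step(self, action):                # action: target position in [-1, 1]
        self.position = float(np.clip(action, -1.0, 1.0))
        self.t += 1
        reward = self.position * (self.prices[self.t] - self.prices[self.t - 1])
        done = self.t >= len(self.prices) - 1
        return self.features[self.t], reward, done, {}
```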
arXiv Detail & Related papers (2021-12-13T16:03:37Z)
- Deep Reinforcement Learning in Quantitative Algorithmic Trading: A Review [0.0]
Deep Reinforcement Learning agents have proved to be a force to be reckoned with in many games, such as Chess and Go.
This paper reviews the progress made so far with deep reinforcement learning in the subdomain of AI in finance.
We conclude that DRL in stock trading has shown huge applicability potential, rivalling professional traders under strong assumptions.
arXiv Detail & Related papers (2021-05-31T22:26:43Z)
- Conservative Q-Learning for Offline Reinforcement Learning [106.05582605650932]
We show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return.
We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees.
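The conservative penalty behind that lower-bound result is concrete: CQL adds to the ordinary Bellman error a term that pushes down Q-values on actions the policy might pick while pushing up Q-values on actions actually in the dataset. A minimal numpy sketch for a discrete action space (batch shapes are assumptions for illustration):

```python
import numpy as np

def cql_loss(q_values, actions, td_targets, alpha=1.0):
    """Conservative Q-learning loss for a batch with discrete actions.

    q_values   : Q(s, a) for every action, shape (B, A)
    actions    : dataset actions, shape (B,)
    td_targets : Bellman backup targets for the taken actions, shape (B,)
    """
    q_taken = q_values[np.arange(len(actions)), actions]
    bellman_error = np.mean((q_taken - td_targets) ** 2)
    # logsumexp over actions upper-bounds the value of any policy's action choice.
    q_max = q_values.max(axis=1, keepdims=True)
    logsumexp = np.log(np.exp(q_values - q_max).sum(axis=1)) + q_max[:, 0]
    conservative_gap = np.mean(logsumexp - q_taken)
    return bellman_error + alpha * conservative_gap
```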
arXiv Detail & Related papers (2020-06-08T17:53:42Z)
- Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States [71.54651874063865]
Portfolio management (PM) aims to achieve investment goals such as maximal profits or minimal risks.
In this paper, we propose SARL, a novel State-Augmented RL framework for PM.
Our framework aims to address two unique challenges in financial PM: (1) data heterogeneity -- the collected information for each asset is usually diverse, noisy and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary.
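State augmentation in this sense means concatenating an external movement prediction (for example, one derived from news) onto the raw price state before the RL policy sees it. A minimal sketch under that reading:

```python
import numpy as np

def augment_state(price_state, movement_prob):
    """Append a predicted up-move probability to the raw price-based state."""
    return np.concatenate([price_state, [movement_prob]])

state = augment_state(np.array([0.01, -0.02, 0.005]), movement_prob=0.63)
print(state)  # price features plus the prediction-derived component
```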
arXiv Detail & Related papers (2020-02-09T08:10:03Z)