Applications of Reinforcement Learning in Finance -- Trading with a Double Deep Q-Network
- URL: http://arxiv.org/abs/2206.14267v1
- Date: Tue, 28 Jun 2022 19:46:16 GMT
- Title: Applications of Reinforcement Learning in Finance -- Trading with a Double Deep Q-Network
- Authors: Frensi Zejnullahu, Maurice Moser, Joerg Osterrieder
- Abstract summary: This paper presents a Double Deep Q-Network algorithm for trading single assets, namely the E-mini S&P 500 continuous futures contract.
We build on a proven setup as the foundation for our environment and extend it in several ways.
The feature set of the trading agent is progressively expanded with additional assets such as commodities, resulting in four models.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper presents a Double Deep Q-Network (DDQN) algorithm for trading a single asset, namely the E-mini S&P 500 continuous futures contract. We build on a proven setup as the foundation for our environment and extend it in several ways. The feature set of the trading agent is progressively expanded with additional assets such as commodities, resulting in four models. We also account for environmental conditions, including trading costs and crisis periods. The trading agent is first trained on a specific time period, then tested on new data and compared against a long-and-hold strategy as the benchmark (market). We analyze the differences between the various models and their in-sample/out-of-sample performance with respect to the environment. The experimental results show that the trading agent behaves appropriately: it adjusts its policy to different circumstances, for example by using the neutral position more extensively when trading costs are present. Furthermore, the net asset value exceeded that of the benchmark, and the agent outperformed the market on the test set. We provide initial insights into the behavior of an agent in a financial domain using a DDQN algorithm. The results of this study can be used for further
development.
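The abstract describes a discrete trading agent that chooses among long, neutral, and short positions on a single futures contract, pays transaction costs, and is trained with a Double Deep Q-Network. The sketch below is a minimal, hypothetical illustration of that core update, not the authors' implementation: the layer sizes, discount factor, cost rate, and reward definition are assumptions introduced here for illustration only.

```python
# Minimal sketch of a DDQN update for a long/neutral/short trading agent.
# Hypothetical setup: layer sizes, cost rate, and reward shape are assumptions,
# not the configuration used in the paper.
import torch
import torch.nn as nn

ACTIONS = 3        # 0 = short, 1 = neutral, 2 = long
GAMMA = 0.99       # discount factor (assumed)
COST_RATE = 1e-4   # proportional transaction cost per position change (assumed)


class QNet(nn.Module):
    """Maps a feature vector (e.g. returns, indicators, current position) to Q-values."""

    def __init__(self, n_features: int, n_actions: int = ACTIONS):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, n_actions),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)


def reward(action, prev_action, log_return):
    """Position (-1/0/+1) times the asset return, minus a cost when the position changes."""
    position = action.float() - 1.0           # map {0,1,2} -> {-1,0,+1}
    prev_position = prev_action.float() - 1.0
    turnover = (position - prev_position).abs()
    return position * log_return - COST_RATE * turnover


def ddqn_loss(online, target, batch):
    """Double-DQN target: the online net selects the next action, the target net evaluates it."""
    s, a, r, s_next, done = batch
    q_sa = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        a_next = online(s_next).argmax(dim=1, keepdim=True)   # action selection
        q_next = target(s_next).gather(1, a_next).squeeze(1)  # action evaluation
        y = r + GAMMA * (1.0 - done) * q_next
    return nn.functional.smooth_l1_loss(q_sa, y)


if __name__ == "__main__":
    torch.manual_seed(0)
    n_features, batch_size = 8, 32
    online, target = QNet(n_features), QNet(n_features)
    target.load_state_dict(online.state_dict())
    opt = torch.optim.Adam(online.parameters(), lr=1e-4)

    # A fake transition batch standing in for data sampled from a replay buffer.
    s = torch.randn(batch_size, n_features)
    a = torch.randint(0, ACTIONS, (batch_size,))
    prev_a = torch.randint(0, ACTIONS, (batch_size,))
    r = reward(a, prev_a, 0.01 * torch.randn(batch_size))
    s_next = torch.randn(batch_size, n_features)
    done = torch.zeros(batch_size)

    loss = ddqn_loss(online, target, (s, a, r, s_next, done))
    opt.zero_grad(); loss.backward(); opt.step()
    print(f"DDQN loss: {loss.item():.6f}")
```

In a full training loop the transitions would come from a replay buffer built from historical market data, and the target network would be synchronized with the online network periodically; here a single fake batch stands in for that loop.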
Related papers
- When AI Meets Finance (StockAgent): Large Language Model-based Stock Trading in Simulated Real-world Environments [55.19252983108372]
We have developed a multi-agent AI system called StockAgent, driven by LLMs.
The StockAgent allows users to evaluate the impact of different external factors on investor trading.
It avoids the test set leakage issue present in existing trading simulation systems based on AI Agents.
arXiv Detail & Related papers (2024-07-15T06:49:30Z)
- Optimizing Portfolio Management and Risk Assessment in Digital Assets Using Deep Learning for Predictive Analysis [5.015409508372732]
This paper introduces the DQN algorithm into asset management portfolios in a novel and straightforward way.
The performance greatly exceeds the benchmark, which fully proves the effectiveness of the DRL algorithm in portfolio management.
Since different assets are trained separately as environments, there may be a phenomenon of Q value drift among different assets.
arXiv Detail & Related papers (2024-02-25T05:23:57Z)
- Cryptocurrency Portfolio Optimization by Neural Networks [81.20955733184398]
This paper proposes an effective algorithm based on neural networks to take advantage of these investment products.
A deep neural network, which outputs the allocation weight of each asset at a time interval, is trained to maximize the Sharpe ratio.
A novel loss term is proposed to regulate the network's bias towards a specific asset, encouraging it to learn an allocation strategy that stays close to a minimum-variance strategy (a minimal sketch of such an objective appears after this list).
arXiv Detail & Related papers (2023-10-02T12:33:28Z)
- Diffusion Variational Autoencoder for Tackling Stochasticity in Multi-Step Regression Stock Price Prediction [54.21695754082441]
Multi-step stock price prediction over a long-term horizon is crucial for forecasting its volatility.
Current solutions to multi-step stock price prediction are mostly designed for single-step, classification-based predictions.
We combine a deep hierarchical variational-autoencoder (VAE) and diffusion probabilistic techniques to do seq2seq stock prediction.
Our model is shown to outperform state-of-the-art solutions in terms of its prediction accuracy and variance.
arXiv Detail & Related papers (2023-08-18T16:21:15Z)
- HireVAE: An Online and Adaptive Factor Model Based on Hierarchical and Regime-Switch VAE [113.47287249524008]
It is still an open question to build a factor model that can conduct stock prediction in an online and adaptive setting.
We propose the first deep learning based online and adaptive factor model, HireVAE, at the core of which is a hierarchical latent space that embeds the relationship between the market situation and stock-wise latent factors.
Across four commonly used real stock market benchmarks, the proposed HireVAE demonstrates superior performance in terms of active returns over previous methods.
arXiv Detail & Related papers (2023-06-05T12:58:13Z)
- Joint Latent Topic Discovery and Expectation Modeling for Financial Markets [45.758436505779386]
We present a groundbreaking framework for financial market analysis.
This approach is the first to jointly model investor expectations and automatically mine latent stock relationships.
Our model consistently achieves an annual return exceeding 10%.
arXiv Detail & Related papers (2023-06-01T01:36:51Z)
- Quantitative Stock Investment by Routing Uncertainty-Aware Trading Experts: A Multi-Task Learning Approach [29.706515133374193]
We show that existing deep learning methods are sensitive to random seeds and network routers.
We propose a novel two-stage mixture-of-experts (MoE) framework for quantitative investment to mimic the efficient bottom-up trading strategy design workflow of successful trading firms.
AlphaMix significantly outperforms many state-of-the-art baselines in terms of four financial criteria.
arXiv Detail & Related papers (2022-06-07T08:58:00Z)
- High-Dimensional Stock Portfolio Trading with Deep Reinforcement Learning [0.0]
The algorithm is capable of trading high-dimensional portfolios from cross-sectional datasets of any size.
Environments are set up sequentially by sampling one asset per environment; an investment is rewarded with that asset's return, while holding cash is rewarded with the average return of the asset set.
arXiv Detail & Related papers (2021-12-09T08:30:45Z)
- Deep Reinforcement Learning for Active High Frequency Trading [1.6874375111244329]
We introduce the first end-to-end Deep Reinforcement Learning (DRL) based framework for active high frequency trading in the stock market.
We train DRL agents to trade one unit of Intel Corporation stock by employing the Proximal Policy Optimization algorithm.
arXiv Detail & Related papers (2021-01-18T15:09:28Z)
- MoTiAC: Multi-Objective Actor-Critics for Real-Time Bidding [47.555870679348416]
We propose a Multi-objecTive Actor-Critics algorithm named MoTiAC for the problem of bidding optimization with various goals.
Unlike previous RL models, the proposed MoTiAC can simultaneously fulfill multi-objective tasks in complicated bidding environments.
arXiv Detail & Related papers (2020-02-18T07:16:39Z)
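The Cryptocurrency Portfolio Optimization entry above describes a network that outputs per-asset allocation weights and is trained to maximize the Sharpe ratio, with an extra loss term that discourages concentrating the allocation in a single asset. The snippet below is a minimal sketch of such an objective under assumed definitions (a softmax allocation head, per-interval asset returns, and a simple squared-weight concentration penalty); the penalty form and its weighting are assumptions, not that paper's actual loss.

```python
# Minimal sketch of a Sharpe-ratio training objective with a concentration penalty.
# The penalty form and its weight are assumptions for illustration only.
import torch


def sharpe_loss(weights: torch.Tensor, returns: torch.Tensor,
                penalty: float = 0.1, eps: float = 1e-8) -> torch.Tensor:
    """weights: (T, n_assets) allocation per interval (rows sum to 1).
    returns: (T, n_assets) realized asset returns per interval."""
    portfolio_ret = (weights * returns).sum(dim=1)              # (T,)
    sharpe = portfolio_ret.mean() / (portfolio_ret.std() + eps)
    # Penalize allocations concentrated on a single asset, nudging the learned
    # strategy toward a more diversified (minimum-variance-like) mix.
    concentration = (weights ** 2).sum(dim=1).mean()
    return -sharpe + penalty * concentration


if __name__ == "__main__":
    torch.manual_seed(0)
    T, n_assets = 250, 5
    logits = torch.zeros(T, n_assets, requires_grad=True)       # stand-in for a network head
    returns = 0.02 * torch.randn(T, n_assets)
    opt = torch.optim.Adam([logits], lr=0.05)
    for _ in range(100):
        weights = torch.softmax(logits, dim=1)
        loss = sharpe_loss(weights, returns)
        opt.zero_grad(); loss.backward(); opt.step()
    print(f"final loss: {loss.item():.4f}")
```

Minimizing the negative Sharpe ratio maximizes risk-adjusted return, while the concentration term pulls the weights toward a more diversified, lower-variance allocation, matching the stated intent of the loss term in that entry.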
This list is automatically generated from the titles and abstracts of the papers on this site.