Deep Reinforcement Learning Approach for Trading Automation in The Stock
Market
- URL: http://arxiv.org/abs/2208.07165v1
- Date: Tue, 5 Jul 2022 11:34:29 GMT
- Title: Deep Reinforcement Learning Approach for Trading Automation in The Stock
Market
- Authors: Taylan Kabbani, Ekrem Duman
- Abstract summary: This paper presents a model to generate profitable trades in the stock market using Deep Reinforcement Learning (DRL) algorithms.
We formulate the trading problem as a Partially Observed Markov Decision Process (POMDP) model, considering the constraints imposed by the stock market.
We then solve the formulated POMDP problem using the Twin Delayed Deep Deterministic Policy Gradient (TD3) algorithm, reporting a 2.68 Sharpe Ratio on an unseen data set.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep Reinforcement Learning (DRL) algorithms can scale to previously
intractable problems. The automation of profit generation in the stock market
is possible using DRL, by combining the financial assets price "prediction"
step and the "allocation" step of the portfolio in one unified process to
produce fully autonomous systems capable of interacting with their environment
to make optimal decisions through trial and error. This work presents a DRL
model to generate profitable trades in the stock market, effectively overcoming
the limitations of supervised learning approaches. We formulate the trading
problem as a Partially Observed Markov Decision Process (POMDP) model,
considering the constraints imposed by the stock market, such as liquidity and
transaction costs. We then solve the formulated POMDP problem using the Twin
Delayed Deep Deterministic Policy Gradient (TD3) algorithm, reporting a 2.68
Sharpe Ratio on an unseen data set (test data). From the standpoint of stock
market forecasting and intelligent decision-making, this paper demonstrates
the superiority of DRL over other types of machine learning in financial
markets and shows its credibility and advantages for strategic
decision-making.
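To make the pipeline concrete, the sketch below illustrates the general shape of such a system: a toy gym-style trading environment whose reward subtracts proportional transaction costs, trained with an off-the-shelf TD3 implementation, plus an annualized Sharpe ratio helper. This is a minimal illustrative sketch, not the authors' code; the environment design, the names (TradingEnv, cost_rate, sharpe_ratio), the synthetic price data, and the use of Gymnasium and Stable-Baselines3 are all assumptions introduced here for illustration.

```python
# Minimal illustrative sketch (not the authors' implementation).
# Assumed stack: gymnasium + stable-baselines3; all names below are hypothetical.
import numpy as np
import gymnasium as gym
from gymnasium import spaces
from stable_baselines3 import TD3


class TradingEnv(gym.Env):
    """Toy partially observed trading environment: the agent sees only a
    window of recent prices plus its current holdings, not the full state."""

    def __init__(self, prices: np.ndarray, window: int = 10, cost_rate: float = 1e-3):
        super().__init__()
        self.prices = prices          # shape (T, n_stocks)
        self.window = window
        self.cost_rate = cost_rate    # proportional transaction cost
        n_stocks = prices.shape[1]
        # Action: target portfolio weight per stock in [-1, 1] (short/long).
        self.action_space = spaces.Box(-1.0, 1.0, shape=(n_stocks,), dtype=np.float32)
        obs_dim = window * n_stocks + n_stocks
        self.observation_space = spaces.Box(-np.inf, np.inf, shape=(obs_dim,), dtype=np.float32)

    def _obs(self):
        hist = self.prices[self.t - self.window:self.t].flatten()
        return np.concatenate([hist, self.weights]).astype(np.float32)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.t = self.window
        self.weights = np.zeros(self.prices.shape[1], dtype=np.float32)
        return self._obs(), {}

    def step(self, action):
        new_weights = np.clip(action, -1.0, 1.0).astype(np.float32)
        returns = self.prices[self.t] / self.prices[self.t - 1] - 1.0
        cost = self.cost_rate * np.abs(new_weights - self.weights).sum()
        reward = float(new_weights @ returns - cost)   # one-step portfolio return net of costs
        self.weights = new_weights
        self.t += 1
        terminated = self.t >= len(self.prices)
        return self._obs(), reward, terminated, False, {}


def sharpe_ratio(step_returns, periods_per_year=252):
    """Annualized Sharpe ratio of a series of per-step returns (risk-free rate 0)."""
    r = np.asarray(step_returns)
    return np.sqrt(periods_per_year) * r.mean() / (r.std() + 1e-12)


if __name__ == "__main__":
    prices = np.cumprod(1.0 + 0.001 * np.random.randn(1000, 3), axis=0)  # synthetic price paths
    model = TD3("MlpPolicy", TradingEnv(prices), verbose=0)
    model.learn(total_timesteps=10_000)
```

In the paper's setting, the reported 2.68 Sharpe Ratio would correspond to evaluating the trained policy's out-of-sample returns with a statistic like the sharpe_ratio helper above.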
Related papers
- Making Large Language Models Better Planners with Reasoning-Decision Alignment [70.5381163219608]
We motivate an end-to-end decision-making model based on multimodality-augmented LLM.
We propose a reasoning-decision alignment constraint between the paired CoTs and planning results.
We dub our proposed large language planners with reasoning-decision alignment as RDA-Driver.
arXiv Detail & Related papers (2024-08-25T16:43:47Z)
- When AI Meets Finance (StockAgent): Large Language Model-based Stock Trading in Simulated Real-world Environments [55.19252983108372]
We have developed a multi-agent AI system called StockAgent, driven by LLMs.
The StockAgent allows users to evaluate the impact of different external factors on investor trading.
It avoids the test set leakage issue present in existing trading simulation systems based on AI Agents.
arXiv Detail & Related papers (2024-07-15T06:49:30Z)
- MacroHFT: Memory Augmented Context-aware Reinforcement Learning On High Frequency Trading [20.3106468936159]
Reinforcement learning (RL) has become another appealing approach for high-frequency trading (HFT).
We propose a novel Memory Augmented Context-aware Reinforcement learning method On HFT, a.k.a. MacroHFT.
We show that MacroHFT can achieve state-of-the-art performance on minute-level trading tasks.
arXiv Detail & Related papers (2024-06-20T17:48:24Z)
- MOT: A Mixture of Actors Reinforcement Learning Method by Optimal Transport for Algorithmic Trading [6.305870529904885]
We propose MOT, which designs multiple actors with disentangled representation learning to model the different patterns of the market.
Experimental results on real futures market data demonstrate that MOT exhibits excellent profit capabilities while balancing risks.
arXiv Detail & Related papers (2024-06-03T01:42:52Z)
- Optimizing Portfolio Management and Risk Assessment in Digital Assets Using Deep Learning for Predictive Analysis [5.015409508372732]
This paper introduces the DQN algorithm into asset management portfolios in a novel and straightforward way.
The performance greatly exceeds the benchmark, which fully proves the effectiveness of the DRL algorithm in portfolio management.
Since different assets are trained separately as environments, there may be a phenomenon of Q value drift among different assets.
arXiv Detail & Related papers (2024-02-25T05:23:57Z)
- Deep Hedging with Market Impact [0.20482269513546458]
We propose a novel general market impact dynamic hedging model based on Deep Reinforcement Learning (DRL).
The optimal policy obtained from the DRL model is analysed using several option hedging simulations and compared to commonly used procedures such as delta hedging.
arXiv Detail & Related papers (2024-02-20T19:08:24Z)
- Diffusion Variational Autoencoder for Tackling Stochasticity in Multi-Step Regression Stock Price Prediction [54.21695754082441]
Multi-step stock price prediction over a long-term horizon is crucial for forecasting its volatility.
Current solutions to multi-step stock price prediction are mostly designed for single-step, classification-based predictions.
We combine a deep hierarchical variational-autoencoder (VAE) and diffusion probabilistic techniques to do seq2seq stock prediction.
Our model is shown to outperform state-of-the-art solutions in terms of its prediction accuracy and variance.
arXiv Detail & Related papers (2023-08-18T16:21:15Z)
- IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making [33.23156884634365]
Reinforcement Learning technology has achieved remarkable success in quantitative trading.
Most existing RL-based market making methods focus on optimizing single-price level strategies.
We propose Imitative Market Maker (IMM), a novel RL framework leveraging both knowledge from suboptimal signal-based experts and direct policy interactions.
arXiv Detail & Related papers (2023-08-17T11:04:09Z)
- Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [51.3422222472898]
We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
arXiv Detail & Related papers (2023-04-15T19:22:37Z)
- Deep Q-Learning Market Makers in a Multi-Agent Simulated Stock Market [58.720142291102135]
This paper focuses precisely on the study of these market makers' strategies from an agent-based perspective.
We propose the application of Reinforcement Learning (RL) for the creation of intelligent market makers in simulated stock markets.
arXiv Detail & Related papers (2021-12-08T14:55:21Z)
- Reinforcement-Learning based Portfolio Management with Augmented Asset Movement Prediction States [71.54651874063865]
Portfolio management (PM) aims to achieve investment goals such as maximal profits or minimal risks.
In this paper, we propose SARL, a novel State-Augmented RL framework for PM.
Our framework aims to address two unique challenges in financial PM: (1) data heterogeneity -- the collected information for each asset is usually diverse, noisy and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary.
arXiv Detail & Related papers (2020-02-09T08:10:03Z)