Using Reinforcement Learning in the Algorithmic Trading Problem
- URL: http://arxiv.org/abs/2002.11523v1
- Date: Wed, 26 Feb 2020 14:30:18 GMT
- Title: Using Reinforcement Learning in the Algorithmic Trading Problem
- Authors: Evgeny Ponomarev, Ivan Oseledets, Andrzej Cichocki
- Abstract summary: Trading on the stock exchange is interpreted as a game with the Markov property, consisting of states, actions, and rewards.
A system for trading the fixed volume of a financial instrument is proposed and experimentally tested.
- Score: 18.21650781888097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The development of reinforcement learning methods has extended
their application to many areas, including algorithmic trading. In this paper,
trading on the stock exchange is interpreted as a game with the Markov property, consisting of
states, actions, and rewards. A system for trading the fixed volume of a
financial instrument is proposed and experimentally tested; this is based on
the asynchronous advantage actor-critic method with the use of several neural
network architectures. The application of recurrent layers in this approach is
investigated. The experiments were performed on real anonymized data. The best
architecture demonstrated a trading strategy for the RTS Index futures
(MOEX:RTSI) with a profitability of 66% per annum accounting for commission.
The project source code is available via the following link:
http://github.com/evgps/a3c_trading.
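As a sketch of the states-actions-rewards framing described in the abstract, the following minimal environment trades a fixed volume of one instrument. The price-window state, the flat/long action set, and the commission scheme are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

class FixedVolumeTradingEnv:
    """Minimal MDP for trading a fixed volume of a single instrument.

    State: a window of recent prices. Actions: 0 = flat, 1 = long one unit.
    Reward: change in position value, minus commission when the position
    changes. All names and parameters are illustrative, not from the paper.
    """

    def __init__(self, prices, window=10, commission=0.0003):
        self.prices = np.asarray(prices, dtype=float)
        self.window = window
        self.commission = commission
        self.reset()

    def reset(self):
        self.t = self.window
        self.position = 0  # start flat
        return self.prices[self.t - self.window:self.t]

    def step(self, action):
        price_change = self.prices[self.t] - self.prices[self.t - 1]
        reward = self.position * price_change
        if action != self.position:  # commission charged on position changes
            reward -= self.commission * self.prices[self.t]
        self.position = action
        self.t += 1
        done = self.t >= len(self.prices)
        state = self.prices[self.t - self.window:self.t]
        return state, reward, done
```

An A3C agent would run several copies of such an environment asynchronously, each worker feeding the price-window states to a shared actor-critic network.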
Related papers
- A Deep Reinforcement Learning Framework For Financial Portfolio Management [3.186092314772714]
The portfolio management problem is solved with deep learning techniques.
Three different architectures are used to realize this framework: a Convolutional Neural Network (CNN), a basic Recurrent Neural Network (RNN), and a Long Short-Term Memory (LSTM) network.
We successfully replicated the original paper, which achieved superior returns, but the approach does not perform as well when applied to the stock market.
arXiv Detail & Related papers (2024-09-03T20:11:04Z)
- AI-Powered Energy Algorithmic Trading: Integrating Hidden Markov Models with Neural Networks [0.0]
This study introduces a new approach that combines Hidden Markov Models (HMM) and neural networks, integrated with Black-Litterman portfolio optimization.
During the COVID period (2019-2022), this dual-model approach achieved an 83% return with a Sharpe ratio of 0.77.
arXiv Detail & Related papers (2024-07-29T10:26:52Z)
- When AI Meets Finance (StockAgent): Large Language Model-based Stock Trading in Simulated Real-world Environments [55.19252983108372]
We have developed a multi-agent AI system called StockAgent, driven by LLMs.
The StockAgent allows users to evaluate the impact of different external factors on investor trading.
It avoids the test set leakage issue present in existing trading simulation systems based on AI Agents.
arXiv Detail & Related papers (2024-07-15T06:49:30Z)
- Cryptocurrency Portfolio Optimization by Neural Networks [81.20955733184398]
This paper proposes an effective algorithm based on neural networks to take advantage of these investment products.
A deep neural network, which outputs the allocation weight of each asset at a time interval, is trained to maximize the Sharpe ratio.
A novel loss term is proposed to regulate the network's bias towards a specific asset, thus enforcing the network to learn an allocation strategy that is close to a minimum variance strategy.
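The objective just described (maximize Sharpe ratio, penalize concentration in one asset) might be sketched as follows. The penalty form, the `lam` coefficient, and the tensor shapes are assumptions for illustration, not the paper's exact loss:

```python
import numpy as np

def sharpe_loss(weights, returns, lam=0.1):
    """Negative Sharpe ratio of the portfolio plus a concentration penalty.

    weights: (n_assets,) allocation vector; returns: (T, n_assets) per-period
    asset returns. The variance-of-weights penalty (scaled by the assumed
    coefficient `lam`) discourages piling into a single asset, nudging the
    learned allocation toward a minimum-variance-like strategy.
    """
    port = returns @ weights                    # per-period portfolio returns
    sharpe = port.mean() / (port.std() + 1e-8)  # epsilon avoids divide-by-zero
    penalty = np.var(weights)                   # high when weight is concentrated
    return -sharpe + lam * penalty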
arXiv Detail & Related papers (2023-10-02T12:33:28Z)
- Neural Exploitation and Exploration of Contextual Bandits [51.25537742455235]
We study utilizing neural networks for the exploitation and exploration of contextual multi-armed bandits.
EE-Net is a novel neural-based exploitation and exploration strategy.
We show that EE-Net outperforms related linear and neural contextual bandit baselines on real-world datasets.
arXiv Detail & Related papers (2023-05-05T18:34:49Z)
- Asynchronous Deep Double Duelling Q-Learning for Trading-Signal Execution in Limit Order Book Markets [5.202524136984542]
We employ deep reinforcement learning to train an agent to translate a high-frequency trading signal into a trading strategy that places individual limit orders.
Based on the ABIDES limit order book simulator, we build an OpenAI Gym reinforcement learning environment.
We find that the RL agent learns an effective trading strategy for inventory management and order placement that outperforms a benchmark trading strategy with access to the same signal.
arXiv Detail & Related papers (2023-01-20T17:19:18Z)
- MCTG: Multi-frequency continuous-share trading algorithm with GARCH based on deep reinforcement learning [5.1727003187913665]
We propose an algorithm called the Multi-frequency Continuous-share Trading algorithm with GARCH (MCTG) to solve the problems above.
The latter, a reinforcement learning algorithm with a continuous action space, is used to trade stock shares.
Experiments in different industries of the Chinese stock market show that our method achieves higher profit compared with basic DRL methods and the benchmark model.
arXiv Detail & Related papers (2021-05-08T08:00:56Z)
- Evaluating data augmentation for financial time series classification [85.38479579398525]
We evaluate several augmentation methods applied to stocks datasets using two state-of-the-art deep learning models.
For a relatively small dataset, augmentation methods achieve up to a 400% improvement in risk-adjusted return performance.
For a larger stock dataset, augmentation methods achieve up to a 40% improvement.
arXiv Detail & Related papers (2020-10-28T17:53:57Z)
- Taking Over the Stock Market: Adversarial Perturbations Against Algorithmic Traders [47.32228513808444]
We present a realistic scenario in which an attacker influences algorithmic trading systems by using adversarial learning techniques.
We show that when added to the input stream, our perturbation can fool the trading algorithms at future unseen data points.
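A toy version of such an attack can be sketched against a simple moving-average crossover rule standing in for a real trading algorithm. The rule, the greedy search, and all parameters below are illustrative assumptions, not the paper's method:

```python
import numpy as np

def sma_signal(prices, fast=3, slow=5):
    """Toy trading rule: long (1) when the fast moving average is above
    the slow one, else flat (0). Stands in for the targeted algorithm."""
    return 1 if np.mean(prices[-fast:]) > np.mean(prices[-slow:]) else 0

def flip_signal_perturbation(prices, eps=0.05, steps=200):
    """Greedy search for a small additive perturbation (bounded by eps per
    point) that flips the rule's output: a simplified stand-in for the
    gradient-based adversarial perturbations described in the abstract."""
    target = 1 - sma_signal(prices)
    delta = np.zeros_like(prices, dtype=float)
    for _ in range(steps):
        if sma_signal(prices + delta) == target:
            return delta
        # nudge the most recent prices in the direction that moves the
        # fast average toward the target signal
        direction = 1.0 if target == 1 else -1.0
        delta[-3:] = np.clip(delta[-3:] + direction * 0.01, -eps, eps)
    return delta
```

The point of the sketch is the threat model: a perturbation small relative to the price level, added to the input stream, changes the algorithm's decision.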
arXiv Detail & Related papers (2020-10-19T06:28:05Z)
- ResNeSt: Split-Attention Networks [86.25490825631763]
We present a modularized architecture, which applies the channel-wise attention on different network branches to leverage their success in capturing cross-feature interactions and learning diverse representations.
Our model, named ResNeSt, outperforms EfficientNet in accuracy and latency trade-off on image classification.
arXiv Detail & Related papers (2020-04-19T20:40:31Z)
- An Application of Deep Reinforcement Learning to Algorithmic Trading [4.523089386111081]
This scientific research paper presents an innovative approach based on deep reinforcement learning (DRL) to solve the algorithmic trading problem.
It proposes a novel DRL trading strategy so as to maximise the resulting Sharpe ratio performance indicator on a broad range of stock markets.
The training of the resulting reinforcement learning (RL) agent is entirely based on the generation of artificial trajectories from a limited set of stock market historical data.
arXiv Detail & Related papers (2020-04-07T14:57:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.