Bitcoin Transaction Strategy Construction Based on Deep Reinforcement
Learning
- URL: http://arxiv.org/abs/2109.14789v1
- Date: Thu, 30 Sep 2021 01:24:03 GMT
- Title: Bitcoin Transaction Strategy Construction Based on Deep Reinforcement
Learning
- Authors: Fengrui Liu, Yang Li, Baitong Li, Jiaxin Li, Huiyang Xie
- Abstract summary: This study proposes a framework for automatic high-frequency bitcoin transactions based on a deep reinforcement learning algorithm, proximal policy optimization (PPO).
The proposed framework can earn excess returns through both the period of volatility and surge, which opens the door to research on building a single cryptocurrency trading strategy based on deep learning.
- Score: 8.431365407963629
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The emerging cryptocurrency market has lately received great attention for
asset allocation due to its unique decentralization. However, its volatility
and brand-new trading mode make it challenging to devise an acceptable
automatically generated strategy. This study proposes a framework for
automatic high-frequency bitcoin transactions based on a deep reinforcement
learning algorithm, proximal policy optimization (PPO). The framework
creatively regards the transaction process as actions, returns as rewards, and
prices as states to align with the idea of reinforcement learning. It compares
advanced machine learning-based models for static price prediction, including
the support vector machine (SVM), multi-layer perceptron (MLP), long
short-term memory (LSTM), temporal convolutional network (TCN), and
Transformer, by applying them to real-time bitcoin prices; the experimental
results demonstrate that the LSTM outperforms the others. An automatically
generated transaction strategy is then built on PPO, with the LSTM serving as
the basis of the policy. Extensive empirical studies validate that the
proposed method performs better than various common trading-strategy
benchmarks for a single financial product. The approach trades bitcoin in a
simulated environment with synchronous data and obtains a return 31.67% higher
than that of the best benchmark, improving on the benchmark by 12.75%. The
proposed framework can earn excess returns through both volatile and surging
periods, which opens the door to research on building single-cryptocurrency
trading strategies based on deep learning. Visualizations of the trading
process show how the model handles high-frequency transactions, provide
inspiration, and demonstrate that it can be extended to other financial
products.
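The abstract's MDP framing (prices as states, trades as actions, returns as rewards) can be sketched as a minimal trading environment. All class and method names here are illustrative assumptions, not the authors' code; the real framework feeds the price window through an LSTM before the PPO policy head.

```python
class BitcoinTradingEnv:
    """Minimal sketch of the paper's MDP framing: recent prices are the
    state, trade decisions are the actions, and realized returns are the
    rewards. Names are illustrative, not from the paper."""

    ACTIONS = ("hold", "buy", "sell")

    def __init__(self, prices, window=4):
        self.prices = prices   # historical price series
        self.window = window   # length of the price window fed to the policy
        self.t = window
        self.position = 0      # 0 = flat, 1 = long one unit

    def state(self):
        # State: the most recent window of prices.
        return tuple(self.prices[self.t - self.window:self.t])

    def step(self, action):
        price, prev = self.prices[self.t], self.prices[self.t - 1]
        if action == "buy":
            self.position = 1
        elif action == "sell":
            self.position = 0
        # Reward: return earned over this step while holding the position.
        reward = self.position * (price - prev) / prev
        self.t += 1
        done = self.t >= len(self.prices)
        return (self.state() if not done else None), reward, done
```

A PPO agent would repeatedly observe `state()`, sample an action from its policy, and update on the collected rewards; this sketch only captures the environment side of that loop.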
Related papers
- Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training [38.03693752287459]
Multi-agent Reinforcement Learning (MARL) relies on neural networks with numerous parameters in multi-agent scenarios.
This paper proposes the utilization of dynamic sparse training (DST), a technique proven effective in deep supervised learning tasks.
We introduce an innovative Multi-Agent Sparse Training (MAST) framework aimed at simultaneously enhancing the reliability of learning targets and the rationality of sample distribution.
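The drop-and-grow idea behind dynamic sparse training can be sketched as follows. This is a generic DST step, not MAST's specific mechanism; the function and parameter names are hypothetical.

```python
import random

def dst_update(weights, sparsity=0.5, regrow_frac=0.2, rng=None):
    """One drop-and-grow step in the spirit of dynamic sparse training:
    keep only the largest-magnitude weights, then reactivate a random
    subset of pruned connections so the sparse topology keeps adapting.
    Names and parameters are illustrative, not MAST's exact procedure."""
    rng = rng or random.Random(0)
    n_keep = int(len(weights) * (1 - sparsity))
    # Drop: rank by magnitude and zero out the smallest weights.
    ranked = sorted(range(len(weights)),
                    key=lambda i: abs(weights[i]), reverse=True)
    mask = [0.0] * len(weights)
    for i in ranked[:n_keep]:
        mask[i] = 1.0
    # Grow: re-enable a few pruned positions at random.
    pruned = ranked[n_keep:]
    for i in rng.sample(pruned, int(len(pruned) * regrow_frac)):
        mask[i] = 1.0
    return [w * m for w, m in zip(weights, mask)]
```

In real DST the regrown weights are reinitialized and the mask is applied per layer during training; here a single flat weight list keeps the sketch short.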
arXiv Detail & Related papers (2024-09-28T15:57:24Z) - Explainable Post hoc Portfolio Management Financial Policy of a Deep Reinforcement Learning agent [44.99833362998488]
We develop a novel Explainable Deep Reinforcement Learning (XDRL) approach for portfolio management.
By executing our methodology, we can interpret the agent's actions at prediction time to assess whether they follow the requisites of an investment policy.
arXiv Detail & Related papers (2024-07-19T17:40:39Z) - When AI Meets Finance (StockAgent): Large Language Model-based Stock Trading in Simulated Real-world Environments [55.19252983108372]
We have developed a multi-agent AI system called StockAgent, driven by LLMs.
The StockAgent allows users to evaluate the impact of different external factors on investor trading.
It avoids the test set leakage issue present in existing trading simulation systems based on AI Agents.
arXiv Detail & Related papers (2024-07-15T06:49:30Z) - A Deep Reinforcement Learning Approach for Trading Optimization in the Forex Market with Multi-Agent Asynchronous Distribution [0.0]
This research pioneers the application of a multi-agent (MA) RL framework with the state-of-the-art Asynchronous Advantage Actor-Critic (A3C) algorithm.
Two A3C MA models, one with locking and one without, were proposed and trained on single-currency and multi-currency data.
The results indicate that both models outperform the Proximal Policy Optimization model.
arXiv Detail & Related papers (2024-05-30T12:07:08Z) - A Framework for Empowering Reinforcement Learning Agents with Causal Analysis: Enhancing Automated Cryptocurrency Trading [1.4356611205757077]
This research focuses on developing a reinforcement learning (RL) framework to tackle the complexities of trading five prominent cryptocurrencies: Coin, Litecoin, Ripple, and Tether.
We present the CausalReinforceNet(CRN) framework, which integrates both Bayesian and dynamic Bayesian network techniques to empower the RL agent in trade decision-making.
We develop two agents using the framework based on distinct RL algorithms to analyse performance compared to the Buy-and-Hold benchmark strategy and a baseline RL model.
arXiv Detail & Related papers (2023-10-14T01:08:52Z) - Cryptocurrency Portfolio Optimization by Neural Networks [81.20955733184398]
This paper proposes an effective algorithm based on neural networks to take advantage of these investment products.
A deep neural network, which outputs the allocation weight of each asset at a time interval, is trained to maximize the Sharpe ratio.
A novel loss term is proposed to regulate the network's bias towards a specific asset, thus enforcing the network to learn an allocation strategy that is close to a minimum variance strategy.
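A loss of this general shape, a negated Sharpe ratio plus a term penalizing concentration in one asset, can be sketched in plain Python. The exact loss in the paper differs; the function name, penalty form, and coefficient below are assumptions for illustration.

```python
def sharpe_loss(weight_history, asset_returns, bias_penalty=0.1):
    """Sketch of a Sharpe-maximizing training objective with a penalty
    that discourages concentrating weight on one asset. Illustrative
    only; the paper's actual loss term is defined differently."""
    # Portfolio return per period: dot product of weights and returns.
    port = [sum(w * r for w, r in zip(ws, rs))
            for ws, rs in zip(weight_history, asset_returns)]
    mean = sum(port) / len(port)
    var = sum((p - mean) ** 2 for p in port) / len(port)
    sharpe = mean / (var ** 0.5 + 1e-8)
    # Concentration penalty: mean squared deviation from equal weights.
    n = len(weight_history[0])
    conc = sum(sum((w - 1.0 / n) ** 2 for w in ws)
               for ws in weight_history) / len(weight_history)
    # Training minimizes this loss, i.e. maximizes penalized Sharpe.
    return -sharpe + bias_penalty * conc
```

In practice the weights come from the network's softmax output at each interval and the loss is backpropagated through them; the sketch only shows the objective itself.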
arXiv Detail & Related papers (2023-10-02T12:33:28Z) - Commodities Trading through Deep Policy Gradient Methods [0.0]
It formulates the commodities trading problem as a continuous, discrete-time dynamical system.
Two policy algorithms, namely actor-based and actor-critic-based approaches, are introduced.
Backtesting on front-month natural gas futures demonstrates that DRL models increase the Sharpe ratio by 83% compared to the buy-and-hold baseline.
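The comparison metric used in such backtests, the annualized Sharpe ratio of a strategy's per-period returns versus those of buy-and-hold, can be sketched as below. The function names and the zero risk-free rate are assumptions.

```python
def annualized_sharpe(returns, periods_per_year=252):
    """Annualized Sharpe ratio of a per-period return series,
    assuming a zero risk-free rate (a common backtest convention)."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    return (mean / var ** 0.5) * periods_per_year ** 0.5

def buy_and_hold_returns(prices):
    """Per-period returns of simply holding the asset: the baseline
    a DRL strategy's Sharpe ratio is measured against."""
    return [(b - a) / a for a, b in zip(prices, prices[1:])]
```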
arXiv Detail & Related papers (2023-08-10T17:21:12Z) - Can Perturbations Help Reduce Investment Risks? Risk-Aware Stock
Recommendation via Split Variational Adversarial Training [44.7991257631318]
We propose a novel Split Variational Adversarial Training (SVAT) method for risk-aware stock recommendation.
By lowering the volatility of the stock recommendation model, SVAT effectively reduces investment risks and outperforms state-of-the-art baselines by more than 30% in terms of risk-adjusted profits.
arXiv Detail & Related papers (2023-04-20T12:10:12Z) - Can ChatGPT Forecast Stock Price Movements? Return Predictability and Large Language Models [51.3422222472898]
We document the capability of large language models (LLMs) like ChatGPT to predict stock price movements using news headlines.
We develop a theoretical model incorporating information capacity constraints, underreaction, limits-to-arbitrage, and LLMs.
arXiv Detail & Related papers (2023-04-15T19:22:37Z) - Deep Learning Statistical Arbitrage [0.0]
We propose a unifying conceptual framework for statistical arbitrage and develop a novel deep learning solution.
We construct arbitrage portfolios of similar assets as residual portfolios from conditional latent asset pricing factors.
We extract the time series signals of these residual portfolios with one of the most powerful machine learning time-series solutions.
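The residual-portfolio idea, regressing an asset's returns on pricing factors and trading what the factors do not explain, can be sketched for a single asset and factor. A real implementation uses many assets and conditional latent factors; the function name and single-factor OLS below are simplifying assumptions.

```python
def residual_portfolio(asset_returns, factor_returns):
    """Sketch of a residual portfolio: regress an asset's returns on a
    pricing factor and keep the residuals, which proxy the arbitrage
    component that a time-series model would then trade. Both series
    are assumed mean-zero for this one-factor illustration."""
    # OLS slope (beta) of the asset on the factor.
    num = sum(a * f for a, f in zip(asset_returns, factor_returns))
    den = sum(f * f for f in factor_returns)
    beta = num / den
    # Residual return series: the part the factor does not explain.
    return [a - beta * f for a, f in zip(asset_returns, factor_returns)]
```

An asset that is purely factor-driven yields residuals near zero, i.e. no arbitrage signal; deviations from zero are what the downstream time-series model tries to exploit.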
arXiv Detail & Related papers (2021-06-08T00:48:25Z) - GA-MSSR: Genetic Algorithm Maximizing Sharpe and Sterling Ratio Method
for RoboTrading [0.4568777157687961]
Foreign exchange is the largest financial market in the world.
Most of the literature uses historical price information and technical indicators for training.
To address the limitations of this approach, we designed trading-rule features derived from technical indicators and trading rules.
arXiv Detail & Related papers (2020-08-16T05:33:35Z)
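A trading-rule feature of the kind derived from a technical indicator can be sketched as a moving-average crossover signal. The rule, function name, and window sizes are illustrative assumptions, not the paper's actual feature set.

```python
def ma_crossover_feature(prices, fast=3, slow=5):
    """Sketch of a trading-rule feature built from a technical
    indicator: +1 when the fast simple moving average is above the
    slow one (the rule says go long), -1 otherwise. Assumes
    len(prices) >= slow; window sizes are illustrative."""
    def sma(n):
        return sum(prices[-n:]) / n
    return 1 if sma(fast) > sma(slow) else -1
```

Features like this encode the discrete output of a rule rather than the raw indicator value, which is the kind of input representation the paper argues for.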
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.