A Framework for Empowering Reinforcement Learning Agents with Causal
Analysis: Enhancing Automated Cryptocurrency Trading
- URL: http://arxiv.org/abs/2310.09462v1
- Date: Sat, 14 Oct 2023 01:08:52 GMT
- Title: A Framework for Empowering Reinforcement Learning Agents with Causal
Analysis: Enhancing Automated Cryptocurrency Trading
- Authors: Rasoul Amirzadeh, Dhananjay Thiruvady, Asef Nazari, Mong Shan Ee
- Abstract summary: This study aims to develop a reinforcement learning-based automated trading system for five popular cryptocurrencies.
We present CausalReinforceNet, a framework that serves as a decision support system.
We develop two agents using the CausalReinforceNet framework, each based on distinct reinforcement learning algorithms.
- Score: 1.5683566370372715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite advances in artificial intelligence-enhanced trading methods,
developing a profitable automated trading system remains challenging in the
rapidly evolving cryptocurrency market. This study aims to address these
challenges by developing a reinforcement learning-based automated trading
system for five popular altcoins~(cryptocurrencies other than Bitcoin): Binance
Coin, Ethereum, Litecoin, Ripple, and Tether. To this end, we present
CausalReinforceNet, a framework that serves as a decision support system. Designed
as the foundational architecture of the trading system, the CausalReinforceNet
framework enhances the capabilities of the reinforcement learning agent through
causal analysis. Within this framework, we use Bayesian networks in the feature
engineering process to identify the most relevant features with causal
relationships that influence cryptocurrency price movements. Additionally, we
incorporate probabilistic price direction signals from dynamic Bayesian
networks to enhance our reinforcement learning agent's decision-making. Due to
the high volatility of the cryptocurrency market, we design our framework to
adopt a conservative approach that limits sell and buy position sizes to manage
risk. We develop two agents using the CausalReinforceNet framework, each based
on distinct reinforcement learning algorithms. The results indicate that our
framework substantially surpasses the Buy-and-Hold benchmark strategy in
profitability. Additionally, both agents generated notable returns on
investment for Binance Coin and Ethereum.
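The conservative, signal-gated position sizing described in the abstract can be illustrated with a minimal sketch. The function names, the 0.5 probability threshold, and the 10% size cap below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical sketch: a probabilistic "price up" signal from a dynamic
# Bayesian network gates the RL agent's action, and trade sizes are capped
# to limit risk in a volatile market.

def sized_action(signal_up_prob, rl_action, max_fraction=0.1):
    """Combine an RL action (-1 sell, 0 hold, +1 buy) with a DBN signal.

    The trade size is capped at `max_fraction` of the portfolio, and the
    agent only trades when the probabilistic signal agrees with its action.
    """
    if rl_action == 1 and signal_up_prob > 0.5:
        return min(max_fraction, max_fraction * signal_up_prob)        # buy
    if rl_action == -1 and signal_up_prob < 0.5:
        return -min(max_fraction, max_fraction * (1 - signal_up_prob)) # sell
    return 0.0  # hold whenever the agent and the signal disagree

print(sized_action(0.8, 1))   # small capped buy
print(sized_action(0.4, 1))   # 0.0 -- signal disagrees, hold
```

Scaling the capped size by the signal's confidence is one plausible reading of "conservative"; the paper may combine the two components differently.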
Related papers
- Cryptocurrency Price Forecasting Using XGBoost Regressor and Technical Indicators [2.038893829552158]
This study introduces a machine learning approach to predict cryptocurrency prices.
We make use of important technical indicators such as Exponential Moving Average (EMA) and Moving Average Convergence Divergence (MACD) to train and feed the XGBoost regressor model.
We evaluate the model's performance through various simulations, showing promising results.
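The two indicators named in this entry are standard and easy to sketch without libraries. The 12/26 periods below are the conventional MACD defaults, assumed here; the resulting feature columns would then be fed to an XGBoost regressor as the entry describes:

```python
# Library-free computation of EMA and the MACD line as candidate features.

def ema(prices, period):
    """Exponential moving average with smoothing factor 2 / (period + 1)."""
    alpha = 2 / (period + 1)
    out = [prices[0]]
    for p in prices[1:]:
        out.append(alpha * p + (1 - alpha) * out[-1])
    return out

def macd(prices, fast=12, slow=26):
    """MACD line: fast EMA minus slow EMA (positive in an uptrend)."""
    fast_ema, slow_ema = ema(prices, fast), ema(prices, slow)
    return [f - s for f, s in zip(fast_ema, slow_ema)]

prices = [100, 102, 101, 105, 107, 106, 110]
print(macd(prices)[-1] > 0)  # rising series -> positive MACD
```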
arXiv Detail & Related papers (2024-07-16T14:41:27Z) - IT Strategic alignment in the decentralized finance (DeFi): CBDC and digital currencies [49.1574468325115]
Decentralized finance (DeFi) is a disruptive financial infrastructure.
This paper seeks to answer two main questions: 1) What are the common IT elements in DeFi?
And 2) How do these elements contribute to IT strategic alignment in DeFi?
arXiv Detail & Related papers (2024-05-17T10:19:20Z) - DAM: A Universal Dual Attention Mechanism for Multimodal Timeseries Cryptocurrency Trend Forecasting [3.8965079384103865]
This paper presents a novel Dual Attention Mechanism (DAM) for forecasting cryptocurrency trends using multimodal time-series data.
Our approach integrates critical cryptocurrency metrics with sentiment data from news and social media analyzed through CryptoBERT.
By combining elements of distributed systems, natural language processing, and financial forecasting, our method outperforms conventional models like LSTM and Transformer by up to 20% in prediction accuracy.
arXiv Detail & Related papers (2024-05-01T13:58:01Z) - Enhancing Trust and Privacy in Distributed Networks: A Comprehensive Survey on Blockchain-based Federated Learning [51.13534069758711]
Decentralized approaches like blockchain offer a compelling solution by implementing a consensus mechanism among multiple entities.
Federated Learning (FL) enables participants to collaboratively train models while safeguarding data privacy.
This paper investigates the synergy between blockchain's security features and FL's privacy-preserving model training capabilities.
arXiv Detail & Related papers (2024-03-28T07:08:26Z) - Cryptocurrency Portfolio Optimization by Neural Networks [81.20955733184398]
This paper proposes an effective algorithm based on neural networks to take advantage of these investment products.
A deep neural network, which outputs the allocation weight of each asset at a time interval, is trained to maximize the Sharpe ratio.
A novel loss term is proposed to regulate the network's bias towards a specific asset, thus enforcing the network to learn an allocation strategy that is close to a minimum variance strategy.
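The objective this entry describes can be sketched in a few lines. The exact form of the paper's loss is not given here, so the concentration penalty below (distance from equal weights, as a proxy for the minimum-variance-like regularizer) and the weight `lam` are assumptions:

```python
# Sketch of a Sharpe-maximizing training loss with a penalty that
# discourages concentrating the allocation in a single asset.

def sharpe(returns):
    """Sharpe ratio of a return series (risk-free rate assumed zero)."""
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / len(returns)
    return mean / (var ** 0.5 + 1e-9)

def loss(returns, weights, lam=0.1):
    """Negative Sharpe plus a bias penalty pushing weights toward
    equal-weight; minimizing this maximizes risk-adjusted return."""
    n = len(weights)
    concentration = sum((w - 1 / n) ** 2 for w in weights)
    return -sharpe(returns) + lam * concentration

r = [0.01, 0.02, -0.01, 0.03]
print(loss(r, [0.5, 0.5]) < loss(r, [1.0, 0.0]))  # diversified is cheaper
```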
arXiv Detail & Related papers (2023-10-02T12:33:28Z) - Causal Feature Engineering of Price Directions of Cryptocurrencies using Dynamic Bayesian Networks [1.4356611205757077]
Despite their growing popularity, cryptocurrencies remain a high-risk investment due to their price volatility and uncertainty.
This paper proposes a dynamic Bayesian network (DBN) approach that predicts the price direction of five popular cryptocurrencies other than Bitcoin for the next trading day.
arXiv Detail & Related papers (2023-06-13T22:07:51Z) - Uniswap Liquidity Provision: An Online Learning Approach [49.145538162253594]
Decentralized Exchanges (DEXs) are new types of marketplaces leveraging technology.
One such DEX, Uniswap v3, allows liquidity providers to allocate funds more efficiently by specifying an active price interval for their funds.
This introduces the problem of finding an optimal strategy for choosing price intervals.
We formalize this problem as an online learning problem with non-stochastic rewards.
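A classic algorithm for online learning with non-stochastic rewards is Hedge (multiplicative weights), which the toy below instantiates over a finite grid of candidate price intervals. The interval grid, reward values, and learning rate are assumptions for illustration; the paper's actual algorithm may differ:

```python
import math

def hedge_update(weights, rewards, eta=0.5):
    """Multiplicative-weights step: w_i *= exp(eta * reward_i), renormalized.
    Rewards are assumed scaled to [0, 1] (e.g. fees earned per round)."""
    new = [w * math.exp(eta * r) for w, r in zip(weights, rewards)]
    total = sum(new)
    return [w / total for w in new]

weights = [1 / 3] * 3              # three candidate price intervals
for _ in range(20):
    rewards = [0.9, 0.1, 0.1]      # interval 0 consistently earns the most
    weights = hedge_update(weights, rewards)
print(max(range(3), key=lambda i: weights[i]))  # -> 0
```

Against an adversarial reward sequence, this scheme guarantees regret sublinear in the number of rounds relative to the best fixed interval in hindsight.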
arXiv Detail & Related papers (2023-02-01T17:21:40Z) - Profitable Strategy Design by Using Deep Reinforcement Learning for
Trades on Cryptocurrency Markets [2.741266294612776]
We have applied Proximal Policy Optimization, Soft Actor-Critic, and Generative Adversarial Imitation Learning to the strategy design problem for three cryptocurrency markets.
Our test results on unseen data show great potential for this approach in helping investors with an expert system to exploit the market and gain profit.
arXiv Detail & Related papers (2022-01-15T18:45:03Z) - Bitcoin Transaction Strategy Construction Based on Deep Reinforcement
Learning [8.431365407963629]
This study proposes a framework for automatic high-frequency bitcoin transactions based on a deep reinforcement learning algorithm, proximal policy optimization (PPO).
The proposed framework can earn excess returns through both the period of volatility and surge, which opens the door to research on building a single cryptocurrency trading strategy based on deep learning.
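At the core of PPO is the clipped surrogate objective, sketched below in single-sample form for clarity (the trading-specific state and reward design from this entry are not shown):

```python
# Standard PPO clipped surrogate loss. `ratio` is pi_new(a|s) / pi_old(a|s);
# clipping removes the incentive to move the policy far from the old one.

def ppo_clip_loss(ratio, advantage, eps=0.2):
    """Return -min(r * A, clip(r, 1-eps, 1+eps) * A) for gradient descent."""
    clipped = max(1 - eps, min(1 + eps, ratio))
    return -min(ratio * advantage, clipped * advantage)

# The clip stops the update once the ratio leaves [0.8, 1.2]:
print(ppo_clip_loss(1.5, 1.0))   # gain capped at (1 + eps) * A
print(ppo_clip_loss(1.1, 1.0))   # inside the clip range, unchanged
```

The inner `min` makes the bound pessimistic: a negative advantage is never clipped in the direction that would shrink the penalty.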
arXiv Detail & Related papers (2021-09-30T01:24:03Z) - Taking Over the Stock Market: Adversarial Perturbations Against
Algorithmic Traders [47.32228513808444]
We present a realistic scenario in which an attacker influences algorithmic trading systems by using adversarial learning techniques.
We show that when added to the input stream, our perturbation can fool the trading algorithms at future unseen data points.
arXiv Detail & Related papers (2020-10-19T06:28:05Z) - Adversarial Attacks on Machine Learning Systems for High-Frequency
Trading [55.30403936506338]
We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
arXiv Detail & Related papers (2020-02-21T22:04:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.