A Framework for Empowering Reinforcement Learning Agents with Causal
Analysis: Enhancing Automated Cryptocurrency Trading
- URL: http://arxiv.org/abs/2310.09462v1
- Date: Sat, 14 Oct 2023 01:08:52 GMT
- Title: A Framework for Empowering Reinforcement Learning Agents with Causal
Analysis: Enhancing Automated Cryptocurrency Trading
- Authors: Rasoul Amirzadeh, Dhananjay Thiruvady, Asef Nazari, Mong Shan Ee
- Abstract summary: This study aims to develop a reinforcement learning-based automated trading system for five popular cryptocurrencies.
We present CausalReinforceNet, a framework that serves as a decision support system.
We develop two agents using the CausalReinforceNet framework, each based on distinct reinforcement learning algorithms.
- Score: 1.5683566370372715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite advances in artificial intelligence-enhanced trading methods,
developing a profitable automated trading system remains challenging in the
rapidly evolving cryptocurrency market. This study aims to address these
challenges by developing a reinforcement learning-based automated trading
system for five popular altcoins~(cryptocurrencies other than Bitcoin): Binance
Coin, Ethereum, Litecoin, Ripple, and Tether. To this end, we present
CausalReinforceNet, a framework that serves as a decision support system. Designed
as the foundational architecture of the trading system, the CausalReinforceNet
framework enhances the capabilities of the reinforcement learning agent through
causal analysis. Within this framework, we use Bayesian networks in the feature
engineering process to identify the most relevant features with causal
relationships that influence cryptocurrency price movements. Additionally, we
incorporate probabilistic price direction signals from dynamic Bayesian
networks to enhance our reinforcement learning agent's decision-making. Due to
the high volatility of the cryptocurrency market, we design our framework to
adopt a conservative approach that limits sell and buy position sizes to manage
risk. We develop two agents using the CausalReinforceNet framework, each based
on distinct reinforcement learning algorithms. The results indicate that our
framework substantially surpasses the Buy-and-Hold benchmark strategy in
profitability. Additionally, both agents generated notable returns on
investment for Binance Coin and Ethereum.
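The conservative, signal-gated decision process the abstract describes can be sketched roughly as follows. This is a minimal illustration under assumptions of our own: the 0.5 probability threshold, the position-size cap, and the function name are hypothetical, not the paper's actual implementation.

```python
# Illustrative sketch: gate an RL agent's proposed action with a
# probabilistic price-direction signal (as a dynamic Bayesian network
# might supply) and cap the position size to manage risk.
# Thresholds and names are assumptions, not the paper's implementation.

def gate_action(rl_action: str, p_up: float,
                max_fraction: float = 0.25) -> tuple[str, float]:
    """Return a (possibly overridden) action and a conservative
    position size, given the agent's action and the estimated
    probability p_up that the price moves up."""
    if rl_action == "buy" and p_up < 0.5:
        return "hold", 0.0          # signal disagrees with buying: stay out
    if rl_action == "sell" and p_up > 0.5:
        return "hold", 0.0          # signal disagrees with selling: stay out
    if rl_action == "hold":
        return "hold", 0.0
    # Scale the position by signal confidence, never exceeding the cap.
    confidence = abs(p_up - 0.5) * 2.0   # 0 (unsure) .. 1 (certain)
    return rl_action, min(max_fraction, max_fraction * confidence)
```

A buy proposal with a weak upward signal is downgraded to hold, while a confident signal permits a position up to the cap.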
Related papers
- Building crypto portfolios with agentic AI [46.348283638884425]
The rapid growth of crypto markets has opened new opportunities for investors, but at the same time exposed them to high volatility. This paper presents a practical application of a multi-agent system designed to autonomously construct and evaluate crypto-asset allocations.
arXiv Detail & Related papers (2025-07-11T18:03:51Z) - Ring-lite: Scalable Reasoning via C3PO-Stabilized Reinforcement Learning for LLMs [51.21041884010009]
Ring-lite is a Mixture-of-Experts (MoE)-based large language model optimized via reinforcement learning (RL). Our approach matches the performance of state-of-the-art (SOTA) small-scale reasoning models on challenging benchmarks.
arXiv Detail & Related papers (2025-06-17T17:12:34Z) - From Debate to Equilibrium: Belief-Driven Multi-Agent LLM Reasoning via Bayesian Nash Equilibrium [52.28048367430481]
Multi-agent frameworks can boost the reasoning power of large language models (LLMs), but they typically incur heavy computational costs and lack convergence guarantees. We recast multi-LLM coordination as an incomplete-information game and seek a Bayesian Nash equilibrium (BNE). We introduce Efficient Coordination via Nash Equilibrium (ECON), a hierarchical reinforcement-learning paradigm that marries distributed reasoning with centralized final output.
arXiv Detail & Related papers (2025-06-09T23:49:14Z) - Reinforcement Learning Pair Trading: A Dynamic Scaling approach [3.4698840925433774]
Trading cryptocurrency is difficult due to the inherent volatility of the crypto market.
In this work, we combine Reinforcement Learning (RL) with pair trading.
Our results show that RL can significantly outperform manual and traditional pair trading techniques when applied to volatile markets such as cryptocurrencies.
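For context on what the RL agent in this related paper competes against, a classical pair-trading baseline trades the z-score of the spread between two correlated assets. The thresholds and function name below are illustrative assumptions, not details from the paper.

```python
import numpy as np

def zscore_signal(spread: np.ndarray, entry: float = 2.0,
                  exit_: float = 0.5) -> str:
    """Classical pair-trading rule: trade when the spread between two
    correlated assets is statistically stretched, unwind when it
    reverts toward its mean. (Illustrative baseline, not the paper's
    RL policy; thresholds are assumptions.)"""
    z = (spread[-1] - spread.mean()) / spread.std()
    if z > entry:
        return "short_spread"   # spread too wide: short A, long B
    if z < -entry:
        return "long_spread"    # spread too narrow: long A, short B
    if abs(z) < exit_:
        return "close"          # spread has reverted: unwind positions
    return "hold"
```

An RL agent replaces these fixed thresholds with a learned, state-dependent policy, which is what the paper credits for the outperformance in volatile markets.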
arXiv Detail & Related papers (2024-07-23T00:16:27Z) - A Deep Reinforcement Learning Approach for Trading Optimization in the Forex Market with Multi-Agent Asynchronous Distribution [0.0]
This research pioneers the application of a multi-agent (MA) RL framework with the state-of-the-art Asynchronous Advantage Actor-Critic (A3C) algorithm.
Two A3C MA models, one with locking and one without, were proposed and trained on single-currency and multi-currency data.
The results indicate that both models outperform the Proximal Policy Optimization model.
arXiv Detail & Related papers (2024-05-30T12:07:08Z) - ArCHer: Training Language Model Agents via Hierarchical Multi-Turn RL [80.10358123795946]
We develop a framework for building multi-turn RL algorithms for fine-tuning large language models.
Our framework adopts a hierarchical RL approach and runs two RL algorithms in parallel.
Empirically, we find that ArCHer significantly improves efficiency and performance on agent tasks.
arXiv Detail & Related papers (2024-02-29T18:45:56Z) - Combining Transformer based Deep Reinforcement Learning with
Black-Litterman Model for Portfolio Optimization [0.0]
As a model-free algorithm, deep reinforcement learning (DRL) agent learns and makes decisions by interacting with the environment in an unsupervised way.
We propose a hybrid portfolio optimization model combining the DRL agent and the Black-Litterman (BL) model.
Our DRL agent significantly outperforms various comparison portfolio choice strategies and alternative DRL frameworks by at least 42% in terms of accumulated return.
arXiv Detail & Related papers (2024-02-23T16:01:37Z) - Modelling crypto markets by multi-agent reinforcement learning [0.0]
This study introduces a multi-agent reinforcement learning (MARL) model simulating crypto markets.
It is calibrated to the daily closing prices of 153 cryptocurrencies that were continuously traded between 2018 and 2022.
arXiv Detail & Related papers (2024-02-16T16:28:58Z) - Cryptocurrency Portfolio Optimization by Neural Networks [81.20955733184398]
This paper proposes an effective algorithm based on neural networks to take advantage of these investment products.
A deep neural network, which outputs the allocation weight of each asset at a time interval, is trained to maximize the Sharpe ratio.
A novel loss term is proposed to regulate the network's bias towards a specific asset, thus enforcing the network to learn an allocation strategy that is close to a minimum variance strategy.
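A loss of the kind this summary describes, a negative Sharpe ratio plus a penalty discouraging concentration in any single asset, can be sketched as below. The penalty form (squared distance from equal weights) and the weight `lam` are assumptions for illustration; the paper's exact regularizer may differ.

```python
import numpy as np

def sharpe_loss(weights: np.ndarray, returns: np.ndarray,
                lam: float = 0.1) -> float:
    """Negative Sharpe ratio of the portfolio plus a concentration
    penalty. `returns` has shape (periods, assets); `weights` sums
    to 1. Penalty form and `lam` are illustrative assumptions."""
    port = returns @ weights                        # per-period portfolio returns
    sharpe = port.mean() / (port.std() + 1e-8)      # small epsilon avoids /0
    equal = np.full_like(weights, 1.0 / len(weights))
    concentration = np.sum((weights - equal) ** 2)  # pulls toward diversification
    return -sharpe + lam * concentration
```

Minimizing this loss trades off risk-adjusted return against diversification: with identical assets, the equal-weight portfolio attains the strictly lower loss.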
arXiv Detail & Related papers (2023-10-02T12:33:28Z) - Contrastive UCB: Provably Efficient Contrastive Self-Supervised Learning in Online Reinforcement Learning [92.18524491615548]
Contrastive self-supervised learning has been successfully integrated into the practice of (deep) reinforcement learning (RL).
We study how RL can be empowered by contrastive learning in a class of Markov decision processes (MDPs) and Markov games (MGs) with low-rank transitions.
Under the online setting, we propose novel upper confidence bound (UCB)-type algorithms that incorporate such a contrastive loss with online RL algorithms for MDPs or MGs.
arXiv Detail & Related papers (2022-07-29T17:29:08Z) - Deep Q-Learning Market Makers in a Multi-Agent Simulated Stock Market [58.720142291102135]
This paper focuses precisely on the study of these market makers' strategies from an agent-based perspective.
We propose the application of Reinforcement Learning (RL) for the creation of intelligent market makers in simulated stock markets.
arXiv Detail & Related papers (2021-12-08T14:55:21Z) - Bitcoin Transaction Strategy Construction Based on Deep Reinforcement
Learning [8.431365407963629]
This study proposes a framework for automatic high-frequency bitcoin transactions based on a deep reinforcement learning algorithm, proximal policy optimization (PPO).
The proposed framework can earn excess returns through both the period of volatility and surge, which opens the door to research on building a single cryptocurrency trading strategy based on deep learning.
arXiv Detail & Related papers (2021-09-30T01:24:03Z) - Adaptive Stochastic ADMM for Decentralized Reinforcement Learning in
Edge Industrial IoT [106.83952081124195]
Reinforcement learning (RL) has been widely investigated and shown to be a promising solution for decision-making and optimal control processes.
We propose an adaptive ADMM (asI-ADMM) algorithm and apply it to decentralized RL with edge-computing-empowered IIoT networks.
Experiment results show that our proposed algorithms outperform the state of the art in terms of communication costs and scalability, and can well adapt to complex IoT environments.
arXiv Detail & Related papers (2021-06-30T16:49:07Z) - Distributed Reinforcement Learning for Cooperative Multi-Robot Object
Manipulation [53.262360083572005]
We consider solving a cooperative multi-robot object manipulation task using reinforcement learning (RL).
We propose two distributed multi-agent RL approaches: distributed approximate RL (DA-RL) and game-theoretic RL (GT-RL).
Although we focus on a small system of two agents in this paper, both DA-RL and GT-RL apply to general multi-agent systems, and are expected to scale well to large systems.
arXiv Detail & Related papers (2020-03-21T00:43:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.