Learning the Market: Sentiment-Based Ensemble Trading Agents
- URL: http://arxiv.org/abs/2402.01441v1
- Date: Fri, 2 Feb 2024 14:34:22 GMT
- Title: Learning the Market: Sentiment-Based Ensemble Trading Agents
- Authors: Andrew Ye, James Xu, Yi Wang, Yifan Yu, Daniel Yan, Ryan Chen, Bosheng Dong, Vipin Chaudhary, Shuai Xu
- Abstract summary: We propose the integration of sentiment analysis and deep reinforcement learning ensemble algorithms for stock trading.
We show that our approach results in a strategy that is profitable, robust, and risk-minimal.
- Score: 5.193582840789407
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose the integration of sentiment analysis and deep reinforcement
learning ensemble algorithms for stock trading, and design a strategy capable
of dynamically altering its employed agent given concurrent market sentiment.
In particular, we create a simple-yet-effective method for extracting news
sentiment and combine this with general improvements upon existing works,
resulting in automated trading agents that effectively consider both
qualitative market factors and quantitative stock data. We show that our
approach results in a strategy that is profitable, robust, and risk-minimal --
outperforming the traditional ensemble strategy as well as single agent
algorithms and market metrics. Our findings indicate that the conventional
practice of switching ensemble agents at a fixed interval of months is
sub-optimal, and that a dynamic sentiment-based framework unlocks substantial
additional performance within these agents. Furthermore, as we have designed
our algorithm with simplicity and efficiency in mind, we hypothesize that the
transition of our method from historical evaluation towards real-time trading
with live data should be relatively simple.
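The abstract's core mechanism — dynamically selecting the active trading agent from the ensemble according to concurrent news sentiment — can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the keyword lexicon, the regime thresholds, and the mapping of regimes to agent labels (A2C, PPO, DDPG are typical members of such ensembles, but their assignment here is assumed) are all placeholders for the paper's actual components.

```python
# Hedged sketch of a sentiment-gated ensemble switcher (illustrative only).
# Real systems would use a trained sentiment model and trained DRL agents;
# here a toy lexicon score and stub agent labels show the control flow.

POSITIVE = {"rally", "gain", "beat", "surge", "growth"}
NEGATIVE = {"loss", "drop", "miss", "crash", "fear"}

def sentiment_score(headlines):
    """Crude lexicon score in [-1, 1]: (pos - neg) / matched words."""
    pos = neg = 0
    for headline in headlines:
        for word in headline.lower().split():
            if word in POSITIVE:
                pos += 1
            elif word in NEGATIVE:
                neg += 1
    total = pos + neg
    return 0.0 if total == 0 else (pos - neg) / total

def select_agent(score, bull_threshold=0.2, bear_threshold=-0.2):
    """Map the sentiment regime to an agent label (thresholds assumed)."""
    if score >= bull_threshold:
        return "A2C"   # hypothetically the best performer in bullish regimes
    if score <= bear_threshold:
        return "PPO"   # hypothetically the best performer in bearish regimes
    return "DDPG"      # neutral regime fallback

headlines = ["Stocks rally on earnings beat", "Tech growth continues"]
score = sentiment_score(headlines)
agent = select_agent(score)
```

The point of the design is that the switching signal is qualitative (news flow) rather than a fixed calendar schedule, which is the conventional ensemble practice the paper argues against.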
Related papers
- Statistical arbitrage in multi-pair trading strategy based on graph clustering algorithms in US equities market [0.0]
The study develops an effective statistical arbitrage strategy based on a novel framework of graph clustering algorithms.
It aims to provide an integrated approach to optimal signal detection and risk management.
arXiv Detail & Related papers (2024-06-15T17:25:32Z)
- IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making [33.23156884634365]
Reinforcement Learning technology has achieved remarkable success in quantitative trading.
Most existing RL-based market making methods focus on optimizing single-price level strategies.
We propose Imitative Market Maker (IMM), a novel RL framework leveraging both knowledge from suboptimal signal-based experts and direct policy interactions.
arXiv Detail & Related papers (2023-08-17T11:04:09Z)
- An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading [16.78239969166596]
We propose an ensemble method to improve the generalization performance of trading strategies trained by deep reinforcement learning algorithms.
Our proposed ensemble method improves the out-of-sample performance compared with the benchmarks of a deep reinforcement learning strategy and a passive investment strategy.
arXiv Detail & Related papers (2023-07-27T04:00:09Z)
- Learning Multi-Agent Intention-Aware Communication for Optimal Multi-Order Execution in Finance [96.73189436721465]
We first present a multi-agent RL (MARL) method for multi-order execution considering practical constraints.
We propose a learnable multi-round communication protocol through which the agents communicate their intended actions to one another.
Experiments on data from two real-world markets demonstrate superior performance and significantly better collaboration effectiveness.
arXiv Detail & Related papers (2023-07-06T16:45:40Z) - MERMAIDE: Learning to Align Learners using Model-Based Meta-Learning [62.065503126104126]
We study how a principal can efficiently and effectively intervene on the rewards of a previously unseen learning agent in order to induce desirable outcomes.
This is relevant to many real-world settings like auctions or taxation, where the principal may not know the learning behavior nor the rewards of real people.
We introduce MERMAIDE, a model-based meta-learning framework to train a principal that can quickly adapt to out-of-distribution agents.
arXiv Detail & Related papers (2023-04-10T15:44:50Z) - Efficient Model-based Multi-agent Reinforcement Learning via Optimistic
Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z) - Towards Realistic Market Simulations: a Generative Adversarial Networks
Approach [2.381990157809543]
We propose a synthetic market generator based on Conditional Generative Adversarial Networks (CGANs) trained on real aggregate-level historical data.
A CGAN-based "world" agent can generate meaningful orders in response to an experimental agent.
arXiv Detail & Related papers (2021-10-25T22:01:07Z) - A Hybrid Learning Approach to Detecting Regime Switches in Financial
Markets [0.0]
We present a novel framework for the detection of regime switches within the US financial markets.
Using a combination of cluster analysis and classification, we identify regimes in financial markets based on publicly available economic data.
We display the efficacy of the framework by constructing and assessing the performance of two trading strategies based on detected regimes.
arXiv Detail & Related papers (2021-08-05T01:15:19Z) - Universal Trading for Order Execution with Oracle Policy Distillation [99.57416828489568]
We propose a novel universal trading policy optimization framework to bridge the gap between the noisy yet imperfect market states and the optimal action sequences for order execution.
We show that our framework can better guide the learning of the common policy towards practically optimal execution by an oracle teacher with perfect information.
arXiv Detail & Related papers (2021-01-28T05:52:18Z) - Decentralized Reinforcement Learning: Global Decision-Making via Local
Economic Transactions [80.49176924360499]
We establish a framework for directing a society of simple, specialized, self-interested agents to solve sequential decision problems.
We derive a class of decentralized reinforcement learning algorithms.
We demonstrate the potential advantages of a society's inherent modular structure for more efficient transfer learning.
arXiv Detail & Related papers (2020-07-05T16:41:09Z) - Dynamic Federated Learning [57.14673504239551]
Federated learning has emerged as an umbrella term for centralized coordination strategies in multi-agent environments.
We consider a federated learning model where at every iteration, a random subset of available agents perform local updates based on their data.
Under a non-stationary random walk model on the true minimizer for the aggregate optimization problem, we establish that the performance of the architecture is determined by three factors, namely, the data variability at each agent, the model variability across all agents, and a tracking term that is inversely proportional to the learning rate of the algorithm.
arXiv Detail & Related papers (2020-02-20T15:00:54Z)
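The federated setup described in the last entry — at each iteration a random subset of agents performs local updates on its own data, and the server aggregates them — can be illustrated with a minimal sketch. Everything here is an assumption for illustration only: a scalar least-squares objective stands in for each agent's local model, and plain averaging stands in for the aggregation rule.

```python
# Minimal sketch of federated averaging with random partial participation
# (an illustration of the setup described above, not the paper's algorithm).
import random

def local_update(w, data, lr=0.1):
    """One gradient step of least-squares on scalar data: minimize (w - x)^2."""
    grad = sum(2 * (w - x) for x in data) / len(data)
    return w - lr * grad

def federated_round(w, agent_data, participation=0.5, rng=random):
    """A random subset of agents updates locally; the server averages."""
    k = max(1, int(participation * len(agent_data)))
    chosen = rng.sample(range(len(agent_data)), k)
    updates = [local_update(w, agent_data[i]) for i in chosen]
    return sum(updates) / len(updates)

rng = random.Random(0)
agent_data = [[1.0, 1.2], [0.8, 1.1], [0.9, 1.0], [1.3, 0.7]]
w = 0.0
for _ in range(50):
    w = federated_round(w, agent_data, rng=rng)
```

Because only a random subset participates each round, the iterate tracks a moving average of the participating agents' local optima — which is exactly why the paper's analysis separates per-agent data variability, cross-agent model variability, and a learning-rate-dependent tracking term.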
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.