Learning the Market: Sentiment-Based Ensemble Trading Agents
- URL: http://arxiv.org/abs/2402.01441v2
- Date: Wed, 20 Nov 2024 06:59:55 GMT
- Title: Learning the Market: Sentiment-Based Ensemble Trading Agents
- Authors: Andrew Ye, James Xu, Vidyut Veedgav, Yi Wang, Yifan Yu, Daniel Yan, Ryan Chen, Vipin Chaudhary, Shuai Xu
- Abstract summary: We propose and study the integration of sentiment analysis and deep reinforcement learning ensemble algorithms for stock trading.
We show that our approach results in a strategy that is profitable, robust, and risk-minimal.
- Score: 5.005352154557397
- License:
- Abstract: We propose and study the integration of sentiment analysis and deep reinforcement learning ensemble algorithms for stock trading by evaluating strategies capable of dynamically altering their active agent given the concurrent market environment. In particular, we design a simple-yet-effective method for extracting financial sentiment and combine this with improvements on existing trading agents, resulting in a strategy that effectively considers both qualitative market factors and quantitative stock data. We show that our approach results in a strategy that is profitable, robust, and risk-minimal - outperforming the traditional ensemble strategy as well as single-agent algorithms and market metrics. Our findings suggest that the conventional practice of switching and reevaluating ensemble agents every fixed number of months is sub-optimal, and that a dynamic sentiment-based framework unlocks substantial additional performance. Furthermore, as we have designed our algorithm with simplicity and efficiency in mind, we hypothesize that the transition of our method from historical evaluation to real-time trading with live data will be relatively straightforward.
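The abstract's core idea - choosing the active trading agent from the current sentiment signal rather than on a fixed monthly schedule - can be sketched as follows. This is an illustrative toy, not the paper's implementation; the agent names, thresholds, and window size are all hypothetical.

```python
# Hedged sketch of a sentiment-gated ensemble: the active agent is chosen
# per step from a rolling average of sentiment scores. All thresholds and
# agent roles below are illustrative assumptions, not values from the paper.
from dataclasses import dataclass
from statistics import mean
from typing import Callable, Dict, List


@dataclass
class SentimentGatedEnsemble:
    """Selects one of several pre-trained agents per step based on sentiment."""
    agents: Dict[str, Callable[[float], str]]  # name -> policy(price) -> action
    window: int = 5                            # rolling sentiment window length

    def select_agent(self, sentiment_history: List[float]) -> str:
        # Average sentiment over the most recent window; cutoffs are assumed.
        s = mean(sentiment_history[-self.window:])
        if s > 0.2:
            return "aggressive"   # e.g. a risk-seeking agent for bullish news
        if s < -0.2:
            return "defensive"    # e.g. a risk-averse agent for bearish news
        return "neutral"

    def act(self, sentiment_history: List[float], price: float) -> str:
        return self.agents[self.select_agent(sentiment_history)](price)


# Toy stand-ins for trained DRL policies (hypothetical):
ensemble = SentimentGatedEnsemble(agents={
    "aggressive": lambda p: "buy",
    "defensive": lambda p: "sell",
    "neutral": lambda p: "hold",
})
print(ensemble.act([0.4, 0.5, 0.3, 0.6, 0.2], price=101.0))  # buy
```

The contrast with the "traditional" ensemble the abstract mentions is that the switching condition is a live signal, not a calendar: `select_agent` can change the active policy on any step where sentiment crosses a threshold.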
Related papers
- From Novice to Expert: LLM Agent Policy Optimization via Step-wise Reinforcement Learning [62.54484062185869]
We introduce StepAgent, which utilizes step-wise reward to optimize the agent's reinforcement learning process.
We propose implicit-reward and inverse reinforcement learning techniques to facilitate agent reflection and policy adjustment.
arXiv Detail & Related papers (2024-11-06T10:35:11Z)
- Cross-border Commodity Pricing Strategy Optimization via Mixed Neural Network for Time Series Analysis [46.26988706979189]
Cross-border commodity pricing determines the competitiveness and market share of businesses.
Time series data is of great significance in commodity pricing and can reveal market dynamics and trends.
We propose a new method based on the hybrid neural network model CNN-BiGRU-SSA.
arXiv Detail & Related papers (2024-08-22T03:59:52Z)
- Statistical arbitrage in multi-pair trading strategy based on graph clustering algorithms in US equities market [0.0]
The study seeks to develop an effective strategy within a novel statistical-arbitrage framework built on graph clustering algorithms.
The study seeks to provide an integrated approach to optimal signal detection and risk management.
arXiv Detail & Related papers (2024-06-15T17:25:32Z)
- IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making [33.23156884634365]
Reinforcement learning has achieved remarkable success in quantitative trading.
Most existing RL-based market making methods focus on optimizing single-price level strategies.
We propose Imitative Market Maker (IMM), a novel RL framework leveraging both knowledge from suboptimal signal-based experts and direct policy interactions.
arXiv Detail & Related papers (2023-08-17T11:04:09Z)
- An Ensemble Method of Deep Reinforcement Learning for Automated Cryptocurrency Trading [16.78239969166596]
We propose an ensemble method to improve the generalization performance of trading strategies trained by deep reinforcement learning algorithms.
Our proposed ensemble method improves the out-of-sample performance compared with the benchmarks of a deep reinforcement learning strategy and a passive investment strategy.
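One common way such an ensemble can improve out-of-sample performance is by combining the actions of independently trained policies, so that no single overfit policy dominates. The sketch below shows a majority-vote combiner; it is an illustrative assumption about how an ensemble might aggregate actions, not the cited paper's method, and the toy policies are hypothetical.

```python
# Hedged sketch: majority vote over the actions of several trading
# policies, with a conservative "hold" fallback on ties. The policies
# below are toy stand-ins, not trained DRL agents.
from collections import Counter
from typing import Callable, List, Sequence

Action = str  # "buy" | "hold" | "sell"


def ensemble_action(policies: Sequence[Callable[[List[float]], Action]],
                    observation: List[float]) -> Action:
    """Return the majority action; break ties by holding."""
    votes = Counter(p(observation) for p in policies)
    top = votes.most_common()
    if len(top) > 1 and top[0][1] == top[1][1]:
        return "hold"  # no clear majority -> stay flat
    return top[0][0]


# Hypothetical policies keyed on a short price window:
momentum = lambda obs: "buy" if obs[-1] > obs[0] else "sell"
contrarian = lambda obs: "sell" if obs[-1] > obs[0] else "buy"
trend = lambda obs: "buy" if obs[-1] > sum(obs) / len(obs) else "hold"

print(ensemble_action([momentum, contrarian, trend], [100.0, 101.0, 103.0]))  # buy
```

With the rising window above, two of the three toy policies vote "buy", so the ensemble buys; when the three disagree completely, the tie-break keeps the position flat.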
arXiv Detail & Related papers (2023-07-27T04:00:09Z)
- MERMAIDE: Learning to Align Learners using Model-Based Meta-Learning [62.065503126104126]
We study how a principal can efficiently and effectively intervene on the rewards of a previously unseen learning agent in order to induce desirable outcomes.
This is relevant to many real-world settings like auctions or taxation, where the principal may not know the learning behavior nor the rewards of real people.
We introduce MERMAIDE, a model-based meta-learning framework to train a principal that can quickly adapt to out-of-distribution agents.
arXiv Detail & Related papers (2023-04-10T15:44:50Z)
- Efficient Model-based Multi-agent Reinforcement Learning via Optimistic Equilibrium Computation [93.52573037053449]
H-MARL (Hallucinated Multi-Agent Reinforcement Learning) learns successful equilibrium policies after a few interactions with the environment.
We demonstrate our approach experimentally on an autonomous driving simulation benchmark.
arXiv Detail & Related papers (2022-03-14T17:24:03Z)
- Towards Realistic Market Simulations: a Generative Adversarial Networks Approach [2.381990157809543]
We propose a synthetic market generator based on Conditional Generative Adversarial Networks (CGANs) trained on real aggregate-level historical data.
A CGAN-based "world" agent can generate meaningful orders in response to an experimental agent.
arXiv Detail & Related papers (2021-10-25T22:01:07Z)
- A Hybrid Learning Approach to Detecting Regime Switches in Financial Markets [0.0]
We present a novel framework for the detection of regime switches within the US financial markets.
Using a combination of cluster analysis and classification, we identify regimes in financial markets based on publicly available economic data.
We display the efficacy of the framework by constructing and assessing the performance of two trading strategies based on detected regimes.
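The cluster-then-classify pattern this entry describes can be illustrated in miniature: cluster a one-dimensional volatility-like series into two regimes, then classify new observations by nearest centroid. This is a toy sketch under assumed data and labels, not the cited framework, which uses richer economic data.

```python
# Hedged sketch of regime detection: a tiny k-means (k=2) on a 1-D series
# assigns "calm" vs "turbulent" regime labels by nearest centroid.
# The volatility series and regime names are hypothetical.
from statistics import mean
from typing import List, Tuple


def two_means_1d(xs: List[float], iters: int = 20) -> Tuple[float, float]:
    """Minimal 1-D k-means with k=2; returns (low, high) centroids."""
    lo, hi = min(xs), max(xs)  # initialize centroids at the extremes
    for _ in range(iters):
        low = [x for x in xs if abs(x - lo) <= abs(x - hi)]
        high = [x for x in xs if abs(x - lo) > abs(x - hi)]
        lo, hi = mean(low), mean(high)
    return lo, hi


def classify_regime(x: float, centroids: Tuple[float, float]) -> str:
    """Nearest-centroid classification of a new observation."""
    lo, hi = centroids
    return "calm" if abs(x - lo) <= abs(x - hi) else "turbulent"


# Toy daily volatility series: a calm stretch followed by a turbulent one.
vol = [0.8, 0.9, 1.1, 1.0, 0.7, 3.9, 4.2, 4.0, 3.8, 4.1]
c = two_means_1d(vol)
print(classify_regime(1.0, c), classify_regime(4.0, c))  # calm turbulent
```

A trading strategy built on top of this would switch behavior (or agents) whenever the classified regime changes, which is the step the cited paper evaluates with its two regime-based strategies.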
arXiv Detail & Related papers (2021-08-05T01:15:19Z)
- Universal Trading for Order Execution with Oracle Policy Distillation [99.57416828489568]
We propose a novel universal trading policy optimization framework to bridge the gap between noisy, imperfect market states and the optimal action sequences for order execution.
We show that our framework can better guide the learning of the common policy towards practically optimal execution by an oracle teacher with perfect information.
arXiv Detail & Related papers (2021-01-28T05:52:18Z)
- Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions [80.49176924360499]
We establish a framework for directing a society of simple, specialized, self-interested agents to solve sequential decision problems.
We derive a class of decentralized reinforcement learning algorithms.
We demonstrate the potential advantages of a society's inherent modular structure for more efficient transfer learning.
arXiv Detail & Related papers (2020-07-05T16:41:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.