MTS: A Deep Reinforcement Learning Portfolio Management Framework with Time-Awareness and Short-Selling
- URL: http://arxiv.org/abs/2503.04143v1
- Date: Thu, 06 Mar 2025 06:41:17 GMT
- Title: MTS: A Deep Reinforcement Learning Portfolio Management Framework with Time-Awareness and Short-Selling
- Authors: Fengchen Gu, Zhengyong Jiang, Ángel F. García-Fernández, Angelos Stefanidis, Jionglong Su, Huakang Li,
- Abstract summary: This paper introduces a Deep Reinforcement Learning Portfolio Management Framework with Time-Awareness and Short-Selling (MTS). It addresses limitations in dynamic risk management, exploitation of temporal market patterns, and incorporation of complex trading strategies such as short-selling. MTS consistently achieves higher cumulative returns and Sharpe, Omega, and Sortino ratios, underscoring its effectiveness in balancing risk and return.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Portfolio management remains a crucial challenge in finance, with traditional methods often falling short in complex and volatile market environments. While deep reinforcement learning approaches have shown promise, they still face limitations in dynamic risk management, exploitation of temporal market patterns, and incorporation of complex trading strategies such as short-selling. These limitations can lead to suboptimal portfolio performance, increased vulnerability to market volatility, and missed opportunities to capture returns across diverse market conditions. This paper introduces MTS, a Deep Reinforcement Learning Portfolio Management Framework with Time-Awareness and Short-Selling, offering a robust and adaptive strategy for sustainable investment performance. The framework addresses these limitations through a novel encoder-attention mechanism that incorporates temporal market characteristics, a parallel strategy for automated short-selling based on market trends, and risk management via an innovative Incremental Conditional Value at Risk measure, enhancing adaptability and performance. Experimental validation on five diverse datasets from 2019 to 2023 demonstrates MTS's superiority over traditional algorithms and advanced machine learning techniques. MTS consistently achieves higher cumulative returns and higher Sharpe, Omega, and Sortino ratios, underscoring its effectiveness in balancing risk and return while adapting to market dynamics. On average, MTS improves cumulative returns by 30.67% and the Sharpe ratio by 29.33% relative to the next best-performing strategy across the datasets.
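The abstract's risk measure can be grounded with a minimal sketch. The paper's Incremental Conditional Value at Risk is not specified in this listing, so the snippet below shows only plain historical CVaR (the base quantity an incremental variant would update online); the `alpha` level and return sample are illustrative.

```python
import numpy as np

def cvar(returns, alpha=0.05):
    """Conditional Value at Risk: mean loss in the worst-alpha tail.

    `returns` are per-period portfolio returns; losses are -returns.
    This is plain historical CVaR, not the paper's incremental variant.
    """
    losses = -np.asarray(returns, dtype=float)
    var = np.quantile(losses, 1 - alpha)  # Value at Risk threshold
    tail = losses[losses >= var]          # losses at or beyond VaR
    return tail.mean()

# Illustrative return sample: the worst 10% tail is the -10% period.
r = [0.01, 0.02, -0.03, 0.005, -0.10, 0.015, -0.01, 0.02, 0.0, -0.05]
risk = cvar(r, alpha=0.10)
```

Minimizing this quantity, rather than plain variance, penalizes only downside tail outcomes, which is the usual motivation for CVaR-based risk management.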
Related papers
- Do We Truly Need So Many Samples? Multi-LLM Repeated Sampling Efficiently Scales Test-Time Compute [55.330813919992465]
This paper presents a simple, effective, and cost-efficient strategy to improve LLM performance by scaling test-time compute.
Our strategy builds upon the repeated-sampling-then-voting framework, with a novel twist: incorporating multiple models, even weaker ones, to leverage their complementary strengths.
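The repeated-sampling-then-voting idea can be sketched generically. The sampler interface and the toy "noisy model" below are hypothetical stand-ins, not the paper's actual models or prompts.

```python
from collections import Counter
import random

def repeated_sample_vote(samplers, question, k=25, seed=0):
    """Sample each model k times and return the plurality answer."""
    rng = random.Random(seed)
    votes = Counter()
    for sample in samplers:
        for _ in range(k):
            votes[sample(question, rng)] += 1
    return votes.most_common(1)[0][0]

def noisy_model(truth, accuracy):
    # Stand-in "model": right with probability `accuracy`,
    # otherwise guesses uniformly among three options.
    return lambda question, rng: (
        truth if rng.random() < accuracy else rng.choice(["A", "B", "C"])
    )

# Two imperfect models still vote their way to the right answer.
answer = repeated_sample_vote([noisy_model("B", 0.9), noisy_model("B", 0.7)], "q")
```

Mixing models of different strengths works because their errors are spread across wrong options while their correct votes concentrate on one.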
arXiv Detail & Related papers (2025-04-01T13:13:43Z) - MetaTrading: An Immersion-Aware Model Trading Framework for Vehicular Metaverse Services [94.61039892220037]
We propose an immersion-aware model trading framework that facilitates data provision for services while ensuring privacy through federated learning (FL)
We design an incentive mechanism to incentivize metaverse users (MUs) to contribute high-value models under resource constraints.
We develop a fully distributed dynamic reward algorithm based on deep reinforcement learning, without accessing any private information about MUs and other MSPs.
arXiv Detail & Related papers (2024-10-25T16:20:46Z) - Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training [38.03693752287459]
Multi-agent Reinforcement Learning (MARL) relies on neural networks with numerous parameters in multi-agent scenarios.
This paper proposes the utilization of dynamic sparse training (DST), a technique proven effective in deep supervised learning tasks.
We introduce an innovative Multi-Agent Sparse Training (MAST) framework aimed at simultaneously enhancing the reliability of learning targets and the rationality of sample distribution.
arXiv Detail & Related papers (2024-09-28T15:57:24Z) - Optimizing Portfolio with Two-Sided Transactions and Lending: A Reinforcement Learning Framework [0.0]
This study presents a Reinforcement Learning-based portfolio management model tailored for high-risk environments.
We implement the model using the Soft Actor-Critic (SAC) agent with a Convolutional Neural Network with Multi-Head Attention.
Tested over two 16-month periods of varying market volatility, the model significantly outperformed benchmarks.
arXiv Detail & Related papers (2024-08-09T23:36:58Z) - Explainable Post hoc Portfolio Management Financial Policy of a Deep Reinforcement Learning agent [44.99833362998488]
We develop a novel Explainable Deep Reinforcement Learning (XDRL) approach for portfolio management.
By executing our methodology, we can interpret in prediction time the actions of the agent to assess whether they follow the requisites of an investment policy.
arXiv Detail & Related papers (2024-07-19T17:40:39Z) - Developing A Multi-Agent and Self-Adaptive Framework with Deep Reinforcement Learning for Dynamic Portfolio Risk Management [1.2016264781280588]
A multi-agent reinforcement learning (RL) approach is proposed to balance the trade-off between the overall portfolio returns and their potential risks.
The obtained empirical results clearly reveal the potential strengths of our proposed MASA framework.
arXiv Detail & Related papers (2024-02-01T11:31:26Z) - IMM: An Imitative Reinforcement Learning Approach with Predictive Representation Learning for Automatic Market Making [33.23156884634365]
Reinforcement Learning technology has achieved remarkable success in quantitative trading.
Most existing RL-based market making methods focus on optimizing single-price level strategies.
We propose Imitative Market Maker (IMM), a novel RL framework leveraging both knowledge from suboptimal signal-based experts and direct policy interactions.
arXiv Detail & Related papers (2023-08-17T11:04:09Z) - Constructing Time-Series Momentum Portfolios with Deep Multi-Task Learning [5.88864611435337]
We present a new approach using Multi-Task Learning (MTL) in a deep neural network architecture that jointly learns portfolio construction and various auxiliary tasks related to volatility.
We demonstrate that even after accounting for transaction costs of up to 3 basis points, our approach outperforms existing TSMOM strategies.
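The joint objective described above can be pictured as a weighted sum of a trading term and an auxiliary volatility-forecasting term. The Sharpe-style trading term and the 0.1 auxiliary weight are illustrative assumptions, not the paper's exact loss.

```python
import numpy as np

def multitask_loss(port_returns, vol_pred, vol_true, aux_weight=0.1):
    """Minimize a negative Sharpe-like ratio plus an auxiliary
    volatility-prediction MSE, so both tasks share one objective."""
    r = np.asarray(port_returns, dtype=float)
    sharpe = r.mean() / (r.std() + 1e-8)
    aux = float(np.mean((np.asarray(vol_pred, float)
                         - np.asarray(vol_true, float)) ** 2))
    return -sharpe + aux_weight * aux

rets = [0.01, -0.005, 0.02, 0.0]
good = multitask_loss(rets, vol_pred=[0.10], vol_true=[0.10])  # perfect aux
bad = multitask_loss(rets, vol_pred=[0.50], vol_true=[0.10])   # poor aux
```

The auxiliary term acts as a regularizer: gradients from volatility prediction shape the shared representation even when the trading signal is weak.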
arXiv Detail & Related papers (2023-06-08T13:04:44Z) - Can Perturbations Help Reduce Investment Risks? Risk-Aware Stock Recommendation via Split Variational Adversarial Training [44.7991257631318]
We propose a novel Split Variational Adversarial Training (SVAT) method for risk-aware stock recommendation.
By lowering the volatility of the stock recommendation model, SVAT effectively reduces investment risks and outperforms state-of-the-art baselines by more than 30% in terms of risk-adjusted profits.
arXiv Detail & Related papers (2023-04-20T12:10:12Z) - DeepScalper: A Risk-Aware Reinforcement Learning Framework to Capture
Fleeting Intraday Trading Opportunities [33.28409845878758]
We propose DeepScalper, a deep reinforcement learning framework for intraday trading.
We show that DeepScalper significantly outperforms many state-of-the-art baselines in terms of four financial criteria.
arXiv Detail & Related papers (2021-12-15T15:24:02Z) - Deep Q-Learning Market Makers in a Multi-Agent Simulated Stock Market [58.720142291102135]
This paper focuses specifically on the study of these market makers' strategies from an agent-based perspective.
We propose the application of Reinforcement Learning (RL) for the creation of intelligent market makers in simulated stock markets.
arXiv Detail & Related papers (2021-12-08T14:55:21Z) - Deep Reinforcement Learning for Long-Short Portfolio Optimization [7.131902599861306]
This paper constructs a Deep Reinforcement Learning (DRL) portfolio management framework with short-selling mechanisms conforming to actual trading rules.
Key innovations include development of a comprehensive short-selling mechanism in continuous trading that accounts for dynamic evolution of transactions across time periods.
Compared to traditional approaches, this model delivers superior risk-adjusted returns while reducing maximum drawdown.
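One way to picture such a short-selling mechanism is a weight vector with negative entries for short positions, scaled so gross exposure stays bounded. This convention is only a sketch; the paper's actual margin, borrowing-cost, and continuous-trading rules are not reproduced in this listing.

```python
import numpy as np

def normalize_long_short(raw, gross_leverage=1.0):
    """Map raw asset scores to long-short portfolio weights.

    Negative entries are short positions; dividing by the sum of
    absolute weights caps gross exposure at `gross_leverage`.
    """
    raw = np.asarray(raw, dtype=float)
    gross = np.abs(raw).sum()
    if gross == 0:
        return np.zeros_like(raw)  # no signal -> stay in cash
    return gross_leverage * raw / gross

# Long the first and third assets, short the second.
w = normalize_long_short([2.0, -1.0, 1.0])
```

Unlike long-only portfolios, the weights here sum to the *net* exposure (0.5 in this example) while their absolute values sum to the gross leverage limit.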
arXiv Detail & Related papers (2020-12-26T16:25:20Z) - Time your hedge with Deep Reinforcement Learning [0.0]
Deep Reinforcement Learning (DRL) can tackle this challenge by creating a dynamic dependency between market information and hedging strategies allocation decisions.
We present a realistic and augmented DRL framework that: (i) uses additional contextual information to decide an action; (ii) imposes a one-period lag between observations and actions to account for the one-day turnover lag with which common asset managers rebalance their hedge; (iii) is fully tested for stability and robustness using a repetitive train-test method called anchored walk-forward training, similar in spirit to k-fold cross-validation for time series; and (iv) allows managing the leverage of our hedging
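The anchored walk-forward procedure mentioned in the blurb above can be sketched as an expanding-window splitter: the training window stays anchored at the start of the series and grows each fold, so every test block is strictly out-of-sample. The window sizes here are illustrative.

```python
def anchored_walk_forward(n, initial_train, test_size):
    """Yield (train_idx, test_idx) folds over n time steps.

    The train window is anchored at index 0 and expands each fold;
    the test window slides forward, never overlapping training data.
    """
    start = initial_train
    while start + test_size <= n:
        yield list(range(0, start)), list(range(start, start + test_size))
        start += test_size

# Three folds over 10 periods: train on [0..3], [0..5], [0..7].
folds = list(anchored_walk_forward(n=10, initial_train=4, test_size=2))
```

Because later folds reuse all earlier data, this is the time-series analogue of k-fold cross-validation without leaking future observations into training.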
arXiv Detail & Related papers (2020-09-16T06:43:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.