Constructing Time-Series Momentum Portfolios with Deep Multi-Task
Learning
- URL: http://arxiv.org/abs/2306.13661v1
- Date: Thu, 8 Jun 2023 13:04:44 GMT
- Title: Constructing Time-Series Momentum Portfolios with Deep Multi-Task
Learning
- Authors: Joel Ong, Dorien Herremans
- Abstract summary: We present a new approach using Multi-Task Learning (MTL) in a deep neural network architecture that jointly learns portfolio construction and various auxiliary tasks related to volatility.
We demonstrate that even after accounting for transaction costs of up to 3 basis points, our approach outperforms existing TSMOM strategies.
- Score: 5.88864611435337
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A diversified risk-adjusted time-series momentum (TSMOM) portfolio can
deliver substantial abnormal returns and offer some degree of tail risk
protection during extreme market events. The performance of existing TSMOM
strategies, however, relies not only on the quality of the momentum signal but
also on the efficacy of the volatility estimator. Yet many existing studies
have treated these two factors as independent. Inspired by
recent progress in Multi-Task Learning (MTL), we present a new approach using
MTL in a deep neural network architecture that jointly learns portfolio
construction and various auxiliary tasks related to volatility, such as
forecasting realized volatility as measured by different volatility estimators.
Through backtesting from January 2000 to December 2020 on a diversified
portfolio of continuous futures contracts, we demonstrate that even after
accounting for transaction costs of up to 3 basis points, our approach
outperforms existing TSMOM strategies. Moreover, experiments confirm that
adding auxiliary tasks indeed boosts the portfolio's performance. These
findings demonstrate that MTL can be a powerful tool in finance.
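The abstract combines two ingredients without giving implementation detail: a volatility-scaled TSMOM position rule and a joint multi-task objective that mixes a portfolio loss with auxiliary volatility-forecasting losses. A minimal numerical sketch of both follows, assuming the standard textbook formulations rather than the paper's actual architecture; all function names, window lengths, and the `lam` weighting are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def tsmom_positions(prices, sigma_target=0.15, lookback=252, vol_window=60):
    """Illustrative volatility-scaled time-series momentum position.

    Long (+1) if the trailing `lookback`-day return is positive, short (-1)
    otherwise, with the position scaled toward a common target volatility.
    """
    returns = np.diff(prices) / prices[:-1]
    signal = np.sign(prices[-1] / prices[-lookback - 1] - 1.0)  # trend sign
    # Simple realized-volatility estimate, annualized; the paper instead
    # learns several volatility estimators as auxiliary tasks.
    sigma_hat = np.std(returns[-vol_window:], ddof=1) * np.sqrt(252)
    return signal * sigma_target / sigma_hat

def mtl_loss(port_returns, vol_preds, vol_targets, lam=0.1):
    """Illustrative joint objective: negative Sharpe ratio for the portfolio
    head plus a weighted MSE over auxiliary volatility-forecasting heads."""
    sharpe = np.mean(port_returns) / (np.std(port_returns, ddof=1) + 1e-8)
    aux_mse = np.mean((np.asarray(vol_preds) - np.asarray(vol_targets)) ** 2)
    return -sharpe + lam * aux_mse
```

In the paper's setting the portfolio and volatility heads would share a learned encoder; the loss above merely illustrates how a negative-Sharpe term and an auxiliary volatility MSE can be combined into a single training objective.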
Related papers
- Do We Truly Need So Many Samples? Multi-LLM Repeated Sampling Efficiently Scales Test-Time Compute [55.330813919992465]
This paper presents a simple, effective, and cost-efficient strategy to improve LLM performance by scaling test-time compute.
Our strategy builds upon the repeated-sampling-then-voting framework, with a novel twist: incorporating multiple models, even weaker ones, to leverage their complementary strengths.
arXiv Detail & Related papers (2025-04-01T13:13:43Z) - MTS: A Deep Reinforcement Learning Portfolio Management Framework with Time-Awareness and Short-Selling [0.8642326601683299]
This paper introduces a Deep Reinforcement Learning Portfolio Management Framework with Time-Awareness and Short-Selling.
It addresses limitations in dynamic risk management, exploitation of temporal market patterns, and incorporation of complex trading strategies such as short-selling.
It consistently achieves higher cumulative returns, Sharpe, Omega, and Sortino ratios, underscoring its effectiveness in balancing risk and return.
arXiv Detail & Related papers (2025-03-06T06:41:17Z) - R-MTLLMF: Resilient Multi-Task Large Language Model Fusion at the Wireless Edge [78.26352952957909]
Multi-task large language models (MTLLMs) are important for many applications at the wireless edge, where users demand specialized models to handle multiple tasks efficiently.
The concept of model fusion via task vectors has emerged as an efficient approach for combining fine-tuning parameters to produce an MTLLM.
In this paper, the problem of enabling edge users to collaboratively craft such MTLLMs via task vectors is studied, under the assumption of worst-case adversarial attacks.
arXiv Detail & Related papers (2024-11-27T10:57:06Z) - BreakGPT: Leveraging Large Language Models for Predicting Asset Price Surges [55.2480439325792]
This paper introduces BreakGPT, a novel large language model (LLM) architecture adapted specifically for time series forecasting and the prediction of sharp upward movements in asset prices.
We showcase BreakGPT as a promising solution for financial forecasting with minimal training and as a strong competitor for capturing both local and global temporal dependencies.
arXiv Detail & Related papers (2024-11-09T05:40:32Z) - Optimizing Portfolio with Two-Sided Transactions and Lending: A Reinforcement Learning Framework [0.0]
This study presents a Reinforcement Learning-based portfolio management model tailored for high-risk environments.
We implement the model using the Soft Actor-Critic (SAC) agent with a Convolutional Neural Network with Multi-Head Attention.
Tested over two 16-month periods of varying market volatility, the model significantly outperformed benchmarks.
arXiv Detail & Related papers (2024-08-09T23:36:58Z) - Benchmarking Trustworthiness of Multimodal Large Language Models: A Comprehensive Study [51.19622266249408]
MultiTrust is the first comprehensive and unified benchmark on the trustworthiness of MLLMs.
Our benchmark employs a rigorous evaluation strategy that addresses both multimodal risks and cross-modal impacts.
Extensive experiments with 21 modern MLLMs reveal some previously unexplored trustworthiness issues and risks.
arXiv Detail & Related papers (2024-06-11T08:38:13Z) - M$^3$CoT: A Novel Benchmark for Multi-Domain Multi-step Multi-modal Chain-of-Thought [50.576016777061724]
Multi-modal Chain-of-Thought (MCoT) requires models to leverage knowledge from both textual and visual modalities for step-by-step reasoning.
The current MCoT benchmark still faces some challenges: (1) absence of visual modal reasoning, (2) only single-step visual modal reasoning, and (3) missing domain coverage.
We introduce a novel benchmark (M$3$CoT) to address the above challenges, advancing the multi-domain, multi-step, and multi-modal CoT.
arXiv Detail & Related papers (2024-05-26T07:56:30Z) - Developing An Attention-Based Ensemble Learning Framework for Financial Portfolio Optimisation [0.0]
We propose a multi-agent and self-adaptive portfolio optimisation framework integrated with attention mechanisms and time series, namely the MASAAT.
By reconstructing the tokens of financial data in a sequence, the attention-based cross-sectional analysis module and temporal analysis module of each agent can effectively capture the correlations between assets and the dependencies between time points.
The experimental results clearly demonstrate that the MASAAT framework achieves notable improvements when compared with many well-known portfolio optimisation approaches.
arXiv Detail & Related papers (2024-04-13T09:10:05Z) - Developing A Multi-Agent and Self-Adaptive Framework with Deep Reinforcement Learning for Dynamic Portfolio Risk Management [1.2016264781280588]
A multi-agent reinforcement learning (RL) approach is proposed to balance the trade-off between the overall portfolio returns and their potential risks.
The obtained empirical results clearly reveal the potential strengths of our proposed MASA framework.
arXiv Detail & Related papers (2024-02-01T11:31:26Z) - Factor Investing with a Deep Multi-Factor Model [123.52358449455231]
We develop a novel deep multi-factor model that adopts industry neutralization and market neutralization modules with clear financial insights.
Tests on real-world stock market data demonstrate the effectiveness of our deep multi-factor model.
arXiv Detail & Related papers (2022-10-22T14:47:11Z) - Quantitative Stock Investment by Routing Uncertainty-Aware Trading
Experts: A Multi-Task Learning Approach [29.706515133374193]
We show that existing deep learning methods are sensitive to random seeds and network routers.
We propose a novel two-stage mixture-of-experts (MoE) framework for quantitative investment to mimic the efficient bottom-up trading strategy design workflow of successful trading firms.
AlphaMix significantly outperforms many state-of-the-art baselines in terms of four financial criteria.
arXiv Detail & Related papers (2022-06-07T08:58:00Z) - Softmax with Regularization: Better Value Estimation in Multi-Agent
Reinforcement Learning [72.28520951105207]
Overestimation in $Q$-learning is an important problem that has been extensively studied in single-agent reinforcement learning.
We propose a novel regularization-based update scheme that penalizes large joint action-values deviating from a baseline.
We show that our method provides a consistent performance improvement on a set of challenging StarCraft II micromanagement tasks.
arXiv Detail & Related papers (2021-03-22T14:18:39Z) - A Modularized and Scalable Multi-Agent Reinforcement Learning-based
System for Financial Portfolio Management [7.6146285961466]
Financial Portfolio Management is one of the most applicable problems in Reinforcement Learning (RL).
MSPM is a novel Multi-agent Reinforcement learning-based system with a modularized and scalable architecture for portfolio management.
Experiments on 8-year U.S. stock markets data prove the effectiveness of MSPM in profits accumulation by its outperformance over existing benchmarks.
arXiv Detail & Related papers (2021-02-06T04:04:57Z) - Deep Reinforcement Learning for Long-Short Portfolio Optimization [7.131902599861306]
This paper constructs a Deep Reinforcement Learning (DRL) portfolio management framework with short-selling mechanisms conforming to actual trading rules.
Key innovations include development of a comprehensive short-selling mechanism in continuous trading that accounts for dynamic evolution of transactions across time periods.
Compared to traditional approaches, this model delivers superior risk-adjusted returns while reducing maximum drawdown.
arXiv Detail & Related papers (2020-12-26T16:25:20Z) - Deep Stock Predictions [58.720142291102135]
We consider the design of a trading strategy that performs portfolio optimization using Long Short Term Memory (LSTM) neural networks.
We then customize the loss function used to train the LSTM to increase the profit earned.
We find the LSTM model with the customized loss function to have an improved performance in the trading bot over a regression baseline such as ARIMA.
arXiv Detail & Related papers (2020-06-08T23:37:47Z)
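Several entries above (e.g. the MTS framework) evaluate strategies by their Sharpe, Omega, and Sortino ratios. For reference, minimal textbook implementations of these metrics might look as follows; the annualization factor of 252 and the zero risk-free rate and threshold are illustrative defaults, not values taken from any of the papers.

```python
import numpy as np

def sharpe(returns, rf=0.0, periods=252):
    """Annualized Sharpe ratio: mean excess return over its std deviation."""
    ex = np.asarray(returns) - rf
    return np.sqrt(periods) * ex.mean() / (ex.std(ddof=1) + 1e-12)

def sortino(returns, rf=0.0, periods=252):
    """Annualized Sortino ratio: penalizes only downside volatility."""
    ex = np.asarray(returns) - rf
    downside = np.sqrt(np.mean(np.minimum(ex, 0.0) ** 2)) + 1e-12
    return np.sqrt(periods) * ex.mean() / downside

def omega(returns, threshold=0.0):
    """Omega ratio: probability-weighted gains over losses past a threshold."""
    ex = np.asarray(returns) - threshold
    gains = ex[ex > 0].sum()
    losses = -ex[ex < 0].sum()
    return gains / (losses + 1e-12)
```

An Omega ratio above 1 indicates that gains beyond the threshold outweigh losses below it, which is why the papers above report it alongside the Sharpe and Sortino ratios as a tail-sensitive complement.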
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.