Quantitative Stock Investment by Routing Uncertainty-Aware Trading
Experts: A Multi-Task Learning Approach
- URL: http://arxiv.org/abs/2207.07578v1
- Date: Tue, 7 Jun 2022 08:58:00 GMT
- Title: Quantitative Stock Investment by Routing Uncertainty-Aware Trading
Experts: A Multi-Task Learning Approach
- Authors: Shuo Sun, Rundong Wang, Bo An
- Abstract summary: We show that existing deep learning methods are sensitive to random seeds and network initialization.
We propose a novel two-stage mixture-of-experts (MoE) framework for quantitative investment to mimic the efficient bottom-up trading strategy design workflow of successful trading firms.
AlphaMix significantly outperforms many state-of-the-art baselines in terms of four financial criteria.
- Score: 29.706515133374193
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Quantitative investment is a fundamental financial task that highly relies on
accurate stock prediction and profitable investment decision making. Although
recent advances in deep learning (DL) have shown stellar performance in
capturing trading opportunities in the stochastic stock market, we observe that
the performance of existing DL methods is sensitive to random seeds and network
initialization. To design more profitable DL methods, we analyze this
phenomenon and find two major limitations of existing works. First, there is a
noticeable gap between accurate financial predictions and profitable investment
strategies. Second, investment decisions are made based on only one individual
predictor without consideration of model uncertainty, which is inconsistent
with the workflow in real-world trading firms. To tackle these two limitations,
we first reformulate quantitative investment as a multi-task learning problem.
We then propose AlphaMix, a novel two-stage mixture-of-experts (MoE)
framework for quantitative investment to mimic the efficient bottom-up trading
strategy design workflow of successful trading firms. In Stage one, multiple
independent trading experts are jointly optimized with an individual
uncertainty-aware loss function. In Stage two, we train neural routers
(corresponding to the role of a portfolio manager) to dynamically deploy these
experts on an as-needed basis. AlphaMix is also a universal framework that is
applicable to various backbone network architectures with consistent
performance gains. Through extensive experiments on long-term real-world data
spanning over five years on two of the most influential financial markets (US
and China), we demonstrate that AlphaMix significantly outperforms many
state-of-the-art baselines in terms of four financial criteria.
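To make the two-stage design concrete, the sketch below shows one plausible reading of the pipeline: several trading experts each predict a return together with a log-variance and are trained with a heteroscedastic Gaussian negative log-likelihood as the uncertainty-aware loss (trained independently here for simplicity), after which a softmax router, playing the portfolio-manager role, is trained to weight the frozen experts per sample. The class names, dimensions, losses, and training targets are illustrative assumptions, not the authors' implementation.

```python
# A minimal PyTorch sketch of a two-stage mixture-of-experts setup in the spirit
# of AlphaMix. Class names, dimensions, and the Gaussian NLL loss are
# illustrative assumptions, not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """One trading expert predicting a return estimate and its uncertainty."""

    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.mu_head = nn.Linear(hidden, 1)      # predicted next-period return
        self.logvar_head = nn.Linear(hidden, 1)  # predicted log-variance (uncertainty)

    def forward(self, x):
        h = self.body(x)
        return self.mu_head(h), self.logvar_head(h)


def uncertainty_aware_loss(mu, logvar, target):
    """Heteroscedastic Gaussian NLL: errors made under high predicted
    uncertainty are down-weighted, while confident errors are penalized."""
    return (0.5 * torch.exp(-logvar) * (target - mu) ** 2 + 0.5 * logvar).mean()


class Router(nn.Module):
    """Stage-two router (the 'portfolio manager') weighting experts per sample."""

    def __init__(self, n_features: int, n_experts: int):
        super().__init__()
        self.gate = nn.Linear(n_features, n_experts)

    def forward(self, x, expert_mus):
        weights = F.softmax(self.gate(x), dim=-1)              # (batch, n_experts)
        mixed = (weights * expert_mus).sum(dim=-1, keepdim=True)
        return mixed, weights


n_features, n_experts = 16, 4
x = torch.randn(32, n_features)  # toy feature batch
y = torch.randn(32, 1)           # toy next-period returns

# Stage 1: train each expert with its own uncertainty-aware loss.
experts = [Expert(n_features) for _ in range(n_experts)]
for expert in experts:
    opt = torch.optim.Adam(expert.parameters(), lr=1e-3)
    mu, logvar = expert(x)
    loss = uncertainty_aware_loss(mu, logvar, y)
    opt.zero_grad(); loss.backward(); opt.step()

# Stage 2: freeze the experts and train only the router to combine them.
router = Router(n_features, n_experts)
opt = torch.optim.Adam(router.parameters(), lr=1e-3)
with torch.no_grad():
    expert_mus = torch.cat([expert(x)[0] for expert in experts], dim=-1)
pred, _ = router(x, expert_mus)
loss = F.mse_loss(pred, y)
opt.zero_grad(); loss.backward(); opt.step()
```

In practice the router would more likely be trained on a portfolio-level objective (e.g., a risk-adjusted return) rather than the MSE used here, which is chosen only to keep the sketch self-contained.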
Related papers
- Automate Strategy Finding with LLM in Quant investment [4.46212317245124]
This paper proposes a framework for quantitative stock investment, spanning portfolio management and alpha mining, in which large language models (LLMs) mine alpha factors from multimodal financial data.
Experiments on the Chinese stock markets demonstrate that this framework significantly outperforms state-of-the-art baselines.
arXiv Detail & Related papers (2024-09-10T07:42:28Z) - When AI Meets Finance (StockAgent): Large Language Model-based Stock Trading in Simulated Real-world Environments [55.19252983108372]
We have developed a multi-agent AI system called StockAgent, driven by LLMs.
StockAgent allows users to evaluate the impact of different external factors on investor trading.
It avoids the test-set leakage issue present in existing trading simulation systems based on AI agents.
arXiv Detail & Related papers (2024-07-15T06:49:30Z) - Benchmarking Trustworthiness of Multimodal Large Language Models: A Comprehensive Study [51.19622266249408]
MultiTrust is the first comprehensive and unified benchmark on the trustworthiness of MLLMs.
Our benchmark employs a rigorous evaluation strategy that addresses both multimodal risks and cross-modal impacts.
Extensive experiments with 21 modern MLLMs reveal some previously unexplored trustworthiness issues and risks.
arXiv Detail & Related papers (2024-06-11T08:38:13Z) - Combining Deep Learning on Order Books with Reinforcement Learning for
Profitable Trading [0.0]
This work forecasts returns across multiple horizons using order-flow imbalance and trains three temporal-difference learning models for five financial instruments.
The results show promise, but further modifications are needed to fully handle retail trading costs, slippage, and spread fluctuation before trading becomes consistently profitable.
arXiv Detail & Related papers (2023-10-24T15:58:58Z) - Constructing Time-Series Momentum Portfolios with Deep Multi-Task
Learning [5.88864611435337]
We present a new approach using Multi-Task Learning (MTL) in a deep neural network architecture that jointly learns portfolio construction and various auxiliary tasks related to volatility.
We demonstrate that even after accounting for transaction costs of up to 3 basis points, our approach outperforms existing TSMOM strategies.
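(See the first sketch after this list for a minimal illustration of such a multi-task portfolio setup.)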
arXiv Detail & Related papers (2023-06-08T13:04:44Z) - E2EAI: End-to-End Deep Learning Framework for Active Investing [123.52358449455231]
We propose an end-to-end (E2E) framework that covers almost the entire process of factor investing through factor selection, factor combination, stock selection, and portfolio construction.
Experiments on real stock market data demonstrate the effectiveness of our end-to-end deep learning framework in active investing.
arXiv Detail & Related papers (2023-05-25T10:27:07Z) - Factor Investing with a Deep Multi-Factor Model [123.52358449455231]
We develop a novel deep multi-factor model that adopts industry neutralization and market neutralization modules with clear financial insights.
Tests on real-world stock market data demonstrate the effectiveness of our deep multi-factor model.
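(See the second sketch after this list for the conventional form of industry and market neutralization.)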
arXiv Detail & Related papers (2022-10-22T14:47:11Z) - MetaTrader: An Reinforcement Learning Approach Integrating Diverse
Policies for Portfolio Optimization [17.759687104376855]
We propose a novel two-stage approach for portfolio management.
The first stage incorporates imitation learning into the reinforcement learning framework.
The second stage learns a meta-policy that recognizes market conditions and decides which learned policy to follow.
arXiv Detail & Related papers (2022-09-01T07:58:06Z) - Bayesian Bilinear Neural Network for Predicting the Mid-price Dynamics
in Limit-Order Book Markets [84.90242084523565]
Traditional time-series econometric methods often appear incapable of capturing the true complexity of the multi-level interactions driving the price dynamics.
By adopting a state-of-the-art second-order optimization algorithm, we train a Bayesian bilinear neural network with temporal attention.
Using predictive distributions to analyze errors and uncertainties associated with the estimated parameters and model forecasts, we thoroughly compare our Bayesian model with traditional ML alternatives.
arXiv Detail & Related papers (2022-03-07T18:59:54Z) - DeepScalper: A Risk-Aware Reinforcement Learning Framework to Capture
Fleeting Intraday Trading Opportunities [33.28409845878758]
We propose DeepScalper, a deep reinforcement learning framework for intraday trading.
We show that DeepScalper significantly outperforms many state-of-the-art baselines in terms of four financial criteria.
arXiv Detail & Related papers (2021-12-15T15:24:02Z) - Reinforcement-Learning based Portfolio Management with Augmented Asset
Movement Prediction States [71.54651874063865]
Portfolio management (PM) aims to achieve investment goals such as maximal profits or minimal risks.
In this paper, we propose SARL, a novel State-Augmented RL framework for PM.
Our framework aims to address two unique challenges in financial PM: (1) data heterogeneity -- the collected information for each asset is usually diverse, noisy and imbalanced (e.g., news articles); and (2) environment uncertainty -- the financial market is versatile and non-stationary.
arXiv Detail & Related papers (2020-02-09T08:10:03Z)
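The first sketch below illustrates the multi-task idea from the time-series momentum entry above: a shared backbone feeds a portfolio-weight head and an auxiliary volatility head, and the network is trained on a weighted sum of a Sharpe-style portfolio loss and a volatility regression loss. The long-only softmax weights, the specific losses, and the auxiliary weight are assumptions for illustration, not the cited paper's exact design.

```python
# A minimal PyTorch sketch of multi-task learning for portfolio construction:
# a shared backbone feeds a portfolio-weight head and an auxiliary volatility
# head. Loss choices and weighting are illustrative assumptions only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MultiTaskPortfolioNet(nn.Module):
    def __init__(self, n_features: int, n_assets: int, hidden: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.weight_head = nn.Linear(hidden, n_assets)  # portfolio weights
        self.vol_head = nn.Linear(hidden, n_assets)     # auxiliary volatility forecast

    def forward(self, x):
        h = self.backbone(x)
        weights = torch.softmax(self.weight_head(h), dim=-1)  # long-only weights
        vol = F.softplus(self.vol_head(h))                    # positive volatility
        return weights, vol


n_features, n_assets = 20, 5
net = MultiTaskPortfolioNet(n_features, n_assets)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(64, n_features)                 # toy features per time step
future_ret = 0.01 * torch.randn(64, n_assets)   # toy next-period asset returns
realized_vol = 0.02 * torch.rand(64, n_assets)  # toy realized volatility targets

weights, vol = net(x)
port_ret = (weights * future_ret).sum(dim=-1)
# Main task: maximize a Sharpe-like ratio of portfolio returns (negated for minimization).
sharpe_loss = -port_ret.mean() / (port_ret.std() + 1e-8)
# Auxiliary task: regress predicted volatility onto realized volatility.
aux_loss = F.mse_loss(vol, realized_vol)
loss = sharpe_loss + 0.1 * aux_loss  # 0.1 is an arbitrary auxiliary weight
opt.zero_grad(); loss.backward(); opt.step()
```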
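The second sketch shows the conventional meaning of industry and market neutralization mentioned in the deep multi-factor entry: cross-sectionally regress a raw factor on industry dummies and market beta, then keep the residual. Whether the cited paper's neutralization modules follow exactly this recipe is not established here; the code only illustrates the standard technique.

```python
# A small NumPy sketch of conventional cross-sectional factor neutralization:
# regress a raw factor on industry dummies and market beta, keep the residual.
# This illustrates the standard technique, not the cited paper's exact modules.
import numpy as np


def neutralize(factor, industry_ids, market_beta):
    """Return factor values orthogonal to industry membership and market beta."""
    factor = np.asarray(factor, dtype=float)
    industries = np.unique(industry_ids)
    dummies = (np.asarray(industry_ids)[:, None] == industries[None, :]).astype(float)
    X = np.column_stack([dummies, np.asarray(market_beta, dtype=float)])
    coef, *_ = np.linalg.lstsq(X, factor, rcond=None)  # OLS fit
    return factor - X @ coef                           # residual = neutralized factor


# Toy cross-section of 6 stocks in 2 industries.
raw_factor = np.array([0.8, 0.5, 0.9, -0.2, -0.4, 0.1])
industry = np.array([0, 0, 0, 1, 1, 1])
beta = np.array([1.2, 0.9, 1.1, 0.8, 1.0, 0.7])
print(neutralize(raw_factor, industry, beta))
```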