A Meta-Method for Portfolio Management Using Machine Learning for
Adaptive Strategy Selection
- URL: http://arxiv.org/abs/2111.05935v1
- Date: Wed, 10 Nov 2021 20:46:43 GMT
- Title: A Meta-Method for Portfolio Management Using Machine Learning for
Adaptive Strategy Selection
- Authors: Damian Kisiel and Denise Gorse
- Abstract summary: The MPM uses XGBoost to learn how to switch between two risk-based portfolio allocation strategies.
The MPM is shown to possess an excellent out-of-sample risk-reward profile, as measured by the Sharpe ratio.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work proposes a novel portfolio management technique, the Meta Portfolio
Method (MPM), inspired by the successes of meta approaches in the field of
bioinformatics and elsewhere. The MPM uses XGBoost to learn how to switch
between two risk-based portfolio allocation strategies, the Hierarchical Risk
Parity (HRP) and the more classical Naïve Risk Parity (NRP). It is demonstrated
that the MPM is able to successfully take advantage of the best characteristics
of each strategy (the NRP's fast growth during market uptrends, and the HRP's
protection against drawdowns during market turmoil). As a result, the MPM is
shown to possess an excellent out-of-sample risk-reward profile, as measured by
the Sharpe ratio, and in addition offers a high degree of interpretability of
its asset allocation decisions.
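As a rough illustration of the switching mechanism described above, the sketch below trains a gradient-boosted classifier (XGBoost) to choose, at each rebalancing date, between a naïve inverse-volatility allocation (NRP) and a hierarchical risk parity allocation (HRP). The feature set, the labelling rule (whichever strategy delivered the higher return over the following holding period), and the simplified HRP stand-in are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the meta-selection idea; assumed details are marked below.
import numpy as np
import pandas as pd
from xgboost import XGBClassifier

def nrp_weights(returns: pd.DataFrame, lookback: int = 60) -> pd.Series:
    """Naive Risk Parity: weight each asset by its inverse volatility."""
    vol = returns.tail(lookback).std()
    inv = 1.0 / vol
    return inv / inv.sum()

def hrp_weights(returns: pd.DataFrame, lookback: int = 60) -> pd.Series:
    """Stand-in for Hierarchical Risk Parity. A full HRP clusters assets by
    correlation distance and allocates by recursive bisection; inverse-variance
    weights are used here purely to keep the sketch short and runnable."""
    var = returns.tail(lookback).var()
    inv = 1.0 / var
    return inv / inv.sum()

def make_features(returns: pd.DataFrame, lookback: int = 60) -> pd.Series:
    """Illustrative market-state features; the paper's exact feature set may differ."""
    window = returns.tail(lookback)
    n = len(window.columns)
    return pd.Series({
        "mean_return": window.mean().mean(),
        "mean_volatility": window.std().mean(),
        "avg_correlation": window.corr().values[np.triu_indices(n, 1)].mean(),
    })

def fit_meta_model(X: pd.DataFrame, y: pd.Series) -> XGBClassifier:
    """X: one feature row per rebalancing date; y: 1 if NRP beat HRP over the
    following holding period, else 0 (an assumed labelling rule)."""
    model = XGBClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
    model.fit(X, y)
    return model

def allocate(model: XGBClassifier, returns: pd.DataFrame) -> pd.Series:
    """Follow whichever base strategy the meta-model predicts will do better."""
    x = make_features(returns).to_frame().T
    use_nrp = bool(model.predict(x)[0])
    return nrp_weights(returns) if use_nrp else hrp_weights(returns)
```

In a walk-forward backtest one would refit the classifier only on data available up to each rebalancing date and then apply allocate out of sample, so that the switching decision never sees future returns.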
Related papers
- Deep Reinforcement Learning and Mean-Variance Strategies for Responsible Portfolio Optimization [49.396692286192206]
We study the use of deep reinforcement learning for responsible portfolio optimization by incorporating ESG states and objectives.
Our results show that deep reinforcement learning policies can provide competitive performance against mean-variance approaches for responsible portfolio allocation.
arXiv Detail & Related papers (2024-03-25T12:04:03Z) - Risk-Sensitive RL with Optimized Certainty Equivalents via Reduction to
Standard RL [48.1726560631463]
We study Risk-Sensitive Reinforcement Learning with the Optimized Certainty Equivalent (OCE) risk.
We propose two general meta-algorithms via reductions to standard RL.
We show that the resulting approach learns the optimal risk-sensitive policy while prior algorithms provably fail.
arXiv Detail & Related papers (2024-03-10T21:45:12Z) - Provable Risk-Sensitive Distributional Reinforcement Learning with
General Function Approximation [54.61816424792866]
We introduce a general framework on Risk-Sensitive Distributional Reinforcement Learning (RS-DisRL), with static Lipschitz Risk Measures (LRM) and general function approximation.
We design two innovative meta-algorithms: RS-DisRL-M, a model-based strategy for model-based function approximation, and RS-DisRL-V, a model-free approach for general value function approximation.
arXiv Detail & Related papers (2024-02-28T08:43:18Z) - Robust Risk-Aware Option Hedging [2.405471533561618]
We showcase the potential of robust risk-aware reinforcement learning (RL) in mitigating the risks associated with path-dependent financial derivatives.
We apply this methodology to the hedging of barrier options, and highlight how the optimal hedging strategy undergoes distortions as the agent moves from being risk-averse to risk-seeking.
arXiv Detail & Related papers (2023-03-27T13:57:13Z) - Optimizing Trading Strategies in Quantitative Markets using Multi-Agent
Reinforcement Learning [11.556829339947031]
This paper explores the fusion of two established financial trading strategies, namely the constant proportion portfolio insurance (CPPI) and the time-invariant portfolio protection (TIPP).
We introduce two novel multi-agent RL (MARL) methods, CPPI-MADDPG and TIPP-MADDPG, tailored for probing strategic trading within quantitative markets.
Our empirical findings reveal that the CPPI-MADDPG and TIPP-MADDPG strategies consistently outpace their traditional counterparts.
arXiv Detail & Related papers (2023-03-15T11:47:57Z) - MetaTrader: An Reinforcement Learning Approach Integrating Diverse
Policies for Portfolio Optimization [17.759687104376855]
We propose a novel two-stage approach for portfolio management.
The first stage incorporates imitation learning into the reinforcement learning framework.
The second stage learns a meta-policy to recognize market conditions and decide on the most appropriate learned policy to follow.
arXiv Detail & Related papers (2022-09-01T07:58:06Z) - Balancing Profit, Risk, and Sustainability for Portfolio Management [0.0]
We develop a novel utility function in which the Sharpe ratio represents risk and the environmental, social, and governance (ESG) score represents sustainability (a minimal sketch of one such combined objective appears after this list).
We show that our system outperforms MADDPG while improving on deep Q-learning approaches by allowing for continuous action spaces.
arXiv Detail & Related papers (2022-06-06T08:38:30Z) - Efficient Risk-Averse Reinforcement Learning [79.61412643761034]
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
arXiv Detail & Related papers (2022-05-10T19:40:52Z) - Permutation Invariant Policy Optimization for Mean-Field Multi-Agent
Reinforcement Learning: A Principled Approach [128.62787284435007]
We propose the mean-field proximal policy optimization (MF-PPO) algorithm, at the core of which is a permutation-invariant actor-critic neural architecture.
We prove that MF-PPO attains the globally optimal policy at a sublinear rate of convergence.
In particular, we show that the inductive bias introduced by the permutation-invariant neural architecture enables MF-PPO to outperform existing competitors.
arXiv Detail & Related papers (2021-05-18T04:35:41Z) - Learning Risk Preferences from Investment Portfolios Using Inverse
Optimization [25.19470942583387]
This paper presents a novel approach to measuring risk preferences from existing portfolios using inverse optimization.
We demonstrate our methods on real market data that consists of 20 years of asset pricing and 10 years of mutual fund portfolio holdings.
arXiv Detail & Related papers (2020-10-04T21:29:29Z) - Mixed Strategies for Robust Optimization of Unknown Objectives [93.8672371143881]
We consider robust optimization problems, where the goal is to optimize an unknown objective function against the worst-case realization of an uncertain parameter.
We design a novel sample-efficient algorithm GP-MRO, which sequentially learns about the unknown objective from noisy point evaluations.
GP-MRO seeks to discover a robust and randomized mixed strategy that maximizes the worst-case expected objective value.
arXiv Detail & Related papers (2020-02-28T09:28:17Z)
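For the ESG-aware portfolio paper above (the item combining the Sharpe ratio with an ESG score), the following minimal sketch shows one common way such objectives are combined, a weighted sum; the trade-off parameter lam and the exact functional form are assumptions for illustration, not that paper's specification.

```python
import numpy as np

def esg_aware_utility(weights: np.ndarray,
                      returns: np.ndarray,     # T x N matrix of asset returns
                      esg_scores: np.ndarray,  # per-asset ESG scores
                      lam: float = 0.5,        # assumed risk/sustainability trade-off
                      eps: float = 1e-8) -> float:
    """Illustrative objective: Sharpe ratio (risk-adjusted return) plus a
    holdings-weighted ESG score (sustainability), traded off by lam."""
    portfolio_returns = returns @ weights
    sharpe = portfolio_returns.mean() / (portfolio_returns.std() + eps)
    esg = float(weights @ esg_scores)
    return sharpe + lam * esg
```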
This list is automatically generated from the titles and abstracts of the papers on this site.