Can Perturbations Help Reduce Investment Risks? Risk-Aware Stock
Recommendation via Split Variational Adversarial Training
- URL: http://arxiv.org/abs/2304.11043v2
- Date: Fri, 26 Jan 2024 15:32:10 GMT
- Title: Can Perturbations Help Reduce Investment Risks? Risk-Aware Stock
Recommendation via Split Variational Adversarial Training
- Authors: Jiezhu Cheng, Kaizhu Huang, Zibin Zheng
- Abstract summary: We propose a novel Split Variational Adversarial Training (SVAT) method for risk-aware stock recommendation.
By lowering the volatility of the stock recommendation model, SVAT effectively reduces investment risks and outperforms state-of-the-art baselines by more than 30% in terms of risk-adjusted profits.
- Score: 44.7991257631318
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the stock market, a successful investment requires a good balance between
profits and risks. Based on the learning to rank paradigm, stock recommendation
has been widely studied in quantitative finance to recommend stocks with higher
return ratios for investors. Despite the efforts to make profits, many existing
recommendation approaches still have some limitations in risk control, which
may lead to intolerable paper losses in practical stock investing. To
effectively reduce risks, we draw inspiration from adversarial learning and
propose a novel Split Variational Adversarial Training (SVAT) method for
risk-aware stock recommendation. Essentially, SVAT encourages the stock model
to be sensitive to adversarial perturbations of risky stock examples and
enhances the model's risk awareness by learning from perturbations. To generate
representative adversarial examples as risk indicators, we devise a variational
perturbation generator to model diverse risk factors. Particularly, the
variational architecture enables our method to provide a rough risk
quantification for investors, showing an additional advantage of
interpretability. Experiments on several real-world stock market datasets
demonstrate the superiority of our SVAT method. By lowering the volatility of
the stock recommendation model, SVAT effectively reduces investment risks and
outperforms state-of-the-art baselines by more than 30% in terms of
risk-adjusted profits. All the experimental data and source code are available
at
https://drive.google.com/drive/folders/14AdM7WENEvIp5x5bV3zV_i4Aev21C9g6?usp=sharing.
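Below is a minimal, hypothetical sketch of the training scheme the abstract describes: a stock-ranking model trained adversarially against a variational perturbation generator, with perturbations applied only to examples flagged as risky. All names (StockScorer, PerturbationGenerator, svat_step), the pairwise ranking loss, the risk mask, and the way the objectives are combined are assumptions for illustration; this is not the authors' code, which is linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class StockScorer(nn.Module):
    """Scores each stock from its feature vector; higher score = stronger recommendation."""

    def __init__(self, n_features, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, x):                       # x: (n_stocks, n_features)
        return self.net(x).squeeze(-1)          # (n_stocks,)


class PerturbationGenerator(nn.Module):
    """Variational generator: samples bounded perturbations from a learned Gaussian."""

    def __init__(self, n_features, hidden=64, epsilon=0.05):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_features, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, n_features)
        self.log_var = nn.Linear(hidden, n_features)
        self.epsilon = epsilon

    def forward(self, x):
        h = self.encoder(x)
        mu, log_var = self.mu(h), self.log_var(h)
        std = torch.exp(0.5 * log_var)
        delta = mu + std * torch.randn_like(std)            # reparameterization trick
        kl = -0.5 * (1 + log_var - mu.pow(2) - log_var.exp()).sum(-1).mean()
        return self.epsilon * torch.tanh(delta), kl, std


def ranking_loss(scores, returns):
    """Pairwise hinge loss: stocks with higher returns should receive higher scores."""
    ds = scores.unsqueeze(0) - scores.unsqueeze(1)
    dr = returns.unsqueeze(0) - returns.unsqueeze(1)
    return F.relu(-ds * torch.sign(dr)).mean()


def svat_step(scorer, generator, opt_s, opt_g, x, returns, risk_mask, beta=1e-3):
    """One adversarial step; `risk_mask` (0/1 per stock) marks the risky examples
    that receive perturbations, mimicking the split treatment of risky stocks."""
    # 1) Generator crafts perturbations that degrade the ranking on risky stocks.
    delta, kl, _ = generator(x)
    x_adv = x + risk_mask.unsqueeze(-1) * delta
    g_loss = -ranking_loss(scorer(x_adv), returns) + beta * kl
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

    # 2) Scorer learns to rank well on both clean and perturbed inputs.
    delta, _, std = generator(x)
    x_adv = x + risk_mask.unsqueeze(-1) * delta.detach()
    s_loss = ranking_loss(scorer(x), returns) + ranking_loss(scorer(x_adv), returns)
    opt_s.zero_grad()
    s_loss.backward()
    opt_s.step()

    # The learned per-stock std offers a rough, interpretable risk estimate.
    return s_loss.item(), std.mean(-1).detach()
```

In a full training loop, the risk mask might be derived from realized drawdowns or return volatility, and the learned standard deviations could be reported alongside the recommendations as the rough risk quantification mentioned in the abstract.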
Related papers
- Mirror Gradient: Towards Robust Multimodal Recommender Systems via Exploring Flat Local Minima [54.06000767038741]
We analyze multimodal recommender systems from the novel perspective of flat local minima.
We propose a concise yet effective gradient strategy called Mirror Gradient (MG).
We find that the proposed MG can complement existing robust training methods and be easily extended to diverse advanced recommendation models.
arXiv Detail & Related papers (2024-02-17T12:27:30Z)
- Diffusion Variational Autoencoder for Tackling Stochasticity in Multi-Step Regression Stock Price Prediction [54.21695754082441]
Multi-step stock price prediction over a long-term horizon is crucial for forecasting a stock's volatility.
Current solutions to multi-step stock price prediction are mostly designed for single-step, classification-based predictions.
We combine a deep hierarchical variational autoencoder (VAE) with diffusion probabilistic techniques to perform sequence-to-sequence (seq2seq) stock prediction.
Our model is shown to outperform state-of-the-art solutions in terms of its prediction accuracy and variance.
arXiv Detail & Related papers (2023-08-18T16:21:15Z)
- Safe Deployment for Counterfactual Learning to Rank with Exposure-Based Risk Minimization [63.93275508300137]
We introduce a novel risk-aware Counterfactual Learning To Rank method with theoretical guarantees for safe deployment.
Our experimental results demonstrate that the proposed method effectively avoids initial periods of poor performance when little data is available.
arXiv Detail & Related papers (2023-04-26T15:54:23Z)
- Robust Risk-Aware Option Hedging [2.405471533561618]
We showcase the potential of robust risk-aware reinforcement learning (RL) in mitigating the risks associated with path-dependent financial derivatives.
We apply this methodology to the hedging of barrier options, and highlight how the optimal hedging strategy undergoes distortions as the agent moves from being risk-averse to risk-seeking.
arXiv Detail & Related papers (2023-03-27T13:57:13Z)
- Just-In-Time Learning for Operational Risk Assessment in Power Grids [12.939739997360016]
In a grid with a significant share of renewable generation, operators will need additional tools to evaluate the operational risk.
This paper proposes a Just-In-Time Risk Assessment Learning Framework (JITRALF) as an alternative.
JITRALF trains risk surrogates, one for each hour in the day, using Machine Learning (ML) to predict the quantities needed to estimate risk.
arXiv Detail & Related papers (2022-09-26T15:11:27Z)
- Efficient Risk-Averse Reinforcement Learning [79.61412643761034]
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
arXiv Detail & Related papers (2022-05-10T19:40:52Z)
- Deep Risk Model: A Deep Learning Solution for Mining Latent Risk Factors to Improve Covariance Matrix Estimation [8.617532047238461]
We propose a deep learning solution to effectively "design" risk factors with neural networks.
Our method can obtain $1.9\%$ higher explained variance measured by $R^2$ and also reduce the risk of a global minimum variance portfolio.
arXiv Detail & Related papers (2021-07-12T05:30:50Z)
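For context on the metric in the entry above, the snippet below shows one way to compute "explained variance measured by $R^2$" for a set of risk factors: regress cross-sectional stock returns on factor exposures and measure the share of return variance the factors capture. The synthetic data, shapes, and plain least-squares fit are illustrative assumptions, not the paper's evaluation code.

```python
import numpy as np


def explained_variance_r2(returns, exposures):
    """returns: (n_stocks,) realized returns; exposures: (n_stocks, n_factors)."""
    X = np.column_stack([np.ones(len(returns)), exposures])   # add intercept
    coef, *_ = np.linalg.lstsq(X, returns, rcond=None)
    residuals = returns - X @ coef
    return 1.0 - residuals.var() / returns.var()               # share of variance explained


# Toy example: 500 stocks, 10 latent risk factors, noisy factor-driven returns.
rng = np.random.default_rng(0)
exposures = rng.normal(size=(500, 10))
returns = exposures @ rng.normal(size=10) * 0.01 + rng.normal(scale=0.02, size=500)
print(f"explained variance R^2: {explained_variance_r2(returns, exposures):.3f}")
```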
- Learning Risk Preferences from Investment Portfolios Using Inverse Optimization [25.19470942583387]
This paper presents a novel approach to measuring risk preference from existing portfolios using inverse optimization.
We demonstrate our methods on real market data that consists of 20 years of asset pricing and 10 years of mutual fund portfolio holdings.
arXiv Detail & Related papers (2020-10-04T21:29:29Z)
- Mean-Variance Policy Iteration for Risk-Averse Reinforcement Learning [75.17074235764757]
We present a framework for risk-averse control in a discounted infinite horizon MDP.
MVPI enjoys great flexibility in that any policy evaluation method and risk-neutral control method can be dropped in for risk-averse control off the shelf.
This flexibility reduces the gap between risk-neutral control and risk-averse control and is achieved by working on a novel augmented MDP.
arXiv Detail & Related papers (2020-04-22T22:23:44Z)
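The MVPI entry above hinges on turning a mean-variance objective into an ordinary MDP so that any risk-neutral method can be reused. The sketch below illustrates that augmented-reward idea with a gym-style wrapper; the specific shaping r - lam*r^2 + 2*lam*y*r (with y tracking the mean per-step reward, following the usual Fenchel-duality argument for per-step reward variance), the wrapper interface, and all names are assumptions for illustration, not the paper's exact construction.

```python
class MeanVarianceRewardWrapper:
    """Replaces each reward r with r - lam*r**2 + 2*lam*y*r, where y tracks E[r],
    so a risk-neutral agent on the wrapped env optimizes mean minus lam * variance."""

    def __init__(self, env, lam=0.1, y_lr=0.01):
        self.env = env          # assumed gym-style: reset() and step(action)
        self.lam = lam
        self.y_lr = y_lr
        self.y = 0.0            # running estimate of the mean per-step reward

    def reset(self):
        return self.env.reset()

    def step(self, action):
        obs, r, done, info = self.env.step(action)
        self.y += self.y_lr * (r - self.y)                      # update dual variable
        r_aug = r - self.lam * r ** 2 + 2 * self.lam * self.y * r
        return obs, r_aug, done, info
```

Any off-the-shelf risk-neutral agent can then be trained on the wrapped environment to obtain risk-averse behavior, which is the flexibility the summary refers to.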
This list is automatically generated from the titles and abstracts of the papers on this site.