Reinforcement learning for options on target volatility funds
- URL: http://arxiv.org/abs/2112.01841v1
- Date: Fri, 3 Dec 2021 10:55:11 GMT
- Title: Reinforcement learning for options on target volatility funds
- Authors: Roberto Daluiso, Emanuele Nastasi, Andrea Pallavicini, Stefano Polo
- Abstract summary: We deal with the funding costs arising from hedging the risky securities underlying a target volatility strategy (TVS).
We derive an analytical solution of the problem in the Black and Scholes (BS) scenario.
Then we use Reinforcement Learning (RL) techniques to determine the fund composition leading to the most conservative price under the local volatility (LV) model.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we deal with the funding costs arising from hedging the risky
securities underlying a target volatility strategy (TVS), a portfolio of risky
assets and a risk-free one dynamically rebalanced in order to keep the realized
volatility of the portfolio at a certain level. The uncertainty in the composition
of the TVS risky portfolio, along with the difference in hedging costs for each
component, requires solving a control problem to evaluate the option prices. We
derive an analytical solution of the problem in the Black and Scholes (BS)
scenario. Then we use Reinforcement Learning (RL) techniques to determine the
fund composition leading to the most conservative price under the local
volatility (LV) model, for which an a priori solution is not available. We show
that the performance of the RL agents is compatible with that obtained by
applying the BS analytical strategy path-wise to the TVS dynamics, which
therefore appears competitive also in the LV scenario.
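The rebalancing rule behind a target volatility strategy can be illustrated with a minimal sketch: scale the allocation to the risky leg so that its trailing realized volatility matches the target, holding the remainder in the risk-free asset. This is a generic toy illustration; the function name, window, leverage cap, and parameter values are assumptions, not taken from the paper.

```python
import numpy as np

def target_vol_weights(returns, target_vol=0.10, window=20, max_leverage=2.0):
    """Toy target volatility strategy (TVS) rebalancing rule.

    At each step the weight on the risky portfolio is scaled so that its
    trailing realized volatility matches `target_vol`; the remainder sits
    in the risk-free asset. All parameters here are illustrative.
    """
    returns = np.asarray(returns, dtype=float)
    weights = np.full(len(returns), np.nan)  # undefined until a full window exists
    for t in range(window, len(returns)):
        # annualized realized volatility over the trailing window
        realized = returns[t - window:t].std(ddof=1) * np.sqrt(252)
        # leverage on the risky leg, capped to avoid extreme positions
        weights[t] = min(target_vol / max(realized, 1e-8), max_leverage)
    return weights

rng = np.random.default_rng(0)
daily = rng.normal(0.0, 0.02, size=250)  # roughly 32% annualized volatility
w = target_vol_weights(daily)
```

With simulated volatility well above the 10% target, the rule deleverages the risky leg to a fraction of the portfolio; when realized volatility falls below the target, it levers up, subject to the cap.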
Related papers
- Deep Reinforcement Learning and Mean-Variance Strategies for Responsible Portfolio Optimization
We study the use of deep reinforcement learning for responsible portfolio optimization by incorporating ESG states and objectives.
Our results show that deep reinforcement learning policies can provide competitive performance against mean-variance approaches for responsible portfolio allocation.
arXiv Detail & Related papers (2024-03-25T12:04:03Z)
- Model-Based Epistemic Variance of Values for Risk-Aware Policy Optimization
We consider the problem of quantifying uncertainty over expected cumulative rewards in model-based reinforcement learning.
We propose a new uncertainty Bellman equation (UBE) whose solution converges to the true posterior variance over values.
We introduce a general-purpose policy optimization algorithm, Q-Uncertainty Soft Actor-Critic (QU-SAC) that can be applied for either risk-seeking or risk-averse policy optimization.
arXiv Detail & Related papers (2023-12-07T15:55:58Z)
- Diffusion Variational Autoencoder for Tackling Stochasticity in Multi-Step Regression Stock Price Prediction
Multi-step stock price prediction over a long-term horizon is crucial for forecasting its volatility.
Current solutions to multi-step stock price prediction are mostly designed for single-step, classification-based predictions.
We combine a deep hierarchical variational-autoencoder (VAE) and diffusion probabilistic techniques to do seq2seq stock prediction.
Our model is shown to outperform state-of-the-art solutions in terms of its prediction accuracy and variance.
arXiv Detail & Related papers (2023-08-18T16:21:15Z)
- Can Perturbations Help Reduce Investment Risks? Risk-Aware Stock Recommendation via Split Variational Adversarial Training
We propose a novel Split Variational Adversarial Training (SVAT) method for risk-aware stock recommendation.
By lowering the volatility of the stock recommendation model, SVAT effectively reduces investment risks and outperforms state-of-the-art baselines by more than 30% in terms of risk-adjusted profits.
arXiv Detail & Related papers (2023-04-20T12:10:12Z)
- Robust Risk-Aware Option Hedging
We showcase the potential of robust risk-aware reinforcement learning (RL) in mitigating the risks associated with path-dependent financial derivatives.
We apply this methodology to the hedging of barrier options, and highlight how the optimal hedging strategy undergoes distortions as the agent moves from being risk-averse to risk-seeking.
arXiv Detail & Related papers (2023-03-27T13:57:13Z)
- Asset Allocation: From Markowitz to Deep Reinforcement Learning
Asset allocation is an investment strategy that aims to balance risk and reward by constantly redistributing the portfolio's assets.
We conduct an extensive benchmark study to determine the efficacy and reliability of a number of optimization techniques.
arXiv Detail & Related papers (2022-07-14T14:44:04Z)
- Efficient Risk-Averse Reinforcement Learning
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
arXiv Detail & Related papers (2022-05-10T19:40:52Z)
- Learning Strategies in Decentralized Matching Markets under Uncertain Preferences
We study the problem of decision-making in the setting of a scarcity of shared resources when the preferences of agents are unknown a priori.
Our approach is based on the representation of preferences in a reproducing kernel Hilbert space.
We derive optimal strategies that maximize agents' expected payoffs.
arXiv Detail & Related papers (2020-10-29T03:08:22Z)
- Option Hedging with Risk Averse Reinforcement Learning
We show how risk-averse reinforcement learning can be used to hedge options.
We apply a state-of-the-art risk-averse algorithm to a vanilla option hedging environment.
arXiv Detail & Related papers (2020-10-23T09:08:24Z)
- Learning Risk Preferences from Investment Portfolios Using Inverse Optimization
This paper presents a novel approach of measuring risk preference from existing portfolios using inverse optimization.
We demonstrate our methods on real market data that consists of 20 years of asset pricing and 10 years of mutual fund portfolio holdings.
arXiv Detail & Related papers (2020-10-04T21:29:29Z)
- Time your hedge with Deep Reinforcement Learning
Deep Reinforcement Learning (DRL) can tackle this challenge by creating a dynamic dependency between market information and hedging strategies allocation decisions.
We present a realistic and augmented DRL framework that: (i) uses additional contextual information to decide an action; (ii) has a one-period lag between observations and actions, to account for the one-day turnover lag with which common asset managers rebalance their hedges; (iii) is fully tested in terms of stability and robustness thanks to a repetitive train-test method called anchored walk forward training, similar in spirit to k-fold cross validation for time series; and (iv) allows managing the leverage of our hedging strategy.
arXiv Detail & Related papers (2020-09-16T06:43:41Z)
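The anchored walk forward training mentioned in the entry above can be sketched as repeated splits in which the training window is always anchored at the start of the series and only the test window rolls forward, so the model is never evaluated on data that precedes its training set. This is a generic illustration of the splitting scheme, not the authors' implementation; the function name and arguments are assumptions.

```python
def anchored_walk_forward(n_samples, n_splits, test_size):
    """Yield (train_indices, test_indices) pairs for anchored walk-forward splits.

    Unlike plain k-fold cross validation, each training set starts at index 0
    and grows as the test window advances, preserving temporal ordering.
    """
    splits = []
    for k in range(n_splits):
        test_start = n_samples - (n_splits - k) * test_size
        if test_start <= 0:
            raise ValueError("not enough samples for the requested splits")
        train = list(range(0, test_start))
        test = list(range(test_start, test_start + test_size))
        splits.append((train, test))
    return splits

# 100 observations, 3 splits with 10 test points each:
# the training set grows from 70 to 90 points as the test window rolls forward.
splits = anchored_walk_forward(100, 3, 10)
```

Because every training window ends strictly before its test window begins, the scheme avoids the look-ahead leakage that shuffled k-fold splits would introduce in time series.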
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.