Robust Risk-Aware Option Hedging
- URL: http://arxiv.org/abs/2303.15216v3
- Date: Tue, 26 Dec 2023 18:31:56 GMT
- Title: Robust Risk-Aware Option Hedging
- Authors: David Wu, Sebastian Jaimungal
- Abstract summary: We showcase the potential of robust risk-aware reinforcement learning (RL) in mitigating the risks associated with path-dependent financial derivatives.
We apply this methodology to the hedging of barrier options, and highlight how the optimal hedging strategy undergoes distortions as the agent moves from being risk-averse to risk-seeking.
- Score: 2.405471533561618
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The objectives of option hedging/trading extend beyond mere protection
against downside risks, with a desire to seek gains also driving agents'
strategies. In this study, we showcase the potential of robust risk-aware
reinforcement learning (RL) in mitigating the risks associated with
path-dependent financial derivatives. We accomplish this by leveraging a policy
gradient approach that optimises robust risk-aware performance criteria. We
specifically apply this methodology to the hedging of barrier options, and
highlight how the optimal hedging strategy undergoes distortions as the agent
moves from being risk-averse to risk-seeking, and how the agent robustifies
their strategy. We further investigate the performance of the hedge
when the data generating process (DGP) varies from the training DGP, and
demonstrate that the robust strategies outperform the non-robust ones.
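To make the setup concrete, the sketch below is a simplified, self-contained illustration rather than the authors' implementation: it hedges an up-and-out barrier call under simulated GBM with a small neural policy, trained by pathwise gradients on a CVaR objective as a stand-in for the paper's robust risk-aware criterion. All names and parameters (S0, K, B, sigma, alpha, the network size) are illustrative assumptions.

```python
# Minimal illustrative sketch (not the authors' implementation): hedge an
# up-and-out barrier call under simulated GBM with a small neural policy,
# trained by pathwise gradients on a CVaR-style risk measure of hedging loss.
import torch
import torch.nn as nn

torch.manual_seed(0)

S0, K, B = 100.0, 100.0, 120.0      # spot, strike, up-and-out barrier
sigma, T, n_steps = 0.2, 0.5, 50    # vol, maturity (years), rebalancing dates
dt = T / n_steps
n_paths, alpha = 4096, 0.9          # Monte Carlo paths, CVaR level

policy = nn.Sequential(             # (time-to-maturity, spot, alive flag) -> hedge position
    nn.Linear(3, 32), nn.ReLU(),
    nn.Linear(32, 32), nn.ReLU(),
    nn.Linear(32, 1),
)
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def simulate_hedge():
    S = torch.full((n_paths,), S0)
    alive = torch.ones(n_paths)                # 1 while the barrier has not been breached
    pnl = torch.zeros(n_paths)
    for i in range(n_steps):
        ttm = torch.full((n_paths,), T - i * dt)
        x = torch.stack([ttm, S / S0, alive], dim=1)
        delta = policy(x).squeeze(-1)          # hedge position in the underlying
        z = torch.randn(n_paths)
        # zero-rate, driftless GBM step
        S_next = S * torch.exp((-0.5 * sigma**2) * dt + sigma * dt**0.5 * z)
        pnl = pnl + delta * (S_next - S)       # self-financing hedge P&L
        alive = alive * (S_next < B).float()   # discrete knock-out monitoring
        S = S_next
    payoff = alive * torch.clamp(S - K, min=0.0)   # up-and-out call payoff
    return payoff - pnl                            # hedging loss per path

for step in range(200):
    loss_paths = simulate_hedge()
    # Rockafellar-Uryasev style CVaR estimate; VaR is detached so the gradient
    # flows only through the path losses.
    var = torch.quantile(loss_paths, alpha).detach()
    cvar = var + torch.mean(torch.clamp(loss_paths - var, min=0.0)) / (1 - alpha)
    opt.zero_grad()
    cvar.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step:3d}  CVaR_{alpha:.0%} of hedging loss: {cvar.item():.3f}")
```

In the paper itself, the CVaR surrogate would be replaced by a robust rank-dependent performance criterion evaluated over a Wasserstein ball around the hedging-loss distribution (see the Robust Risk-Aware Reinforcement Learning entry below).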
Related papers
- Can Perturbations Help Reduce Investment Risks? Risk-Aware Stock
Recommendation via Split Variational Adversarial Training [44.7991257631318]
We propose a novel Split Variational Adversarial Training (SVAT) method for risk-aware stock recommendation.
By lowering the volatility of the stock recommendation model, SVAT effectively reduces investment risks and outperforms state-of-the-art baselines by more than 30% in terms of risk-adjusted profits.
arXiv Detail & Related papers (2023-04-20T12:10:12Z) - A Survey of Risk-Aware Multi-Armed Bandits [84.67376599822569]
We review various risk measures of interest, and comment on their properties.
We consider algorithms for the regret minimization setting, where the exploration-exploitation trade-off manifests.
We conclude by commenting on persisting challenges and fertile areas for future research.
arXiv Detail & Related papers (2022-05-12T02:20:34Z) - Efficient Risk-Averse Reinforcement Learning [79.61412643761034]
In risk-averse reinforcement learning (RL), the goal is to optimize some risk measure of the returns.
We prove that under certain conditions this inevitably leads to a local-optimum barrier, and propose a soft risk mechanism to bypass it.
We demonstrate improved risk aversion in maze navigation, autonomous driving, and resource allocation benchmarks.
arXiv Detail & Related papers (2022-05-10T19:40:52Z) - Reinforcement learning for options on target volatility funds [0.0]
We deal with the funding costs arising from hedging the risky securities underlying a target volatility strategy (TVS).
We derive an analytical solution of the problem in the Black and Scholes (BS) scenario.
Then we use Reinforcement Learning (RL) techniques to determine the fund composition leading to the most conservative price under the local volatility (LV) model.
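For readers unfamiliar with a TVS, the allocation rule below is a common textbook formulation, given as our own illustration rather than something taken from the paper: the weight in the risky basket scales inversely with its estimated volatility, possibly capped at a maximum leverage.

```latex
% Illustrative TVS allocation rule (not from the paper): \bar{\sigma} is the
% volatility target, \hat{\sigma}_t the estimated volatility of the risky
% basket, and L_{\max} a leverage cap.
w_t \;=\; \min\!\left( \frac{\bar{\sigma}}{\hat{\sigma}_t},\; L_{\max} \right)
```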
arXiv Detail & Related papers (2021-12-03T10:55:11Z) - Robust Risk-Aware Reinforcement Learning [0.0]
We present a reinforcement learning (RL) approach for robust optimisation of risk-aware performance criteria.
We assess the value of a policy using rank dependent expected utility (RDEU).
To robustify optimal policies against model uncertainty, we assess a policy not by its distribution, but by the worst possible distribution that lies within a Wasserstein ball around it.
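In symbols (our notation, a sketch rather than the paper's exact definitions), the rank-dependent expected utility of a performance distribution F, and its robustified value over a Wasserstein ball, take the form:

```latex
% Sketch of the objective described above; u is an increasing utility,
% \gamma a distortion (probability-weighting) function on [0,1], and
% \mathcal{B}_\varepsilon(F) is the Wasserstein ball of radius \varepsilon
% around the distribution F of the policy's performance.
\mathrm{RDEU}[F] \;=\; \int_0^1 u\!\big(F^{-1}(q)\big)\, \mathrm{d}\gamma(q),
\qquad
\text{robust value} \;=\; \inf_{G \,\in\, \mathcal{B}_\varepsilon(F)} \mathrm{RDEU}[G].
```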
arXiv Detail & Related papers (2021-08-23T20:56:34Z) - Policy Gradient Bayesian Robust Optimization for Imitation Learning [49.881386773269746]
We derive a novel policy gradient-style robust optimization approach, PG-BROIL, to balance expected performance and risk.
Results suggest PG-BROIL can produce a family of behaviors ranging from risk-neutral to risk-averse.
arXiv Detail & Related papers (2021-06-11T16:49:15Z) - Risk-Averse Offline Reinforcement Learning [46.383648750385575]
Training Reinforcement Learning (RL) agents in high-stakes applications might be prohibitive due to the risk associated with exploration.
We present the Offline Risk-Averse Actor-Critic (O-RAAC), a model-free RL algorithm that is able to learn risk-averse policies in a fully offline setting.
arXiv Detail & Related papers (2021-02-10T10:27:49Z) - Time your hedge with Deep Reinforcement Learning [0.0]
Deep Reinforcement Learning (DRL) can tackle this challenge by creating a dynamic dependency between market information and hedging-strategy allocation decisions.
We present a realistic and augmented DRL framework that: (i) uses additional contextual information to decide an action, (ii) has a one-period lag between observations and actions to account for the one-day turnover lag common asset managers face when rebalancing their hedge, (iii) is fully tested in terms of stability and robustness thanks to a repeated train/test method called anchored walk-forward training, similar in spirit to k-fold cross-validation for time series, and (iv) allows managing the leverage of our hedging strategy.
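As an illustration of item (iii), a minimal anchored walk-forward splitter might look like the following (our sketch, not the authors' code): the training window stays anchored at the first observation and expands, while the test window rolls forward.

```python
# Illustrative anchored walk-forward splitter (not the authors' code): the
# training window is anchored at the first observation and expands, while the
# test window rolls forward -- a time-series analogue of k-fold CV.
from typing import Iterator, Tuple
import numpy as np

def anchored_walk_forward(n_obs: int, n_splits: int,
                          test_size: int) -> Iterator[Tuple[np.ndarray, np.ndarray]]:
    """Yield (train_idx, test_idx) pairs with an expanding, anchored train set."""
    first_test_start = n_obs - n_splits * test_size
    if first_test_start <= 0:
        raise ValueError("not enough observations for the requested splits")
    for k in range(n_splits):
        test_start = first_test_start + k * test_size
        train_idx = np.arange(0, test_start)              # anchored at t = 0
        test_idx = np.arange(test_start, test_start + test_size)
        yield train_idx, test_idx

# Example: 1000 daily observations, 4 splits of 100 test days each.
for train_idx, test_idx in anchored_walk_forward(1000, 4, 100):
    print(f"train [0, {train_idx[-1]}]  ->  test [{test_idx[0]}, {test_idx[-1]}]")
```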
arXiv Detail & Related papers (2020-09-16T06:43:41Z) - Mean-Variance Policy Iteration for Risk-Averse Reinforcement Learning [75.17074235764757]
We present mean-variance policy iteration (MVPI), a framework for risk-averse control in a discounted infinite-horizon MDP.
MVPI enjoys great flexibility in that any policy evaluation method and risk-neutral control method can be dropped in for risk-averse control off the shelf.
This flexibility reduces the gap between risk-neutral control and risk-averse control and is achieved by working on a novel augmented MDP.
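A plausible sketch of how such an augmentation can be constructed (our derivation of a standard Fenchel-duality trick; the paper's exact augmented MDP may differ) is shown below for a reward R and the mean-variance objective E[R] - lambda * Var(R).

```latex
% Standard Fenchel-duality rewrite of a mean-variance objective (illustrative;
% not necessarily the paper's exact augmented MDP). Using
% Var(R) = E[R^2] - (E[R])^2 and (E[R])^2 = max_y ( 2 y E[R] - y^2 ):
\mathbb{E}[R] - \lambda\,\mathrm{Var}(R)
  \;=\; \max_{y}\; \Big( \mathbb{E}\big[\, R - \lambda R^2 + 2\lambda\, y\, R \,\big] \;-\; \lambda y^2 \Big).
% For a fixed dual variable y, the inner expectation is a risk-neutral RL
% objective with augmented per-step reward
%   \hat r(s,a) = r(s,a) - \lambda\, r(s,a)^2 + 2\lambda\, y\, r(s,a),
% so any off-the-shelf policy-evaluation or control method can be applied.
```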
arXiv Detail & Related papers (2020-04-22T22:23:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.