Deep Hedging, Generative Adversarial Networks, and Beyond
- URL: http://arxiv.org/abs/2103.03913v1
- Date: Fri, 5 Mar 2021 19:41:11 GMT
- Title: Deep Hedging, Generative Adversarial Networks, and Beyond
- Authors: Hyunsu Kim
- Abstract summary: This paper introduces a potential application of deep learning and artificial intelligence in finance, particularly in hedging.
We present a framework of a direct policy search reinforcement agent replicating a simple vanilla European call option and use the agent for model-free delta hedging.
We believe that this RL-based hedging framework is a more efficient way of performing hedging in practice, addressing some of the inherent issues with the classic models.
- Score: 2.253930064459394
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces a potential application of deep learning and artificial
intelligence in finance, particularly in hedging. The major goal encompasses
two objectives. First, we present a framework of a direct policy search
reinforcement agent replicating a simple vanilla European call option and use
the agent for model-free delta hedging. Through the first part of this paper,
we demonstrate how RNN-based direct policy search RL agents can perform delta
hedging better than the classic Black-Scholes model in Q-world based on
parametrically generated underlying scenarios, particularly minimizing tail
exposures at higher values of the risk aversion parameter. In the second part
of this paper, with the non-parametric paths generated by time-series GANs
from multi-variate temporal space, we illustrate its delta hedging performance
for various values of the risk aversion parameter via the basic RNN-based RL
agent introduced in the first part of the paper, showing that we can
potentially achieve higher average profits with a rather evident risk-return
trade-off. We believe that this RL-based hedging framework is a more efficient
way of performing hedging in practice, addressing some of the inherent issues
with the classic models, providing promising and intuitive hedging results,
and rendering a flexible framework that can be easily paired with other
AI-based models for many other purposes.
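For reference, the classical baseline the abstract compares against is Black-Scholes delta hedging, where the hedge ratio of a European call is N(d1). The sketch below, using only the Python standard library, prices a call and delta-hedges a short position along one simulated GBM path; all parameter values and helper names are illustrative and not taken from the paper.

```python
import math
import random

def norm_cdf(x):
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call_delta(S, K, T, r, sigma):
    # Black-Scholes delta of a European call: N(d1)
    if T <= 0:
        return 1.0 if S > K else 0.0
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    return norm_cdf(d1)

def bs_call_price(S, K, T, r, sigma):
    # Black-Scholes price of a European call
    if T <= 0:
        return max(S - K, 0.0)
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def delta_hedge_error(S0=100.0, K=100.0, T=0.25, r=0.0, sigma=0.2,
                      steps=63, seed=0):
    # Delta-hedge a short call along one GBM path; returns the terminal
    # replication error (stock + cash position minus the payoff owed).
    rng = random.Random(seed)
    dt = T / steps
    S = S0
    delta = bs_call_delta(S, K, T, r, sigma)
    # Start the self-financing portfolio at the option premium
    cash = bs_call_price(S, K, T, r, sigma) - delta * S
    for i in range(1, steps + 1):
        z = rng.gauss(0.0, 1.0)
        S *= math.exp((r - 0.5 * sigma**2) * dt + sigma * math.sqrt(dt) * z)
        new_delta = bs_call_delta(S, K, T - i * dt, r, sigma)
        cash = cash * math.exp(r * dt) - (new_delta - delta) * S  # rebalance
        delta = new_delta
    payoff = max(S - K, 0.0)
    return delta * S + cash - payoff
```

With discrete (e.g. daily) rebalancing the replication error is small but nonzero; the RL agent described above learns the rebalancing policy directly from simulated or GAN-generated paths instead of using the closed-form N(d1).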
Related papers
- KAN based Autoencoders for Factor Models [13.512750745176664]
Inspired by recent advances in Kolmogorov-Arnold Networks (KANs), we introduce a novel approach to latent factor conditional asset pricing models.
Our method introduces a KAN-based autoencoder that surpasses existing models in both accuracy and interpretability.
Our model offers enhanced flexibility in approximating exposures as nonlinear functions of asset characteristics, while simultaneously providing users with an intuitive framework for interpreting latent factors.
arXiv Detail & Related papers (2024-08-04T02:02:09Z)
- Reparameterized Policy Learning for Multimodal Trajectory Optimization [61.13228961771765]
We investigate the challenge of parametrizing policies for reinforcement learning in high-dimensional continuous action spaces.
We propose a principled framework that models the continuous RL policy as a generative model of optimal trajectories.
We present a practical model-based RL method, which leverages the multimodal policy parameterization and learned world model.
arXiv Detail & Related papers (2023-07-20T09:05:46Z)
- REX: Rapid Exploration and eXploitation for AI Agents [103.68453326880456]
We propose an enhanced approach for Rapid Exploration and eXploitation for AI Agents called REX.
REX introduces an additional layer of rewards and integrates concepts similar to Upper Confidence Bound (UCB) scores, leading to more robust and efficient AI agent performance.
arXiv Detail & Related papers (2023-07-18T04:26:33Z)
- Introducing Foundation Models as Surrogate Models: Advancing Towards More Practical Adversarial Attacks [15.882687207499373]
No-box adversarial attacks are becoming more practical and challenging for AI systems.
This paper recasts adversarial attacks as a downstream task by introducing foundation models as surrogate models.
arXiv Detail & Related papers (2023-07-13T08:10:48Z)
- Simplifying Model-based RL: Learning Representations, Latent-space Models, and Policies with One Objective [142.36200080384145]
We propose a single objective that jointly optimizes a latent-space model and policy to achieve high returns while remaining self-consistent.
We demonstrate that the resulting algorithm matches or improves the sample-efficiency of the best prior model-based and model-free RL methods.
arXiv Detail & Related papers (2022-09-18T03:51:58Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with DRO using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Estimating risks of option books using neural-SDE market models [6.319314191226118]
We use an arbitrage-free neural-SDE market model to produce realistic scenarios for the joint dynamics of multiple European options on a single underlying.
We show that our models are more computationally efficient and accurate for evaluating the Value-at-Risk (VaR) of option portfolios, with better coverage performance and less procyclicality than standard filtered historical simulation approaches.
arXiv Detail & Related papers (2022-02-15T02:39:42Z)
- Reinforcement Learning as One Big Sequence Modeling Problem [84.84564880157149]
Reinforcement learning (RL) is typically concerned with estimating single-step policies or single-step models.
We view RL as a sequence modeling problem, with the goal being to predict a sequence of actions that leads to a sequence of high rewards.
arXiv Detail & Related papers (2021-06-03T17:58:51Z)
- Modeling the Second Player in Distributionally Robust Optimization [90.25995710696425]
We argue for the use of neural generative models to characterize the worst-case distribution.
This approach poses a number of implementation and optimization challenges.
We find that the proposed approach yields models that are more robust than comparable baselines.
arXiv Detail & Related papers (2021-03-18T14:26:26Z)
- Detecting and adapting to crisis pattern with context based Deep Reinforcement Learning [6.224519494738852]
We present an innovative DRL framework consisting of two sub-networks fed respectively with the past performances and standard deviations of portfolio strategies, as well as additional contextual features.
Results on the test set show this approach substantially outperforms traditional portfolio optimization methods like Markowitz and is able to detect and anticipate crises like the Covid-19 crisis.
arXiv Detail & Related papers (2020-09-07T12:11:08Z)
- Robust pricing and hedging via neural SDEs [0.0]
We develop and analyse novel algorithms needed for efficient use of neural SDEs.
We find robust bounds for prices of derivatives and the corresponding hedging strategies while incorporating relevant market data.
Neural SDEs allow consistent calibration under both the risk-neutral and the real-world measures.
arXiv Detail & Related papers (2020-07-08T14:33:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.