A generative adversarial network approach to calibration of local
stochastic volatility models
- URL: http://arxiv.org/abs/2005.02505v3
- Date: Tue, 29 Sep 2020 09:53:10 GMT
- Authors: Christa Cuchiero and Wahid Khosrawi and Josef Teichmann
- Abstract summary: We propose a fully data-driven approach to calibrate local stochastic volatility (LSV) models.
We parametrize the leverage function by a family of feed-forward neural networks and learn their parameters directly from the available market option prices.
This should be seen in the context of neural SDEs and (causal) generative adversarial networks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a fully data-driven approach to calibrate local stochastic
volatility (LSV) models, circumventing in particular the ad hoc interpolation
of the volatility surface. To achieve this, we parametrize the leverage
function by a family of feed-forward neural networks and learn their parameters
directly from the available market option prices. This should be seen in the
context of neural SDEs and (causal) generative adversarial networks: we
generate volatility surfaces by specific neural SDEs, whose quality is assessed
by quantifying, possibly in an adversarial manner, distances to market prices.
The minimization of the calibration functional relies strongly on a variance
reduction technique based on hedging and deep hedging, which is interesting in
its own right: it allows the calculation of model prices and model implied
volatilities in an accurate way using only small sets of sample paths. For
numerical illustration we implement a SABR-type LSV model and conduct a
thorough statistical performance analysis on many samples of implied volatility
smiles, showing the accuracy and stability of the method.
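A rough sketch of this calibration loop follows; it is a toy illustration, not the authors' implementation. The single small network, the frozen variance process, the Black-Scholes delta used as the hedging control variate, and the market quotes are all assumed for the example (the paper uses a family of networks, a SABR-type variance process, and deep hedging strategies).

    import torch

    torch.manual_seed(0)
    normal = torch.distributions.Normal(0.0, 1.0)

    # Leverage function L(t, s) as a feed-forward network; Softplus keeps
    # the leverage positive.
    leverage = torch.nn.Sequential(
        torch.nn.Linear(2, 32), torch.nn.Tanh(),
        torch.nn.Linear(32, 1), torch.nn.Softplus(),
    )

    def bs_delta(t, s, strikes, T, sigma=0.2):
        # Black-Scholes delta used as a simple hedging instrument (zero rates).
        tau = max(T - t, 1e-6)
        d1 = (torch.log(s / strikes) + 0.5 * sigma**2 * tau) / (sigma * tau**0.5)
        return normal.cdf(d1)

    def mc_prices(strikes, n_paths=4096, n_steps=50, T=1.0, s0=1.0, v0=0.04):
        # Euler scheme for dS = S L(t, S) sqrt(V) dW, with V frozen at v0.
        dt = T / n_steps
        s = torch.full((n_paths, 1), s0)
        hedge = torch.zeros(n_paths, len(strikes))
        for i in range(n_steps):
            t = i * dt
            lev = leverage(torch.cat([torch.full_like(s, t), s], dim=1))
            dw = torch.randn(n_paths, 1) * dt**0.5
            s_new = s + s * lev * v0**0.5 * dw
            # Hedge increments have zero mean (S is a martingale here), so
            # subtracting them below leaves prices unbiased but less noisy.
            hedge = hedge + bs_delta(t, s, strikes, T) * (s_new - s)
            s = s_new
        payoff = torch.clamp(s - strikes, min=0.0) - hedge
        return payoff.mean(dim=0)

    strikes = torch.tensor([0.9, 1.0, 1.1])
    market = torch.tensor([0.13, 0.08, 0.05])   # placeholder market quotes
    opt = torch.optim.Adam(leverage.parameters(), lr=1e-3)
    for step in range(200):
        opt.zero_grad()
        loss = ((mc_prices(strikes) - market) ** 2).sum()
        loss.backward()
        opt.step()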
Related papers
- The Risk of Federated Learning to Skew Fine-Tuning Features and
Underperform Out-of-Distribution Robustness [50.52507648690234]
Federated learning risks skewing fine-tuning features and compromising the robustness of the model.
We introduce three robustness indicators and conduct experiments across diverse robustness datasets.
Our approach markedly enhances the robustness across diverse scenarios, encompassing various parameter-efficient fine-tuning methods.
arXiv Detail & Related papers (2024-01-25T09:18:51Z)
- Delta-AI: Local objectives for amortized inference in sparse graphical models [64.5938437823851]
We present a new algorithm for amortized inference in sparse probabilistic graphical models (PGMs).
Our approach is based on the observation that when the sampling of variables in a PGM is seen as a sequence of actions taken by an agent, sparsity of the PGM enables local credit assignment in the agent's policy learning objective.
We illustrate $\Delta$-AI's effectiveness for sampling from synthetic PGMs and training latent variable models with sparse factor structure.
arXiv Detail & Related papers (2023-10-03T20:37:03Z)
- Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these stability conditions.
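As a rough illustration of the minimum-separation condition (a greedy epsilon-net rather than the paper's cover-tree construction; the function name and parameters are assumptions):

    import numpy as np

    def separated_inducing_points(x, eps):
        # Keep a candidate only if it is at least eps away from every point
        # already selected, so the result has pairwise separation >= eps.
        selected = [x[0]]
        for point in x[1:]:
            if np.linalg.norm(np.asarray(selected) - point, axis=1).min() >= eps:
                selected.append(point)
        return np.asarray(selected)

    rng = np.random.default_rng(0)
    coords = rng.uniform(size=(500, 2))          # e.g., geospatial locations
    z = separated_inducing_points(coords, eps=0.15)
    print(z.shape[0], "inducing points with separation >= 0.15")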
arXiv Detail & Related papers (2022-10-14T15:20:17Z)
- Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure together.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z)
- Variational Inference with NoFAS: Normalizing Flow with Adaptive Surrogate for Computationally Expensive Models [7.217783736464403]
Use of sampling-based approaches such as Markov chain Monte Carlo may become intractable when each likelihood evaluation is computationally expensive.
New approaches combining variational inference with normalizing flows are characterized by a computational cost that grows only linearly with the dimensionality of the latent variable space.
We propose Normalizing Flow with Adaptive Surrogate (NoFAS), an optimization strategy that alternately updates the normalizing flow parameters and the weights of a neural network surrogate model.
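A minimal sketch of this alternating scheme, with a cheap stand-in for the expensive model and a single affine layer in place of a full normalizing flow; all names and hyperparameters are illustrative assumptions:

    import torch

    def expensive_model(z):            # stand-in: each call would be costly
        return torch.sin(3 * z) + z

    surrogate = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(),
                                    torch.nn.Linear(64, 1))
    mu = torch.zeros(1, requires_grad=True)      # affine "flow": z = mu + s*eps
    log_s = torch.zeros(1, requires_grad=True)
    y_obs, noise = torch.tensor([[0.8]]), 0.1
    opt_surr = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    opt_flow = torch.optim.Adam([mu, log_s], lr=1e-2)

    for step in range(500):
        # (1) Refit the surrogate on a few fresh true-model evaluations.
        z = (torch.randn(8, 1) * log_s.exp() + mu).detach()
        loss_surr = ((surrogate(z) - expensive_model(z)) ** 2).mean()
        opt_surr.zero_grad()
        loss_surr.backward()
        opt_surr.step()

        # (2) Update the flow against the cheap surrogate likelihood (-ELBO).
        eps = torch.randn(64, 1)
        z = mu + log_s.exp() * eps               # reparametrization trick
        log_q = (-0.5 * eps**2 - log_s).sum(1)   # up to an additive constant
        log_lik = (-0.5 * ((surrogate(z) - y_obs) / noise) ** 2).sum(1)
        log_prior = (-0.5 * z**2).sum(1)
        loss_flow = (log_q - log_lik - log_prior).mean()
        opt_flow.zero_grad()
        loss_flow.backward()
        opt_flow.step()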
arXiv Detail & Related papers (2021-08-28T14:31:45Z)
- Arbitrage-free neural-SDE market models [6.145654286950278]
We develop a nonparametric model for the European options book respecting underlying financial constraints.
We study the inference problem where a model is learnt from discrete time series data of stock and option prices.
We use neural networks as function approximators for the drift and diffusion of the modelled SDE system.
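The basic building block can be sketched as follows, with assumed toy dimensions; the paper's no-arbitrage constraints on the resulting option book are not enforced here:

    import torch

    # One-dimensional neural SDE dX = mu(t, X) dt + sigma(t, X) dW.
    drift = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                                torch.nn.Linear(32, 1))
    diffusion = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                                    torch.nn.Linear(32, 1), torch.nn.Softplus())

    def euler_paths(x0=1.0, T=1.0, n_steps=100, n_paths=1000):
        dt = T / n_steps
        x = torch.full((n_paths, 1), x0)
        for i in range(n_steps):
            tx = torch.cat([torch.full_like(x, i * dt), x], dim=1)
            dw = torch.randn(n_paths, 1) * dt**0.5
            x = x + drift(tx) * dt + diffusion(tx) * dw
        return x   # terminal values, differentiable in the network weights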
arXiv Detail & Related papers (2021-05-24T00:53:10Z)
- Support estimation in high-dimensional heteroscedastic mean regression [2.28438857884398]
We consider a linear mean regression model with random design and potentially heteroscedastic, heavy-tailed errors.
We use a strictly convex, smooth variant of the Huber loss function with tuning parameter depending on the parameters of the problem.
For the resulting estimator we show sign-consistency and optimal rates of convergence in the $\ell_\infty$ norm.
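The paper's exact loss is not reproduced here; the pseudo-Huber loss below is one standard smooth, strictly convex Huber variant with the same qualitative shape, quadratic near zero and asymptotically linear in the tails:

    import numpy as np

    def pseudo_huber(residual, delta=1.0):
        # Smooth, strictly convex: ~ r^2/2 near 0, ~ delta*|r| for large |r|.
        return delta**2 * (np.sqrt(1.0 + (residual / delta) ** 2) - 1.0)

    print(pseudo_huber(np.array([-3.0, -0.1, 0.0, 0.1, 3.0])))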
arXiv Detail & Related papers (2020-11-03T09:46:31Z)
- Recurrent Conditional Heteroskedasticity [0.0]
We propose a new class of financial volatility models, called the REcurrent Conditional Heteroskedastic (RECH) models.
In particular, we incorporate auxiliary deterministic processes, governed by recurrent neural networks, into the conditional variance of the traditional conditional heteroskedastic models.
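A minimal sketch of that recursion, using a GRU cell as the recurrent component (the paper's exact recurrent architecture and parametrization may differ):

    import torch

    class RECHSketch(torch.nn.Module):
        # GARCH(1,1) with an RNN-driven intercept:
        # sigma2_t = omega_t + alpha * eps_{t-1}^2 + beta * sigma2_{t-1},
        # where omega_t is produced by a recurrent network.
        def __init__(self, hidden=8):
            super().__init__()
            self.rnn = torch.nn.GRUCell(2, hidden)
            self.head = torch.nn.Linear(hidden, 1)
            self.alpha = torch.nn.Parameter(torch.tensor(0.1))
            self.beta = torch.nn.Parameter(torch.tensor(0.8))

        def forward(self, returns):
            h = torch.zeros(1, self.rnn.hidden_size)
            sigma2 = returns.var().reshape(1, 1)
            out = []
            for t in range(1, len(returns)):
                eps_prev = returns[t - 1].reshape(1, 1)
                h = self.rnn(torch.cat([eps_prev, sigma2], dim=1), h)
                omega = torch.nn.functional.softplus(self.head(h))
                sigma2 = omega + self.alpha * eps_prev**2 + self.beta * sigma2
                out.append(sigma2)
            return torch.cat(out).squeeze(1)   # conditional variances

    model = RECHSketch()
    variances = model(torch.randn(100) * 0.01)   # toy daily returns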
arXiv Detail & Related papers (2020-10-25T08:09:29Z)
- Probabilistic Circuits for Variational Inference in Discrete Graphical Models [101.28528515775842]
Variational inference in discrete graphical models is difficult.
Many sampling-based methods have been proposed for estimating the Evidence Lower Bound (ELBO).
We propose a new approach that leverages the tractability of probabilistic circuit models, such as Sum Product Networks (SPNs).
We show that selective SPNs are suitable as an expressive variational distribution, and prove that when the log-density of the target model is a polynomial, the corresponding ELBO can be computed analytically.
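One way to read the last claim: writing the ELBO as $E_q[\log p(x)] + H(q)$, a polynomial log-density reduces the first term to a finite sum of moments of $q$, and both these moments and the entropy $H(q)$ admit exact computation when $q$ is a selective SPN.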
arXiv Detail & Related papers (2020-10-22T05:04:38Z)
- Robust pricing and hedging via neural SDEs [0.0]
We develop and analyse novel algorithms needed for efficient use of neural SDEs.
We find robust bounds for prices of derivatives and the corresponding hedging strategies while incorporating relevant market data.
Neural SDEs allow consistent calibration under both the risk-neutral and the real-world measures.
arXiv Detail & Related papers (2020-07-08T14:33:17Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)