Arbitrage-free neural-SDE market models
- URL: http://arxiv.org/abs/2105.11053v1
- Date: Mon, 24 May 2021 00:53:10 GMT
- Title: Arbitrage-free neural-SDE market models
- Authors: Samuel N. Cohen and Christoph Reisinger and Sheng Wang
- Abstract summary: We develop a nonparametric model for the European options book respecting underlying financial constraints.
We study the inference problem where a model is learnt from discrete time series data of stock and option prices.
We use neural networks as function approximators for the drift and diffusion of the modelled SDE system.
- Score: 6.145654286950278
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modelling joint dynamics of liquid vanilla options is crucial for
arbitrage-free pricing of illiquid derivatives and managing risks of option
trade books. This paper develops a nonparametric model for the European options
book respecting underlying financial constraints while being practically
implementable. We derive a state space for prices which are free from static
(or model-independent) arbitrage and study the inference problem where a model
is learnt from discrete time series data of stock and option prices. We use
neural networks as function approximators for the drift and diffusion of the
modelled SDE system, and impose constraints on the neural nets such that
no-arbitrage conditions are preserved. In particular, we give methods to
calibrate neural SDE models which are guaranteed to satisfy a set of
linear inequalities. We validate our approach with numerical experiments using
data generated from a Heston stochastic local volatility model.
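To make the construction above concrete, here is a minimal sketch, assuming a PyTorch implementation: a neural SDE whose drift and diffusion are feed-forward networks, simulated with an Euler-Maruyama step, together with a check of the static (model-independent) arbitrage conditions, which are linear inequalities in call prices across strikes. The class and function names, network sizes, state layout, and the equally spaced strike grid are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch only -- not the authors' code. Names and sizes are assumptions.
import torch
import torch.nn as nn


class NeuralSDE(nn.Module):
    """dX_t = mu(X_t) dt + sigma(X_t) dW_t, with neural drift and diffusion."""

    def __init__(self, dim: int, hidden: int = 64):
        super().__init__()
        self.mu = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))
        self.sigma = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(),
                                   nn.Linear(hidden, dim), nn.Softplus())  # non-negative diffusion

    def step(self, x: torch.Tensor, dt: float) -> torch.Tensor:
        """One Euler-Maruyama step of the modelled SDE system."""
        dw = torch.randn_like(x) * dt ** 0.5
        return x + self.mu(x) * dt + self.sigma(x) * dw


def static_arbitrage_ok(calls: torch.Tensor, strikes: torch.Tensor, spot: float) -> dict:
    """Check the model-independent linear inequalities on an equally spaced strike grid
    (zero rates/dividends assumed): calls non-increasing in strike, non-negative
    butterfly spreads (convexity in strike), and (spot - K)^+ <= C <= spot."""
    tol = 1e-8
    return {
        "monotone": bool(torch.all(calls[1:] <= calls[:-1] + tol)),
        "convex": bool(torch.all(calls[:-2] - 2 * calls[1:-1] + calls[2:] >= -tol)),
        "bounded": bool(torch.all((calls >= torch.clamp(spot - strikes, min=0.0) - tol)
                                  & (calls <= spot + tol))),
    }
```

In the paper itself, such inequalities define the arbitrage-free state space, and the no-arbitrage conditions are preserved by constraining the drift and diffusion networks directly, so that simulated prices never leave that space; the check above only illustrates the form of the linear constraints.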
Related papers
- Diffusion models for probabilistic programming [56.47577824219207]
Diffusion Model Variational Inference (DMVI) is a novel method for automated approximate inference in probabilistic programming languages (PPLs)
DMVI is easy to implement, allows hassle-free inference in PPLs without the drawbacks of, e.g., variational inference using normalizing flows, and does not impose any constraints on the underlying neural network model.
arXiv Detail & Related papers (2023-11-01T12:17:05Z) - COPlanner: Plan to Roll Out Conservatively but to Explore Optimistically
for Model-Based RL [50.385005413810084]
Dyna-style model-based reinforcement learning contains two phases: model rollouts to generate samples for policy learning, and real environment exploration.
COPlanner is a planning-driven framework for model-based methods that addresses the problem of inaccurately learned dynamics models.
arXiv Detail & Related papers (2023-10-11T06:10:07Z) - Neural Abstractions [72.42530499990028]
We present a novel method for the safety verification of nonlinear dynamical models that uses neural networks to represent abstractions of their dynamics.
We demonstrate that our approach performs comparably to the mature tool Flow* on existing benchmark nonlinear models.
arXiv Detail & Related papers (2023-01-27T12:38:09Z) - Error-based Knockoffs Inference for Controlled Feature Selection [49.99321384855201]
We propose an error-based knockoff inference method by integrating the knockoff features, the error-based feature importance statistics, and the stepdown procedure together.
The proposed inference procedure does not require specifying a regression model and can handle feature selection with theoretical guarantees.
arXiv Detail & Related papers (2022-03-09T01:55:59Z) - Estimating risks of option books using neural-SDE market models [6.319314191226118]
We use an arbitrage-free neural-SDE market model to produce realistic scenarios for the joint dynamics of multiple European options on a single underlying.
We show that our models are more computationally efficient and accurate for evaluating the Value-at-Risk (VaR) of option portfolios, with better coverage performance and less procyclicality than standard filtered historical simulation approaches (a scenario-based VaR sketch appears after this list).
arXiv Detail & Related papers (2022-02-15T02:39:42Z) - Latent Time Neural Ordinary Differential Equations [0.2538209532048866]
We propose a novel approach to model uncertainty in NODE by considering a distribution over the end-time $T$ of the ODE solver.
We also propose adaptive latent time NODE (ALT-NODE), which allows each data point to have a distinct posterior distribution over end-times.
We demonstrate the effectiveness of the proposed approaches in modelling uncertainty and robustness through experiments on synthetic and several real-world image classification datasets.
arXiv Detail & Related papers (2021-12-23T17:31:47Z) - Improving Robustness and Uncertainty Modelling in Neural Ordinary
Differential Equations [0.2538209532048866]
We propose a novel approach to model uncertainty in NODE by considering a distribution over the end-time $T$ of the ODE solver.
We also propose adaptive latent time NODE (ALT-NODE), which allows each data point to have a distinct posterior distribution over end-times.
We demonstrate the effectiveness of the proposed approaches in modelling uncertainty and robustness through experiments on synthetic and several real-world image classification datasets.
arXiv Detail & Related papers (2021-12-23T16:56:10Z) - Generative Temporal Difference Learning for Infinite-Horizon Prediction [101.59882753763888]
We introduce the $\gamma$-model, a predictive model of environment dynamics with an infinite probabilistic horizon.
We discuss how its training reflects an inescapable tradeoff between training-time and testing-time compounding errors.
arXiv Detail & Related papers (2020-10-27T17:54:12Z) - Identification of Probability weighted ARX models with arbitrary domains [75.91002178647165]
PieceWise Affine models guarantee universal approximation, local linearity and equivalence to other classes of hybrid systems.
In this work, we focus on the identification of PieceWise Auto Regressive with eXogenous input models with arbitrary regions (NPWARX).
The architecture is conceived following the Mixture of Experts concept, developed within the machine learning field.
arXiv Detail & Related papers (2020-09-29T12:50:33Z) - Robust pricing and hedging via neural SDEs [0.0]
We develop and analyse novel algorithms needed for efficient use of neural SDEs.
We find robust bounds for prices of derivatives and the corresponding hedging strategies while incorporating relevant market data.
Neural SDEs allow consistent calibration under both the risk-neutral and the real-world measures.
arXiv Detail & Related papers (2020-07-08T14:33:17Z) - A generative adversarial network approach to calibration of local
stochastic volatility models [2.1485350418225244]
We propose a fully data-driven approach to calibrate local stochastic volatility (LSV) models.
We parametrize the leverage function by a family of feed-forward neural networks and learn their parameters directly from the available market option prices.
This should be seen in the context of neural SDEs and (causal) generative adversarial networks.
arXiv Detail & Related papers (2020-05-05T21:26:20Z)
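As a complement to the related entry on estimating option-book risks above, here is a minimal, hedged sketch of scenario-based VaR, with assumed names and made-up numbers: given simulated joint scenarios of option prices (for instance from a calibrated neural-SDE market model), revalue the portfolio under each scenario and read off an empirical quantile of the profit-and-loss distribution.

```python
# Illustrative sketch only; the function name, shapes and figures are assumptions.
import numpy as np


def scenario_var(current_prices: np.ndarray,
                 scenario_prices: np.ndarray,
                 positions: np.ndarray,
                 alpha: float = 0.99) -> float:
    """current_prices: (n_options,) prices today.
    scenario_prices: (n_scenarios, n_options) simulated prices at the risk horizon.
    positions: (n_options,) signed position sizes.
    Returns the alpha-level VaR, quoted as a positive loss."""
    pnl = (scenario_prices - current_prices) @ positions  # portfolio P&L per scenario
    return float(-np.quantile(pnl, 1.0 - alpha))          # loss at the (1 - alpha) tail


# Toy usage: 3 options, 10,000 simulated scenarios.
rng = np.random.default_rng(0)
today = np.array([5.0, 3.2, 1.1])
scenarios = today + 0.3 * rng.standard_normal((10_000, 3))
print(scenario_var(today, scenarios, positions=np.array([10.0, -5.0, 20.0])))
```

The cited paper evaluates such scenario-based VaR estimates by their coverage and procyclicality against filtered historical simulation; the snippet above only shows the revalue-and-quantile step, not the scenario generation itself.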
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.