Calibration of Derivative Pricing Models: a Multi-Agent Reinforcement
Learning Perspective
- URL: http://arxiv.org/abs/2203.06865v4
- Date: Fri, 6 Oct 2023 12:48:48 GMT
- Title: Calibration of Derivative Pricing Models: a Multi-Agent Reinforcement
Learning Perspective
- Authors: Nelson Vadori
- Abstract summary: One of the most fundamental questions in quantitative finance is the existence of continuous-time diffusion models that fit market prices of a given set of options.
Our contribution is to show how a suitable game-theoretic formulation of this problem can help answer this question by leveraging existing developments in modern deep multi-agent reinforcement learning.
- Score: 3.626013617212667
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: One of the most fundamental questions in quantitative finance is the
existence of continuous-time diffusion models that fit market prices of a given
set of options. Traditionally, one employs a mix of intuition, theoretical and
empirical analysis to find models that achieve exact or approximate fits. Our
contribution is to show how a suitable game-theoretic formulation of this
problem can help answer this question by leveraging existing developments in
modern deep multi-agent reinforcement learning to search in the space of
stochastic processes. Our experiments show that we are able to learn local
volatility, as well as path-dependence required in the volatility process to
minimize the price of a Bermudan option. Our algorithm can be seen as a
particle method à la Guyon and Henry-Labordère where
particles, instead of being designed to ensure $\sigma_{loc}(t,S_t)^2 =
\mathbb{E}[\sigma_t^2|S_t]$, are learning RL-driven agents cooperating towards
more general calibration targets.
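For context, the classical particle scheme referenced above can be sketched in a few lines: each particle carries a spot and a stochastic volatility, and a kernel regression over the particle cloud estimates $\mathbb{E}[\sigma_t^2|S_t]$ so that the leverage enforces the stated fixed point. This is a minimal illustration only; the kernel estimator, the toy lognormal variance dynamics, and all parameter names are assumptions for illustration, not the paper's RL-driven algorithm.

```python
import numpy as np

def conditional_var(S, sig2, bandwidth):
    """Nadaraya-Watson estimate of E[sigma_t^2 | S_t] at each particle."""
    d = (S[:, None] - S[None, :]) / bandwidth
    w = np.exp(-0.5 * d ** 2)            # Gaussian kernel weights
    return (w @ sig2) / w.sum(axis=1)

def particle_step(S, sig, sigma_loc, t, dt, rho=-0.7, bandwidth=0.5, rng=None):
    """One Euler step of the calibrated McKean-Vlasov dynamics.

    The leverage is chosen so that the effective volatility leverage * sig
    satisfies sigma_loc(t, S_t)^2 = E[(leverage * sig)^2 | S_t] across the
    particle cloud, i.e. the Guyon/Henry-Labordere fixed point.
    """
    rng = rng or np.random.default_rng()
    n = len(S)
    leverage = sigma_loc(t, S) / np.sqrt(
        np.maximum(conditional_var(S, sig ** 2, bandwidth), 1e-12))
    z1 = rng.standard_normal(n)
    z2 = rho * z1 + np.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
    vol = leverage * sig
    S = S * np.exp(vol * np.sqrt(dt) * z1 - 0.5 * vol ** 2 * dt)  # log-Euler step
    sig = sig * np.exp(0.3 * np.sqrt(dt) * z2 - 0.045 * dt)       # toy vol-of-vol
    return S, sig
```

In the paper's formulation, per the abstract, each particle instead becomes an RL-driven agent trained against more general calibration targets rather than this hard-coded fixed point.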
Related papers
- Navigating Sparse Molecular Data with Stein Diffusion Guidance [48.21071466968102]
Stochastic optimal control (SOC) has emerged as a principled framework for fine-tuning diffusion models.
A class of training-free approaches has been developed that guides diffusion models using off-the-shelf classifiers on predicted clean samples.
We propose a novel training-free guidance framework based on a surrogate optimal control objective.
arXiv Detail & Related papers (2025-07-07T21:14:27Z)
- Overcoming Dimensional Factorization Limits in Discrete Diffusion Models through Quantum Joint Distribution Learning [79.65014491424151]
We propose a quantum Discrete Denoising Diffusion Probabilistic Model (QD3PM).
It enables joint probability learning through diffusion and denoising in exponentially large Hilbert spaces.
This paper establishes a new theoretical paradigm in generative models by leveraging the quantum advantage in joint distribution learning.
arXiv Detail & Related papers (2025-05-08T11:48:21Z)
- Scalable Discrete Diffusion Samplers: Combinatorial Optimization and Statistical Physics [7.873510219469276]
We introduce two novel training methods for discrete diffusion samplers.
These methods yield memory-efficient training and achieve state-of-the-art results in unsupervised combinatorial optimization.
We also introduce adaptations of SN-NIS and Neural Markov Chain Monte Carlo that enable, for the first time, the application of discrete diffusion models to sampling problems from statistical physics.
arXiv Detail & Related papers (2025-02-12T18:59:55Z)
- Model-free Methods for Event History Analysis and Efficient Adjustment (PhD Thesis) [55.2480439325792]
This thesis is a series of independent contributions to statistics unified by a model-free perspective.
The first chapter elaborates on how a model-free perspective can be used to formulate flexible methods that leverage prediction techniques from machine learning.
The second chapter studies the concept of local independence, which describes whether the evolution of one process is directly influenced by another.
arXiv Detail & Related papers (2025-02-11T19:24:09Z)
- Informed Correctors for Discrete Diffusion Models [32.87362154118195]
We propose a family of informed correctors that more reliably counteracts discretization error by leveraging information learned by the model.
We also propose $k$-Gillespie's, a sampling algorithm that better utilizes each model evaluation, while still enjoying the speed and flexibility of $\tau$-leaping.
Across several real and synthetic datasets, we show that $k$-Gillespie's with informed correctors reliably produces higher quality samples at lower computational cost.
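For orientation, a generic $\tau$-leaping step for a factorized discrete diffusion sampler is sketched below; the rate tensor, names, and the fire-at-most-once approximation are illustrative assumptions, and neither the paper's $k$-Gillespie's algorithm nor its informed correctors are reproduced here.

```python
import numpy as np

def tau_leap_step(x, rates, tau, rng):
    """One generic tau-leaping step over D categorical variables.

    x     : (D,) current token/state ids
    rates : (D, V) outgoing transition rates per position and target value,
            with rates[d, x[d]] == 0 (no self-transitions)
    tau   : leap size; each position fires Poisson(total_rate * tau) times,
            and firing positions jump once, in proportion to their rates
            (the usual small-tau approximation)
    """
    total = rates.sum(axis=1)                 # (D,) total exit rates
    fires = rng.poisson(total * tau) > 0      # which positions jump this leap
    for d in np.flatnonzero(fires):
        p = rates[d] / rates[d].sum()         # destination distribution
        x[d] = rng.choice(rates.shape[1], p=p)
    return x
```

Independent leaps of this kind accumulate discretization error across positions, which is exactly the error the correctors discussed above are designed to counteract.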
arXiv Detail & Related papers (2024-07-30T23:29:29Z)
- Kullback-Leibler Barycentre of Stochastic Processes [0.0]
We consider the problem where an agent aims to combine the views and insights of different experts' models.
We show existence and uniqueness of the barycentre model and prove an explicit representation of the Radon-Nikodym derivative.
Two deep learning algorithms are proposed to find the optimal drift of the combined model.
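As a point of reference, the standard weighted-KL barycentre problem and its known geometric-mean solution take the following textbook form under a common reference measure $\mathbb{P}$; the paper's exact setting over stochastic processes may impose further constraints.

```latex
% Weighted-KL barycentre of expert models P_1, ..., P_n (textbook form)
\bar{\mathbb{Q}} \;=\; \arg\min_{\mathbb{Q}} \sum_{i=1}^{n} \lambda_i\,
  D_{\mathrm{KL}}\!\left(\mathbb{Q} \,\middle\|\, \mathbb{P}_i\right),
\qquad \lambda_i \ge 0,\ \ \textstyle\sum_{i=1}^{n} \lambda_i = 1,
% whose minimizer is the normalized geometric mean of the experts:
\frac{d\bar{\mathbb{Q}}}{d\mathbb{P}} \;\propto\;
  \prod_{i=1}^{n} \Bigl(\frac{d\mathbb{P}_i}{d\mathbb{P}}\Bigr)^{\lambda_i}.
```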
arXiv Detail & Related papers (2024-07-05T20:45:27Z)
- Amortizing intractable inference in diffusion models for vision, language, and control [89.65631572949702]
This paper studies amortized sampling of the posterior over data, $\mathbf{x} \sim p^{\rm post}(\mathbf{x}) \propto p(\mathbf{x})\,r(\mathbf{x})$, in a model that consists of a diffusion generative model prior $p(\mathbf{x})$ and a black-box constraint or likelihood function $r(\mathbf{x})$.
We prove the correctness of a data-free learning objective, relative trajectory balance, for training a diffusion model that samples from this posterior.
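As a naive, non-amortized point of comparison for sampling from $p^{\rm post}(\mathbf{x}) \propto p(\mathbf{x})\,r(\mathbf{x})$, one can reweight prior samples by the reward, as in the sketch below (all names hypothetical). This is not the relative trajectory balance method; it illustrates why amortization is wanted, since this estimator collapses when $r$ concentrates on prior-rare samples.

```python
import numpy as np

def snis_posterior_samples(prior_sample, reward, n_prior=10_000, n_out=100,
                           rng=None):
    """Self-normalized importance sampling from p_post(x) ∝ p(x) r(x).

    prior_sample : callable n -> (n, ...) array of draws x_i ~ p(x)
    reward       : callable mapping draws to nonnegative weights r(x_i)
    """
    rng = rng or np.random.default_rng()
    xs = prior_sample(n_prior)                  # draws from the prior
    w = reward(xs)                              # unnormalized weights r(x_i)
    p = w / w.sum()                             # self-normalize
    idx = rng.choice(n_prior, size=n_out, p=p)  # resample by weight
    return xs[idx]
```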
arXiv Detail & Related papers (2024-05-31T16:18:46Z)
- Towards Sobolev Pruning [0.0]
We propose to find surrogate models by using sensitivity information throughout the learning and pruning process.
We build on work using Interval Adjoint Significance Analysis for pruning and combine it with the recent advancements in Sobolev Training.
We experimentally underpin the method on an example of pricing a multidimensional option modelled through a differential equation with Brownian motion.
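A minimal Sobolev-style training loss, fitting values and input sensitivities at once, might look like the following PyTorch sketch; the weighting and names are assumptions, reference sensitivities are assumed to come from adjoint/AD Greeks, and the interval-adjoint significance scores used for pruning are not shown.

```python
import torch

def sobolev_loss(model, x, y_ref, dy_ref, alpha=1.0):
    """Sobolev training loss: match prices and their input gradients.

    x      : (N, D) inputs (e.g. spot, vol, rate); gradients taken w.r.t. these
    y_ref  : (N,)   reference prices from the expensive model
    dy_ref : (N, D) reference sensitivities (e.g. adjoint/AD Greeks)
    alpha  : weight on the derivative-matching term
    """
    x = x.clone().requires_grad_(True)
    y = model(x).squeeze(-1)
    # surrogate's sensitivities, kept in the graph so the loss stays trainable
    dy = torch.autograd.grad(y.sum(), x, create_graph=True)[0]
    return torch.mean((y - y_ref) ** 2) + alpha * torch.mean((dy - dy_ref) ** 2)
```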
arXiv Detail & Related papers (2023-12-06T14:13:30Z)
- Tasks Makyth Models: Machine Learning Assisted Surrogates for Tipping Points [0.0]
We present a machine learning (ML)-assisted framework for detecting tipping points in the emergent behavior of complex systems.
We construct reduced-order models for the emergent dynamics at different scales.
We contrast the uses of the different models and the effort involved in learning them.
arXiv Detail & Related papers (2023-09-25T17:58:23Z)
- Eliminating Lipschitz Singularities in Diffusion Models [51.806899946775076]
We show that diffusion models frequently exhibit an infinite Lipschitz constant near the zero point of timesteps.
This poses a threat to the stability and accuracy of the diffusion process, which relies on integral operations.
We propose a novel approach, dubbed E-TSDM, which eliminates the Lipschitz singularity of the diffusion model near zero.
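Concretely, in the common noise-prediction parametrization the quantity at issue is the network's Lipschitz constant with respect to the timestep, which diverges as the timestep approaches zero; schematically (our notation, not the paper's):

```latex
% Schematic: time-Lipschitz constant of the noise-prediction network
K(t) \;=\; \sup_{\mathbf{x}}
  \bigl\| \partial_t\, \epsilon_\theta(\mathbf{x}, t) \bigr\|
  \;\longrightarrow\; \infty \quad \text{as } t \to 0^{+},
% which destabilizes samplers whose integral operations evaluate
% \epsilon_\theta on a fine grid of small timesteps.
```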
arXiv Detail & Related papers (2023-06-20T03:05:28Z)
- Towards Faster Non-Asymptotic Convergence for Diffusion-Based Generative Models [49.81937966106691]
We develop a suite of non-asymptotic theory towards understanding the data generation process of diffusion models.
In contrast to prior works, our theory is developed based on an elementary yet versatile non-asymptotic approach.
arXiv Detail & Related papers (2023-06-15T16:30:08Z)
- Diffusion models as plug-and-play priors [98.16404662526101]
We consider the problem of inferring high-dimensional data $\mathbf{x}$ in a model that consists of a prior $p(\mathbf{x})$ and an auxiliary constraint $c(\mathbf{x},\mathbf{y})$.
The structure of diffusion models allows us to perform approximate inference by iterating differentiation through the fixed denoising network enriched with different amounts of noise.
arXiv Detail & Related papers (2022-06-17T21:11:36Z)
- How Much is Enough? A Study on Diffusion Times in Score-based Generative Models [76.76860707897413]
Current best practice advocates for a large T to ensure that the forward dynamics brings the diffusion sufficiently close to a known and simple noise distribution.
We show how an auxiliary model can be used to bridge the gap between the ideal and the simulated forward dynamics, followed by a standard reverse diffusion process.
arXiv Detail & Related papers (2022-06-10T15:09:46Z)
- Distributionally Robust Models with Parametric Likelihood Ratios [123.05074253513935]
Three simple ideas allow us to train models with distributionally robust optimization (DRO) using a broader class of parametric likelihood ratios.
We find that models trained with the resulting parametric adversaries are consistently more robust to subpopulation shifts when compared to other DRO approaches.
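For orientation, a generic min-max DRO objective with a parametric likelihood-ratio adversary can be written as below; the normalization constraint and symbols are illustrative, and the paper's three training ideas are not detailed here.

```latex
% Generic DRO with a parametric likelihood-ratio adversary (illustrative)
\min_{\theta}\; \max_{\phi}\;
  \mathbb{E}_{x \sim p}\!\left[\, w_\phi(x)\, \ell(x; \theta) \,\right],
\qquad
w_\phi(x) = \frac{q_\phi(x)}{p(x)}, \qquad
\mathbb{E}_{x \sim p}\!\left[ w_\phi(x) \right] = 1.
```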
arXiv Detail & Related papers (2022-04-13T12:43:12Z)
- Compressed particle methods for expensive models with application in Astronomy and Remote Sensing [15.874578163779047]
We introduce a novel approach where the expensive model is evaluated only at some well-chosen samples.
We provide theoretical results supporting the novel algorithms and give empirical evidence of the performance of the proposed method in several numerical experiments.
Two of them are real-world applications in astronomy and satellite remote sensing.
arXiv Detail & Related papers (2021-07-18T14:45:23Z)