Monte Carlo Simulation of SDEs using GANs
- URL: http://arxiv.org/abs/2104.01437v1
- Date: Sat, 3 Apr 2021 16:06:30 GMT
- Title: Monte Carlo Simulation of SDEs using GANs
- Authors: Jorino van Rhijn, Cornelis W. Oosterlee, Lech A. Grzelak, Shuaiqiang Liu
- Abstract summary: We investigate if GANs can also be used to approximate one-dimensional Ito stochastic differential equations (SDEs).
Standard GANs are only able to approximate processes in distribution, yielding a weak approximation to the SDE.
A conditional GAN architecture is proposed that enables strong approximation.
We compare the input-output map obtained with the standard GAN and supervised GAN and show experimentally that the standard GAN may fail to provide a path-wise approximation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative adversarial networks (GANs) have shown promising results when
applied on partial differential equations and financial time series generation.
We investigate if GANs can also be used to approximate one-dimensional Ito
stochastic differential equations (SDEs). We propose a scheme that approximates
the path-wise conditional distribution of SDEs for large time steps. Standard
GANs are only able to approximate processes in distribution, yielding a weak
approximation to the SDE. A conditional GAN architecture is proposed that
enables strong approximation. We inform the discriminator of this GAN with the
map between the prior input to the generator and the corresponding output
samples, i.e., we introduce a 'supervised GAN'. We compare the input-output map
obtained with the standard GAN and supervised GAN and show experimentally that
the standard GAN may fail to provide a path-wise approximation. The GAN is
trained on a dataset obtained with exact simulation. The architecture was
tested on geometric Brownian motion (GBM) and the Cox-Ingersoll-Ross (CIR)
process. The supervised GAN outperformed the Euler and Milstein schemes in
strong error on a discretisation with large time steps. It also outperformed
the standard conditional GAN when approximating the conditional distribution.
We also demonstrate how standard GANs may give rise to non-parsimonious
input-output maps that are sensitive to perturbations, which motivates the need
for constraints and regularisation on GAN generators.
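As a concrete illustration of the strong-error baseline the paper compares against (a minimal sketch, not the authors' code; the drift, volatility, and grid sizes below are chosen purely for illustration), the following computes the strong (path-wise) error of the Euler-Maruyama scheme on geometric Brownian motion against the exact solution driven by the same Brownian increments:

```python
import numpy as np

def gbm_strong_error(mu=0.05, sigma=0.5, s0=1.0, T=1.0,
                     n_steps=4, n_paths=20_000, seed=0):
    """Strong (path-wise) error of Euler-Maruyama for GBM
    dS = mu*S dt + sigma*S dW, measured against the exact solution
    driven by the SAME Brownian increments (illustrative parameters)."""
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    dW = rng.normal(0.0, np.sqrt(dt), size=(n_paths, n_steps))
    s_euler = np.full(n_paths, s0)
    s_exact = np.full(n_paths, s0)
    for k in range(n_steps):
        # One Euler-Maruyama step and one exact GBM step on the same dW
        s_euler = s_euler + mu * s_euler * dt + sigma * s_euler * dW[:, k]
        s_exact = s_exact * np.exp((mu - 0.5 * sigma**2) * dt + sigma * dW[:, k])
    return float(np.mean(np.abs(s_euler - s_exact)))

# Coarse grid (large time steps) vs fine grid:
coarse = gbm_strong_error(n_steps=4)
fine = gbm_strong_error(n_steps=256)
```

On the coarse grid the strong error is markedly larger than on the fine grid; this large-time-step regime is exactly where the paper reports the supervised GAN outperforming the Euler and Milstein schemes.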
Related papers
- Gaussian Mixture Solvers for Diffusion Models [84.83349474361204]
We introduce a novel class of SDE-based solvers, called Gaussian Mixture Solvers (GMS), for diffusion models.
Our solver outperforms numerous SDE-based solvers in terms of sample quality in image generation and stroke-based synthesis.
arXiv Detail & Related papers (2023-11-02T02:05:38Z) - Generative Modelling of Lévy Area for High Order SDE Simulation [5.9535699822923]
LévyGAN is a deep-learning model for generating approximate samples of Lévy area conditional on a Brownian increment.
We show that L'evyGAN exhibits state-of-the-art performance across several metrics which measure both the joint and marginal distributions.
arXiv Detail & Related papers (2023-08-04T16:38:37Z) - A New Paradigm for Generative Adversarial Networks based on Randomized
Decision Rules [8.36840154574354]
The Generative Adversarial Network (GAN) was recently introduced in the literature as a novel machine learning method for training generative models.
It has many applications in statistics such as nonparametric clustering and nonparametric conditional independence tests.
In this paper, we identify the reasons behind the GAN's well-known training difficulties and, to address them, propose a new formulation of the GAN based on randomized decision rules.
arXiv Detail & Related papers (2023-06-23T17:50:34Z) - GANs Settle Scores! [16.317645727944466]
We propose a unified variational approach to analyzing the generator optimization in GANs.
In $f$-divergence-minimizing GANs, we show that the optimal generator is the one that matches the score of its output distribution with that of the data distribution.
We propose novel alternatives to $f$-GAN and IPM-GAN training based on score and flow matching, and discriminator-guided Langevin sampling.
arXiv Detail & Related papers (2023-06-02T16:24:07Z) - Tail of Distribution GAN (TailGAN): Generative-
Adversarial-Network-Based Boundary Formation [0.0]
We create a GAN-based tail formation model for anomaly detection, the Tail of distribution GAN (TailGAN).
Using TailGAN, we leverage GANs for anomaly detection and use maximum entropy regularization.
We evaluate TailGAN on identifying Out-of-Distribution (OoD) data; its performance on MNIST, CIFAR-10, Baggage X-Ray, and OoD data is competitive with methods from the literature.
arXiv Detail & Related papers (2021-07-24T17:29:21Z) - Are conditional GANs explicitly conditional? [0.0]
This paper proposes two contributions for conditional Generative Adversarial Networks (cGANs).
The first main contribution is an analysis of cGANs to show that they are not explicitly conditional.
The second contribution is a new method, called a contrario, that explicitly models conditionality for both parts of the adversarial architecture.
arXiv Detail & Related papers (2021-06-28T22:49:27Z) - Autoregressive Score Matching [113.4502004812927]
We propose autoregressive conditional score models (AR-CSM), where we parameterize the joint distribution in terms of the derivatives of univariate log-conditionals (scores).
For AR-CSM models, the divergence between data and model distributions can be computed and optimized efficiently, requiring no expensive sampling or adversarial training.
We show with extensive experimental results that it can be applied to density estimation on synthetic data, image generation, image denoising, and training latent variable models with implicit encoders.
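The autoregressive-score idea can be illustrated with a toy two-dimensional Gaussian (a hedged sketch, not the AR-CSM model itself; the factorization and coefficient `a` below are invented for the example): the joint score is assembled from derivatives of univariate log-conditionals.

```python
import numpy as np

# Toy joint: x1 ~ N(0, 1), x2 | x1 ~ N(a*x1, 1).
a = 0.7

def cond_scores(x1, x2):
    """Scores of the univariate log-conditionals (the quantities an
    autoregressive score model would parameterize)."""
    s1 = -x1              # d/dx1 log N(x1; 0, 1)
    s2 = -(x2 - a * x1)   # d/dx2 log N(x2; a*x1, 1)
    return s1, s2

def joint_score(x1, x2):
    """Score of log p(x1, x2) assembled from the conditional pieces."""
    s1, s2 = cond_scores(x1, x2)
    # d/dx1 log p = d/dx1 log p(x1) + d/dx1 log p(x2|x1) = s1 + a*(x2 - a*x1)
    return np.array([s1 + a * (x2 - a * x1), s2])

def log_p(x1, x2):
    """Joint log-density up to a constant, for a numerical check."""
    return -0.5 * x1**2 - 0.5 * (x2 - a * x1) ** 2

# Central-difference gradient of log p matches the assembled score:
x1, x2, h = 0.3, -1.1, 1e-5
num = np.array([
    (log_p(x1 + h, x2) - log_p(x1 - h, x2)) / (2 * h),
    (log_p(x1, x2 + h) - log_p(x1, x2 - h)) / (2 * h),
])
```

Because the joint log-density factorizes autoregressively, matching the univariate conditional scores is enough to recover the full joint score.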
arXiv Detail & Related papers (2020-10-24T07:01:24Z) - GANs with Variational Entropy Regularizers: Applications in Mitigating
the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
arXiv Detail & Related papers (2020-09-24T19:34:37Z) - Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling
by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate the benefits of significantly improved generation on both synthetic data and several real-world image generation benchmarks.
arXiv Detail & Related papers (2020-04-05T01:50:16Z) - Your GAN is Secretly an Energy-based Model and You Should use
Discriminator Driven Latent Sampling [106.68533003806276]
We show that sampling from the GAN can be achieved by sampling in latent space according to an energy-based model induced by the sum of the latent prior log-density and the discriminator output score.
We show that Discriminator Driven Latent Sampling (DDLS) is highly efficient compared to previous methods which work in the high-dimensional pixel space.
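A minimal sketch of the DDLS idea (not the authors' implementation: the toy quadratic logit, the numerical gradient, and all step sizes below are assumptions made for illustration) runs unadjusted Langevin dynamics on the energy E(z) = -log p(z) - d(G(z)) with a standard-normal latent prior:

```python
import numpy as np

def ddls_langevin(d_logit, z0, n_steps=500, step=0.01, seed=0):
    """Unadjusted Langevin dynamics on E(z) = 0.5*||z||^2 - d_logit(z),
    i.e. a standard-normal latent prior plus a discriminator logit.
    `d_logit` maps a latent z directly to the logit d(G(z)); its gradient
    is taken numerically to keep the sketch dependency-free."""
    rng = np.random.default_rng(seed)
    z = np.array(z0, dtype=float)
    h = 1e-5
    for _ in range(n_steps):
        # Central-difference gradient of the logit term
        g = np.array([(d_logit(z + h * e) - d_logit(z - h * e)) / (2 * h)
                      for e in np.eye(len(z))])
        grad_E = z - g  # grad of prior term minus grad of logit term
        z = z - step * grad_E + np.sqrt(2 * step) * rng.normal(size=z.shape)
    return z

# Toy logit favoring latents near (2, 2); the chain drifts toward it.
target = np.array([2.0, 2.0])
d_toy = lambda z: -2.0 * np.sum((z - target) ** 2)

z_final = ddls_langevin(d_toy, z0=[0.0, 0.0])
```

The stationary distribution of this chain is proportional to exp(-E(z)), so the samples concentrate where the prior and the discriminator agree; in a real GAN, `d_logit` would be the trained discriminator composed with the generator.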
arXiv Detail & Related papers (2020-03-12T23:33:50Z) - GANs with Conditional Independence Graphs: On Subadditivity of
Probability Divergences [70.30467057209405]
Generative Adversarial Networks (GANs) are modern methods to learn the underlying distribution of a data set.
GANs are designed in a model-free fashion where no additional information about the underlying distribution is available.
We propose a principled design of a model-based GAN that uses a set of simple discriminators on the neighborhoods of the Bayes-net/MRF.
arXiv Detail & Related papers (2020-03-02T04:31:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.