The Seven-League Scheme: Deep learning for large time step Monte Carlo
simulations of stochastic differential equations
- URL: http://arxiv.org/abs/2009.03202v5
- Date: Thu, 23 Sep 2021 13:25:07 GMT
- Title: The Seven-League Scheme: Deep learning for large time step Monte Carlo
simulations of stochastic differential equations
- Authors: Shuaiqiang Liu and Lech A. Grzelak and Cornelis W. Oosterlee
- Abstract summary: We propose an accurate data-driven numerical scheme to solve Stochastic Differential Equations (SDEs) by taking large time steps.
The SDE discretization is built up by means of a polynomial chaos expansion method, on the basis of accurately determined stochastic collocation (SC) points.
With a method variant called the compression-decompression collocation and interpolation technique, we can drastically reduce the number of neural network functions that have to be learned.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an accurate data-driven numerical scheme to solve Stochastic
Differential Equations (SDEs), by taking large time steps. The SDE
discretization is built up by means of a polynomial chaos expansion method, on
the basis of accurately determined stochastic collocation (SC) points. By
employing an artificial neural network to learn these SC points, we can perform
Monte Carlo simulations with large time steps. Error analysis confirms that
this data-driven scheme results in accurate SDE solutions in the sense of
strong convergence, provided the learning methodology is robust and accurate.
With a method variant called the compression-decompression collocation and
interpolation technique, we can drastically reduce the number of neural network
functions that have to be learned, so that computational speed is enhanced.
Numerical experiments confirm a high-quality strong convergence error when
using large time steps, and the novel scheme outperforms some classical
numerical SDE discretizations. Some applications, here in financial option
valuation, are also presented.
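To make the sampling idea concrete, here is a minimal sketch of one large time step of a collocation-based sampler; it is not the authors' code. It assumes a trained network, represented by the hypothetical callable `predict_sc_points`, that returns the conditional stochastic collocation values of the next state; standard normal draws are then mapped through a Lagrange interpolant built on the Gauss-Hermite abscissas of N(0, 1).

```python
import numpy as np

def lagrange_eval(x_nodes, y_nodes, x):
    """Evaluate the Lagrange interpolant through (x_nodes[j], y_nodes[j]) at scalar x."""
    total = 0.0
    for j, (xj, yj) in enumerate(zip(x_nodes, y_nodes)):
        basis = 1.0
        for k, xk in enumerate(x_nodes):
            if k != j:
                basis *= (x - xk) / (xj - xk)
        total += yj * basis
    return total

def big_time_step(s_t, dt, predict_sc_points, m=5, rng=None):
    """Advance every Monte Carlo path by one (possibly large) time step dt.

    predict_sc_points(s, dt) stands in for a trained network: given the current
    state s, it returns the m stochastic collocation values of S_{t+dt} | S_t = s,
    ordered to match the Gauss-Hermite abscissas of the standard normal.
    """
    if rng is None:
        rng = np.random.default_rng()
    x_nodes, _ = np.polynomial.hermite_e.hermegauss(m)   # SC points of N(0, 1)
    z = rng.standard_normal(len(s_t))                     # one normal draw per path
    s_next = np.empty_like(s_t, dtype=float)
    for i, s in enumerate(s_t):
        y_nodes = predict_sc_points(s, dt)                # conditional SC values
        s_next[i] = lagrange_eval(x_nodes, y_nodes, z[i]) # interpolate = sample
    return s_next

# Illustrative stand-in for the network: exact conditional SC values of geometric
# Brownian motion (lognormal transition), with assumed parameters mu and sigma,
# so the sampler can be checked against a known distribution.
mu, sigma = 0.05, 0.2
gbm_sc = lambda s, dt: s * np.exp((mu - 0.5 * sigma**2) * dt
                                  + sigma * np.sqrt(dt)
                                  * np.polynomial.hermite_e.hermegauss(5)[0])
paths = big_time_step(np.full(10_000, 100.0), dt=1.0, predict_sc_points=gbm_sc)
```

The `gbm_sc` stand-in uses the known lognormal transition of geometric Brownian motion, so the large-step sampler can be validated before any network is involved; the network in the paper plays exactly this role for SDEs without a closed-form transition.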
Related papers
- A Training-Free Conditional Diffusion Model for Learning Stochastic Dynamical Systems [10.820654486318336]
This study introduces a training-free conditional diffusion model for learning unknown stochastic differential equations (SDEs) using data.
The proposed approach addresses key challenges in computational efficiency and accuracy for modeling SDEs.
The learned models exhibit significant improvements in predicting both short-term and long-term behaviors of unknown systems.
arXiv Detail & Related papers (2024-10-04T03:07:36Z)
- Gaussian Mixture Solvers for Diffusion Models [84.83349474361204]
We introduce a novel class of SDE-based solvers called GMS for diffusion models.
Our solver outperforms numerous SDE-based solvers in terms of sample quality in image generation and stroke-based synthesis.
arXiv Detail & Related papers (2023-11-02T02:05:38Z)
- Parallel-in-Time Probabilistic Numerical ODE Solvers [35.716255949521305]
Probabilistic numerical solvers for ordinary differential equations (ODEs) treat the numerical simulation of dynamical systems as problems of Bayesian state estimation.
We build on the time-parallel formulation of iterated extended Kalman smoothers to formulate a parallel-in-time probabilistic numerical ODE solver.
arXiv Detail & Related papers (2023-10-02T12:32:21Z)
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
- Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
We establish conditions on the inducing points under which the computations are numerically stable; for low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
arXiv Detail & Related papers (2022-10-14T15:20:17Z)
- Faster One-Sample Stochastic Conditional Gradient Method for Composite Convex Minimization [61.26619639722804]
We propose a stochastic conditional gradient method (CGM) for minimizing convex finite-sum objectives formed as a sum of smooth and non-smooth terms.
The proposed method, equipped with a stochastic average gradient (SAG) estimator, requires only one sample per iteration. Nevertheless, it guarantees fast convergence rates on par with more sophisticated variance reduction techniques.
arXiv Detail & Related papers (2022-02-26T19:10:48Z)
- Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation, so it can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z)
- Learning effective stochastic differential equations from microscopic simulations: combining stochastic numerics and deep learning [0.46180371154032895]
We approximate the drift and diffusivity functions of the effective SDE with neural networks.
Our approach does not require long trajectories, works on scattered snapshot data, and is designed to naturally handle different time steps per snapshot.
arXiv Detail & Related papers (2021-06-10T13:00:18Z)
- Learning stochastic dynamical systems with neural networks mimicking the Euler-Maruyama scheme [14.436723124352817]
We propose a data-driven approach where the parameters of the SDE are represented by a neural network with a built-in SDE integration scheme.
The algorithm is applied to geometric Brownian motion and a version of the Lorenz-63 model (a minimal sketch of this integration step is given after this list).
arXiv Detail & Related papers (2021-05-18T11:41:34Z)
- Large-scale Neural Solvers for Partial Differential Equations [48.7576911714538]
Solving partial differential equations (PDE) is an indispensable part of many branches of science as many processes can be modelled in terms of PDEs.
Recent numerical solvers require manual discretization of the underlying equation as well as sophisticated, tailored code for distributed computing.
We examine the applicability of continuous, mesh-free neural solvers for partial differential equations, namely physics-informed neural networks (PINNs).
The accuracy of the proposed GatedPINN architecture is discussed with respect to analytical solutions, as well as against state-of-the-art numerical solvers such as spectral solvers.
arXiv Detail & Related papers (2020-09-08T13:26:51Z)
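As a companion to the two entries above on learning SDEs with neural networks (the effective-SDE paper and the Euler-Maruyama-mimicking paper), the following is a minimal, hypothetical sketch of the underlying integration step. The drift and diffusion here are plain callables; in those papers they would be neural networks whose weights are fitted to data through this scheme.

```python
import numpy as np

def euler_maruyama(x0, drift, diffusion, dt, n_steps, rng=None):
    """Simulate one path of dX = drift(X) dt + diffusion(X) dW with the
    Euler-Maruyama scheme. drift and diffusion are arbitrary callables; in a
    learning setting they would be (trained or trainable) neural networks."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.empty(n_steps + 1)
    x[0] = x0
    for n in range(n_steps):
        dw = np.sqrt(dt) * rng.standard_normal()          # Brownian increment
        x[n + 1] = x[n] + drift(x[n]) * dt + diffusion(x[n]) * dw
    return x

# Geometric Brownian motion as a sanity check (drift = mu*x, diffusion = sigma*x),
# mirroring one of the test cases mentioned above; parameter values are illustrative.
path = euler_maruyama(x0=1.0, drift=lambda x: 0.05 * x,
                      diffusion=lambda x: 0.2 * x, dt=1e-3, n_steps=1000)
```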