Learning stochastic dynamical systems with neural networks mimicking the
Euler-Maruyama scheme
- URL: http://arxiv.org/abs/2105.08449v1
- Date: Tue, 18 May 2021 11:41:34 GMT
- Title: Learning stochastic dynamical systems with neural networks mimicking the
Euler-Maruyama scheme
- Authors: Noura Dridi, Lucas Drumetz, Ronan Fablet
- Abstract summary: We propose a data-driven approach where the parameters of the SDE are represented by a neural network with a built-in SDE integration scheme.
The algorithm is applied to geometric Brownian motion and a stochastic version of the Lorenz-63 model.
- Score: 14.436723124352817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stochastic differential equations (SDEs) are one of the most important
representations of dynamical systems. They are notable for the ability to
include a deterministic component of the system and a stochastic one to
represent random unknown factors. However, this makes learning SDEs much more
challenging than ordinary differential equations (ODEs). In this paper, we
propose a data-driven approach where the parameters of the SDE are represented by a
neural network with a built-in SDE integration scheme. The loss function is
based on a maximum likelihood criterion under order-one Markov Gaussian
assumptions. The algorithm is applied to geometric Brownian motion and a
stochastic version of the Lorenz-63 model. The latter is particularly hard to
handle due to the presence of a stochastic component that depends on the state.
The algorithm's performance is assessed through several simulation results.
In addition, comparisons are performed with the reference gradient-matching
method used for nonlinear drift estimation, and with a neural-network-based
method that does not account for the stochastic term.
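The core construction, an Euler-Maruyama step whose induced Gaussian transition density yields the maximum-likelihood loss, can be sketched for the geometric Brownian motion case. This is a minimal NumPy illustration with made-up coefficients, using the known closed-form drift and diffusion where the paper would use a neural network:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical geometric Brownian motion dX = mu*X dt + sigma*X dW,
# with illustrative coefficients (not taken from the paper).
mu, sigma, dt, n_steps = 0.1, 0.2, 0.01, 1000

# Euler-Maruyama simulation: X_{n+1} = X_n + mu*X_n*dt + sigma*X_n*sqrt(dt)*xi
x = np.empty(n_steps + 1)
x[0] = 1.0
for n in range(n_steps):
    x[n + 1] = (x[n] + mu * x[n] * dt
                + sigma * x[n] * np.sqrt(dt) * rng.standard_normal())

def gaussian_nll(traj, drift, diff, dt):
    """Negative log-likelihood under the order-one Markov Gaussian
    assumption induced by Euler-Maruyama:
    x_{n+1} | x_n ~ N(x_n + drift(x_n)*dt, diff(x_n)**2 * dt)."""
    x0, x1 = traj[:-1], traj[1:]
    mean = x0 + drift(x0) * dt
    var = diff(x0) ** 2 * dt
    return np.sum(0.5 * np.log(2.0 * np.pi * var)
                  + 0.5 * (x1 - mean) ** 2 / var)

# The true coefficients should score a lower loss than a wrong guess;
# a neural drift/diffusion pair would be trained to minimize this NLL.
nll_true = gaussian_nll(x, lambda s: mu * s, lambda s: sigma * s, dt)
nll_bad = gaussian_nll(x, lambda s: -mu * s, lambda s: 2 * sigma * s, dt)
```

In a learned version, `drift` and `diff` become neural networks and the NLL is minimized by gradient descent over their weights.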
Related papers
- Learning Controlled Stochastic Differential Equations [61.82896036131116]
This work proposes a novel method for estimating both drift and diffusion coefficients of continuous, multidimensional, nonlinear controlled differential equations with non-uniform diffusion.
We provide strong theoretical guarantees, including finite-sample bounds for L², L∞, and risk metrics, with learning rates adaptive to the coefficients' regularity.
Our method is available as an open-source Python library.
arXiv Detail & Related papers (2024-11-04T11:09:58Z) - Noise in the reverse process improves the approximation capabilities of
diffusion models [27.65800389807353]
In score-based generative models (SGMs), the state of the art in generative modeling, stochastic reverse processes are known to perform better than their deterministic counterparts.
This paper delves into the heart of this phenomenon, comparing neural ordinary differential equations (ODEs) and neural stochastic differential equations (SDEs) as reverse processes.
We analyze the ability of neural SDEs to approximate trajectories of the Fokker-Planck equation, revealing the advantages of stochasticity.
arXiv Detail & Related papers (2023-12-13T02:39:10Z) - Gaussian Mixture Solvers for Diffusion Models [84.83349474361204]
We introduce a novel class of SDE-based solvers called GMS for diffusion models.
Our solver outperforms numerous SDE-based solvers in terms of sample quality in image generation and stroke-based synthesis.
arXiv Detail & Related papers (2023-11-02T02:05:38Z) - Learning Subgrid-scale Models with Neural Ordinary Differential
Equations [0.39160947065896795]
We propose a new approach to learning the subgrid-scale model when simulating partial differential equations (PDEs).
In this approach, neural networks are used to learn the coarse- to fine-grid map, which can be viewed as a subgrid-scale parameterization.
Our method inherits the advantages of NODEs and can be used to parameterize subgrid scales, approximate coupling operators, and improve the efficiency of low-order solvers.
arXiv Detail & Related papers (2022-12-20T02:45:09Z) - PI-VAE: Physics-Informed Variational Auto-Encoder for stochastic
differential equations [2.741266294612776]
We propose a new class of physics-informed neural networks, called the physics-informed variational autoencoder (PI-VAE).
PI-VAE consists of a variational autoencoder (VAE), which generates samples of system variables and parameters.
The satisfactory accuracy and efficiency of the proposed method are numerically demonstrated in comparison with the physics-informed Wasserstein generative adversarial network (PI-WGAN).
arXiv Detail & Related papers (2022-03-21T21:51:19Z) - Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z) - Scaling Structured Inference with Randomization [64.18063627155128]
We propose a family of randomized dynamic programming (RDP) algorithms for scaling structured models to tens of thousands of latent states.
Our method is widely applicable to classical DP-based inference.
It is also compatible with automatic differentiation so can be integrated with neural networks seamlessly.
arXiv Detail & Related papers (2021-12-07T11:26:41Z) - Learning effective stochastic differential equations from microscopic
simulations: combining stochastic numerics and deep learning [0.46180371154032895]
We approximate the drift and diffusivity functions of the effective SDE with neural networks.
Our approach does not require long trajectories, works on scattered snapshot data, and is designed to naturally handle different time steps per snapshot.
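A lightweight version of this snapshot-based estimation can be sketched with simple moment estimators in place of the neural networks. The Ornstein-Uhlenbeck process and all coefficients below are made up for illustration; each snapshot pair carries its own time step, as the summary describes:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D Ornstein-Uhlenbeck dynamics dX = -theta*X dt + sigma dW,
# observed as scattered snapshot pairs (x0, x1) with a per-pair time step.
theta, sigma, n = 1.0, 0.5, 20000
x0 = rng.standard_normal(n)
dts = rng.uniform(0.005, 0.02, n)        # non-uniform time steps
x1 = x0 - theta * x0 * dts + sigma * np.sqrt(dts) * rng.standard_normal(n)

# Moment-based stand-ins for the paper's neural drift and diffusivity:
# increments are normalized by each snapshot's own dt.
drift_slope = np.sum((x1 - x0) * x0 / dts) / np.sum(x0 ** 2)  # approx -theta
resid = x1 - x0 - drift_slope * x0 * dts
diff_est = np.sqrt(np.mean(resid ** 2 / dts))                 # approx sigma
```

A neural version replaces the two closed-form estimators with trained networks but keeps the same per-snapshot normalization by dt.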
arXiv Detail & Related papers (2021-06-10T13:00:18Z) - Weak SINDy For Partial Differential Equations [0.0]
We extend our Weak SINDy (WSINDy) framework to the setting of partial differential equations (PDEs).
The elimination of pointwise derivative approximations via the weak form enables effective machine-precision recovery of model coefficients from noise-free data.
We demonstrate WSINDy's robustness, speed and accuracy on several challenging PDEs.
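The weak form mentioned above can be made concrete for a generic conservation-law PDE; the equation below is an illustrative special case, not the paper's general model library:

```latex
% Weak form of an illustrative conservation-law PDE u_t = \partial_x f(u).
% Multiply by a smooth, compactly supported test function \psi(x,t)
% and integrate by parts so all derivatives land on \psi:
\int_\Omega \psi \, u_t \, dx\,dt
  = \int_\Omega \psi \, \partial_x f(u) \, dx\,dt
\;\Longrightarrow\;
-\int_\Omega \psi_t \, u \, dx\,dt
  = -\int_\Omega \psi_x \, f(u) \, dx\,dt .
% No pointwise derivatives of the (noisy) data u are required; only
% integrals of u against derivatives of the known test function \psi.
```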
arXiv Detail & Related papers (2020-07-06T16:03:51Z) - Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z) - Stochasticity in Neural ODEs: An Empirical Study [68.8204255655161]
Regularization of neural networks (e.g. dropout) is a widespread technique in deep learning that allows for better generalization.
We show that data augmentation during training improves the performance of both the deterministic and stochastic versions of the same model.
However, the improvements obtained from data augmentation completely eliminate the empirical regularization gains, making the performance difference between neural ODEs and neural SDEs negligible.
arXiv Detail & Related papers (2020-02-22T22:12:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.