Extracting Stochastic Governing Laws by Nonlocal Kramers-Moyal Formulas
- URL: http://arxiv.org/abs/2108.12570v2
- Date: Wed, 1 Sep 2021 03:55:11 GMT
- Title: Extracting Stochastic Governing Laws by Nonlocal Kramers-Moyal Formulas
- Authors: Yubin Lu, Yang Li and Jinqiao Duan
- Abstract summary: We propose a data-driven approach to extract stochastic governing laws with both (Gaussian) Brownian motion and (non-Gaussian) Lévy motion.
We demonstrate that this approach can learn the stochastic differential equation with Lévy motion.
- Score: 3.8325907381729496
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of computational techniques and scientific
tools, great progress has been made in data-driven analysis for extracting the
governing laws of dynamical systems from data. Despite the wide occurrence of
non-Gaussian fluctuations, effective data-driven methods for identifying
stochastic differential equations with non-Gaussian Lévy noise remain
relatively scarce. In this work, we propose a data-driven approach to extract
stochastic governing laws with both (Gaussian) Brownian motion and
(non-Gaussian) Lévy motion from short bursts of simulation data. Specifically,
we use normalizing flows to estimate the transition probability density
function (the solution of the nonlocal Fokker-Planck equation) from data, and
then substitute it into the recently proposed nonlocal Kramers-Moyal formulas
to approximate the Lévy jump measure, drift coefficient, and diffusion
coefficient. We demonstrate that this approach can learn stochastic
differential equations with Lévy motion. We present one- and two-dimensional,
decoupled and coupled examples to illustrate our method. This approach promises
to be an effective tool for discovering stochastic governing laws and
understanding complex dynamical behaviors.
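As a concrete illustration of this pipeline, the sketch below simulates short
bursts of a one-dimensional SDE, estimates the transition density, and plugs it
into truncated-moment versions of the nonlocal Kramers-Moyal formulas. It is a
minimal sketch, not the authors' implementation: a Gaussian kernel density
estimate stands in for the normalizing flow, and the drift function, noise
intensities, step size h, and ball radius m are hypothetical choices.

```python
# Minimal 1-D sketch (hypothetical parameters throughout): simulate short
# bursts of dX_t = b(X_t) dt + sigma dB_t + eps dL_t^alpha, estimate the
# transition density p(y, h | x0, 0), and plug it into truncated-moment
# versions of the nonlocal Kramers-Moyal formulas. A Gaussian KDE stands in
# for the normalizing flow used in the paper.
import numpy as np
from scipy.stats import gaussian_kde, levy_stable
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
h, sigma, alpha, eps = 1e-2, 1.0, 1.5, 0.1  # step size and noise intensities
m = 0.5                                     # ball radius separating small and large jumps
b = lambda x: x - x**3                      # true drift (double-well system)

def burst(x0, n=100_000):
    """One Euler-Maruyama step from x0, repeated for n independent samples."""
    dB = rng.normal(0.0, np.sqrt(h), n)
    dL = eps * levy_stable.rvs(alpha, 0.0, scale=h**(1 / alpha), size=n, random_state=rng)
    return x0 + b(x0) * h + sigma * dB + dL

for x0 in (-0.5, 0.5, 1.5):
    p = gaussian_kde(burst(x0))             # stand-in for the normalizing-flow density
    y = np.linspace(x0 - m, x0 + m, 401)    # integrate only inside the ball |y - x0| < m
    dens = p(y)
    drift_hat = trapezoid((y - x0) * dens, y) / h      # truncated first moment -> drift
    diff_hat = trapezoid((y - x0) ** 2 * dens, y) / h  # truncated second moment -> sigma^2
    print(f"x0={x0:+.1f}: drift ≈ {drift_hat:+.3f} (true {b(x0):+.3f}), "
          f"sigma^2 ≈ {diff_hat:.3f} (true {sigma**2:.3f} plus small-jump bias)")
```

For small h, the truncated first moment inside the ball recovers the drift,
while the truncated second moment gives the diffusion coefficient up to a
small-jump contribution from the Lévy part; the jump measure itself is read off
from the transition density outside the ball (see the tail-fitting sketch after
the related-papers list).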
Related papers
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density as transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the solutions of the corresponding partial differential equations (PDEs); a minimal PINN sketch appears after this list.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- Weak Collocation Regression for Inferring Stochastic Dynamics with Lévy Noise [8.15076267771005]
We propose a weak form of the Fokker-Planck (FP) equation for extracting dynamics with Lévy noise.
Our approach can simultaneously distinguish mixed noise types, even in multi-dimensional problems.
arXiv Detail & Related papers (2024-03-13T06:54:38Z)
- A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
arXiv Detail & Related papers (2023-05-31T15:33:16Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Rigorous dynamical mean field theory for stochastic gradient descent methods [17.90683687731009]
We prove closed-form equations for the exact high-dimensional asymptotics of a family of first-order gradient-based methods.
This includes widely used algorithms such as stochastic gradient descent (SGD) and Nesterov acceleration.
arXiv Detail & Related papers (2022-10-12T21:10:55Z)
- Extracting stochastic dynamical systems with α-stable Lévy noise from data [14.230182518492311]
We propose a data-driven method to extract stochastic dynamical systems with α-stable Lévy noise from short-burst data.
More specifically, we first estimate the Lévy jump measure and noise intensity; a jump-measure estimation sketch appears after this list.
Then we approximate the drift coefficient by combining nonlocal Kramers-Moyal formulas with normalizing flows.
arXiv Detail & Related papers (2021-09-30T06:57:42Z)
- Extracting Governing Laws from Sample Path Data of Non-Gaussian Stochastic Dynamical Systems [4.527698247742305]
We infer equations with non-Gaussian Lévy noise from available data to reasonably predict dynamical behaviors.
We establish a theoretical framework and design a numerical algorithm to compute the asymmetric Lévy jump measure, drift, and diffusion.
This method will become an effective tool in discovering the governing laws from available data sets and in understanding the mechanisms underlying complex random phenomena.
arXiv Detail & Related papers (2021-07-21T14:50:36Z)
- Large-Scale Wasserstein Gradient Flows [84.73670288608025]
We introduce a scalable scheme to approximate Wasserstein gradient flows.
Our approach relies on input-convex neural networks (ICNNs) to discretize the JKO steps.
As a result, we can sample from the measure at each step of the diffusion and compute its density.
arXiv Detail & Related papers (2021-06-01T19:21:48Z)
- A Data-Driven Approach for Discovering Stochastic Dynamical Systems with Non-Gaussian Lévy Noise [5.17900889163564]
We develop a new data-driven approach to extract governing laws from noisy data sets.
First, we establish a feasible theoretical framework by expressing the drift coefficient, diffusion coefficient, and jump measure.
We then design a numerical algorithm to compute the drift coefficient, diffusion coefficient, and jump measure, and thus extract a governing equation with both Gaussian and non-Gaussian noise.
arXiv Detail & Related papers (2020-05-07T21:29:17Z) - A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, the Wasserstein gradient flow is a smoother and near-optimal numerical scheme for approximating real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
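The PINN entry above ("Dynamical Measure Transport and Neural PDE Solvers for
Sampling") relies on residual-based PDE training. The following is a minimal,
generic PINN sketch, not that paper's code: the problem (a 1-D Poisson
equation), network size, and hyperparameters are all hypothetical. The network
is trained to drive the PDE residual to zero at random collocation points while
penalizing boundary violations.

```python
# Minimal PINN sketch (hypothetical problem, not from the cited paper):
# solve u''(x) = -pi^2 sin(pi x) on [0, 1] with u(0) = u(1) = 0, whose exact
# solution is u(x) = sin(pi x), by minimizing the squared PDE residual.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(1, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 32), torch.nn.Tanh(),
    torch.nn.Linear(32, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(3000):
    x = torch.rand(128, 1, requires_grad=True)   # interior collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]    # u'(x)
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]  # u''(x)
    residual = d2u + torch.pi**2 * torch.sin(torch.pi * x)
    xb = torch.tensor([[0.0], [1.0]])            # boundary points
    loss = residual.pow(2).mean() + net(xb).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

print(net(torch.tensor([[0.5]])).item())  # should approach sin(pi/2) = 1.0
```

No mesh or simulated trajectories enter the loss, only the differential
operator evaluated at sampled points, which is the sense in which PINN training
is simulation- and discretization-free.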
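For the α-stable extraction entry above, the jump-measure step can be
illustrated directly: the nonlocal Kramers-Moyal formulas give
ν(z) ≈ p(x0 + z, h | x0, 0) / h for |z| > m, so a normalized histogram of large
increments exposes the power-law tail ν(z) ∝ |z|^(−1−α) and hence the stability
index α. The sample size, bin layout, and parameters below are hypothetical.

```python
# Hypothetical sketch: estimate the Lévy jump measure and stability index
# alpha from the tail of one-step increments of an alpha-stable process.
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
h, alpha, m = 1e-3, 1.5, 0.5
inc = levy_stable.rvs(alpha, 0.0, scale=h**(1 / alpha), size=400_000, random_state=rng)

edges = np.geomspace(m, 5.0, 16)               # log-spaced bins over the tail |z| > m
counts, _ = np.histogram(np.abs(inc), bins=edges)
centers = np.sqrt(edges[:-1] * edges[1:])
nu_hat = 0.5 * counts / (len(inc) * np.diff(edges) * h)  # symmetric two-sided measure

mask = counts > 0                              # fit log nu(z) = c - (1 + alpha) log z
slope, _ = np.polyfit(np.log(centers[mask]), np.log(nu_hat[mask]), 1)
print(f"tail slope {slope:.2f} -> estimated alpha ≈ {-slope - 1:.2f} (true {alpha})")
```

In the cited method this tail estimate would come from the learned transition
density rather than from raw increments, but the scaling argument is the same.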