Stationary Density Estimation of Itô Diffusions Using Deep Learning
- URL: http://arxiv.org/abs/2109.03992v1
- Date: Thu, 9 Sep 2021 01:57:14 GMT
- Title: Stationary Density Estimation of Itô Diffusions Using Deep Learning
- Authors: Yiqi Gu, John Harlim, Senwei Liang, Haizhao Yang
- Abstract summary: We consider the density estimation problem associated with the stationary measure of ergodic Itô diffusions from a discrete-time series.
We employ deep neural networks to approximate the drift and diffusion terms of the SDE.
We establish the convergence of the proposed scheme under appropriate mathematical assumptions.
- Score: 6.8342505943533345
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we consider the density estimation problem associated with the
stationary measure of ergodic Itô diffusions from a discrete-time series that
approximates the solutions of the stochastic differential equations. To take
advantage of the characterization of the density function as the stationary
solution of a parabolic-type Fokker-Planck PDE, we proceed as follows. First,
we employ deep neural networks to approximate the drift and diffusion terms of
the SDE by solving appropriate supervised learning tasks. Subsequently, we
solve a steady-state Fokker-Planck equation associated with the estimated drift
and diffusion coefficients with a neural-network-based least-squares method. We
establish the convergence of the proposed scheme under appropriate mathematical
assumptions, accounting for the generalization errors induced by regressing the
drift and diffusion coefficients, and by the PDE solvers. This theoretical study
relies on a recent perturbation-theory result for Markov chains that shows a
linear dependence of the density estimation error on the error in estimating the
drift term, and on generalization-error results for nonparametric regression and
for PDE solutions obtained with neural-network models. The effectiveness
of this method is reflected by numerical simulations of a two-dimensional
Student's t-distribution and a 20-dimensional Langevin dynamics.
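The two-step scheme lends itself to a compact illustration. Below is a minimal, hypothetical 1D sketch in PyTorch: the drift a(x) and squared diffusion b^2(x) are regressed from conditional increments of the time series (the supervised learning step), and the steady-state Fokker-Planck equation 0 = -(a p)' + (1/2)(b^2 p)'' is then solved via a least-squares residual over a neural density model. The Ornstein-Uhlenbeck data, network sizes, and optimizer settings are illustrative assumptions, not the authors' configuration.

```python
import torch

torch.manual_seed(0)

# Simulate a discrete-time series from an assumed 1D Ito diffusion:
# dX = -X dt + sqrt(2) dW (Ornstein-Uhlenbeck; stationary density N(0, 1)).
dt, n = 1e-2, 20_000
x = torch.zeros(n)
for k in range(n - 1):
    x[k + 1] = x[k] - x[k] * dt + (2 * dt) ** 0.5 * torch.randn(())
xs, dxs = x[:-1, None], (x[1:] - x[:-1])[:, None]

def mlp():
    return torch.nn.Sequential(
        torch.nn.Linear(1, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 64), torch.nn.Tanh(),
        torch.nn.Linear(64, 1))

# Step 1: supervised regression of drift and squared diffusion from the
# conditional increment moments E[dX | x] ~ a(x) dt, E[dX^2 | x] ~ b^2(x) dt.
drift, diff2 = mlp(), mlp()
opt = torch.optim.Adam([*drift.parameters(), *diff2.parameters()], lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    loss = ((drift(xs) - dxs / dt) ** 2).mean() + \
           ((diff2(xs) - dxs ** 2 / dt) ** 2).mean()
    loss.backward()
    opt.step()
for p_ in (*drift.parameters(), *diff2.parameters()):
    p_.requires_grad_(False)  # freeze the estimated coefficients

# Step 2: neural-network least-squares solve of the stationary Fokker-Planck
# equation 0 = -(a p)' + (1/2)(b^2 p)'', i.e. zero probability-flux derivative.
logp = mlp()  # model log p so that the density stays positive
opt2 = torch.optim.Adam(logp.parameters(), lr=1e-3)
grid = torch.linspace(-4.0, 4.0, 400)[:, None]
for _ in range(3000):
    opt2.zero_grad()
    z = grid.clone().requires_grad_(True)
    p = torch.exp(logp(z))
    flux = drift(z) * p - 0.5 * torch.autograd.grad(
        diff2(z) * p, z, torch.ones_like(p), create_graph=True)[0]
    resid = torch.autograd.grad(flux, z, torch.ones_like(flux),
                                create_graph=True)[0]
    mass = torch.trapezoid(p.squeeze(), grid.squeeze())  # normalization
    (resid.pow(2).mean() + (mass - 1.0) ** 2).backward()
    opt2.step()
```

With natural boundary conditions the probability flux J = a p - (1/2)(b^2 p)' itself vanishes at stationarity, so penalizing J directly is an equally valid variant of the residual above.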
Related papers
- Diffusion-PINN Sampler [6.656265182236135]
We introduce a novel diffusion-based sampling algorithm that estimates the drift term by solving the governing partial differential equation of the log-density of the underlying SDE marginals via physics-informed neural networks (PINNs); a minimal sketch of such a residual loss appears after this list.
We prove that the error of the log-density approximation can be controlled by the PINN residual loss, enabling us to establish convergence guarantees for DPS.
arXiv Detail & Related papers (2024-10-20T09:02:16Z)
- A hybrid FEM-PINN method for time-dependent partial differential equations [9.631238071993282]
We present a hybrid numerical method for solving evolution partial differential equations (PDEs) by merging the time finite element method with deep neural networks.
The advantages of such a hybrid formulation are twofold: statistical errors are avoided for the integral in the time direction, and the neural network's output can be regarded as a set of reduced spatial basis functions.
arXiv Detail & Related papers (2024-09-04T15:28:25Z)
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Closing the ODE-SDE gap in score-based diffusion models through the Fokker-Planck equation [0.562479170374811]
We rigorously describe the range of dynamics and approximations that arise when training score-based diffusion models.
We show numerically that conventional score-based diffusion models can exhibit significant differences between ODE- and SDE-induced distributions.
arXiv Detail & Related papers (2023-11-27T16:44:50Z)
- Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence [65.63201894457404]
We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of non-linear stochastic differential equations.
The key idea essentially consists of fitting an RKHS-based approximation of the corresponding Fokker-Planck equation to the observed data.
arXiv Detail & Related papers (2023-05-24T20:43:47Z)
- Score Approximation, Estimation and Distribution Recovery of Diffusion Models on Low-Dimensional Data [68.62134204367668]
This paper studies score approximation, estimation, and distribution recovery of diffusion models, when data are supported on an unknown low-dimensional linear subspace.
We show that with a properly chosen neural network architecture, the score function can be both accurately approximated and efficiently estimated.
The generated distribution based on the estimated score function captures the data geometric structures and converges to a close vicinity of the data distribution.
arXiv Detail & Related papers (2023-02-14T17:02:35Z)
- Learning Discretized Neural Networks under Ricci Flow [51.36292559262042]
We study Discretized Neural Networks (DNNs) composed of low-precision weights and activations.
DNNs suffer from either infinite or zero gradients due to the non-differentiable discrete function during training.
arXiv Detail & Related papers (2023-02-07T10:51:53Z)
- GANs as Gradient Flows that Converge [3.8707695363745223]
We show that along the gradient flow induced by a distribution-dependent ordinary differential equation, the unknown data distribution emerges as the long-time limit.
The simulation of the ODE is shown to be equivalent to the training of generative adversarial networks (GANs).
This equivalence provides a new "cooperative" view of GANs and, more importantly, sheds new light on the divergence of GANs.
arXiv Detail & Related papers (2022-05-05T20:29:13Z)
- Model Reduction and Neural Networks for Parametric PDEs [9.405458160620533]
We develop a framework for data-driven approximation of input-output maps between infinite-dimensional spaces.
The proposed approach is motivated by the recent successes of neural networks and deep learning.
For a class of input-output maps, and suitably chosen probability measures on the inputs, we prove convergence of the proposed approximation methodology.
arXiv Detail & Related papers (2020-05-07T00:09:27Z)
- Stochastic Normalizing Flows [52.92110730286403]
We introduce stochastic normalizing flows for maximum likelihood estimation and variational inference (VI) using stochastic differential equations (SDEs).
Using the theory of rough paths, the underlying Brownian motion is treated as a latent variable and approximated, enabling efficient training of neural SDEs.
These SDEs can be used for constructing efficient chains to sample from the underlying distribution of a given dataset.
arXiv Detail & Related papers (2020-02-21T20:47:55Z)
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, Wasserstein gradient flow is a smoother and near-optimal numerical scheme to approximate real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)
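To make the PINN residual idea referenced in the Diffusion-PINN Sampler entry concrete: substituting p = e^u into the Fokker-Planck equation of dX = f(x,t) dt + g(t) dW gives the log-density PDE u_t = -f_x - f u_x + (g^2/2)(u_xx + u_x^2). The following hypothetical 1D PyTorch sketch computes that residual on collocation points; the OU drift f(x,t) = -x and unit diffusion are assumptions for illustration, and a complete sampler would add initial- or terminal-condition losses.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                          torch.nn.Linear(64, 1))
f = lambda x, t: -x               # assumed drift (OU), for illustration only
g = lambda t: torch.ones_like(t)  # assumed diffusion coefficient

def pinn_residual(x, t):
    """Residual of u_t + f_x + f u_x - (g^2/2)(u_xx + u_x^2) at (x, t)."""
    xt = torch.cat([x, t], dim=1).requires_grad_(True)
    u = net(xt)
    du = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = du[:, :1], du[:, 1:]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x),
                               create_graph=True)[0][:, :1]
    fx = f(xt[:, :1], xt[:, 1:])
    f_x = torch.autograd.grad(fx, xt, torch.ones_like(fx),
                              create_graph=True)[0][:, :1]
    return u_t + f_x + fx * u_x - 0.5 * g(xt[:, 1:]) ** 2 * (u_xx + u_x ** 2)

# Residual loss on random collocation points; minimizing it trains the net,
# mirroring how a PINN residual loss can bound the log-density error.
x, t = torch.rand(256, 1) * 8 - 4, torch.rand(256, 1)
loss = pinn_residual(x, t).pow(2).mean()
```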