Squared Wasserstein-2 Distance for Efficient Reconstruction of
Stochastic Differential Equations
- URL: http://arxiv.org/abs/2401.11354v1
- Date: Sun, 21 Jan 2024 00:54:50 GMT
- Title: Squared Wasserstein-2 Distance for Efficient Reconstruction of
Stochastic Differential Equations
- Authors: Mingtao Xia and Xiangting Li and Qijing Shen and Tom Chou
- Abstract summary: We provide an analysis of the squared $W_2$ distance between two probability distributions associated with stochastic differential equations (SDEs).
Based on this analysis, we propose the use of squared $W_2$ distance-based loss functions in the reconstruction of SDEs from noisy data.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We provide an analysis of the squared Wasserstein-2 ($W_2$) distance between
two probability distributions associated with two stochastic differential
equations (SDEs). Based on this analysis, we propose the use of squared $W_2$
distance-based loss functions in the \textit{reconstruction} of SDEs from noisy
data. To demonstrate the practicality of our Wasserstein distance-based loss
functions, we perform numerical experiments showing the efficiency of our
method in reconstructing SDEs that arise across a number of applications.
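As a rough illustration of the idea in the abstract, the squared $W_2$ distance between two one-dimensional empirical distributions has a closed form: sort both samples and take the mean squared difference of order statistics. The sketch below compares endpoint distributions of a toy Ornstein-Uhlenbeck SDE simulated with Euler-Maruyama; the SDE, its parameters, and the function names are illustrative assumptions, not the paper's actual setup or method.

```python
import numpy as np

def squared_w2_1d(x, y):
    """Squared Wasserstein-2 distance between two equal-size 1-D samples.

    In one dimension the optimal coupling sorts both samples, so W_2^2
    reduces to the mean squared difference of the order statistics.
    """
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    assert x.shape == y.shape, "equal sample sizes assumed for simplicity"
    return np.mean((x - y) ** 2)

rng = np.random.default_rng(0)

def simulate_ou(theta, sigma, n_paths=2000, n_steps=100, dt=0.01):
    """Euler-Maruyama endpoints of dX = -theta*X dt + sigma dW, X(0)=0."""
    x = np.zeros(n_paths)
    for _ in range(n_steps):
        x += -theta * x * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_paths)
    return x

obs = simulate_ou(theta=1.0, sigma=0.5)  # "observed" data
fit = simulate_ou(theta=1.0, sigma=0.5)  # candidate model, matching parameters
bad = simulate_ou(theta=0.1, sigma=1.5)  # candidate model, mismatched parameters

print(squared_w2_1d(obs, fit))  # small: distributions agree
print(squared_w2_1d(obs, bad))  # larger: mismatch is penalized
```

Used as a loss, this quantity is small for a candidate SDE whose law matches the data and grows with distributional mismatch, which is the property the paper's analysis makes precise.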
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
arXiv Detail & Related papers (2024-06-05T17:59:22Z) - Correction to "Wasserstein distance estimates for the distributions of
numerical approximations to ergodic stochastic differential equations" [1.2691047660244337]
method for analyzing non-asymptotic guarantees of numerical discretizations of ergodic SDEs in Wasserstein-2 distance is presented.
arXiv Detail & Related papers (2024-02-13T18:31:55Z) - SA-Solver: Stochastic Adams Solver for Fast Sampling of Diffusion Models [66.67616086310662]
Diffusion Probabilistic Models (DPMs) have achieved considerable success in generation tasks.
As sampling from DPMs is equivalent to solving diffusion SDE or ODE which is time-consuming, numerous fast sampling methods built upon improved differential equation solvers are proposed.
We propose SA-Solver, an improved and efficient stochastic Adams method for solving diffusion SDEs to generate data of high quality.
arXiv Detail & Related papers (2023-09-10T12:44:54Z) - Optimal Neural Network Approximation of Wasserstein Gradient Direction
via Convex Optimization [43.6961980403682]
The computation of Wasserstein gradient direction is essential for posterior sampling problems and scientific computing.
We study the variational problem in the family of two-layer networks with squared-ReLU activations, towards which we derive a semi-definite programming (SDP) relaxation.
This SDP can be viewed as an approximation of the Wasserstein gradient in a broader function family including two-layer networks.
arXiv Detail & Related papers (2022-05-26T00:51:12Z) - Scalable Inference in SDEs by Direct Matching of the
Fokker-Planck-Kolmogorov Equation [14.951655356042949]
Simulation-based techniques such as variants of Runge-Kutta are the de facto approach for inference with stochastic differential equations (SDEs) in machine learning.
We show how this workflow is fast, scales to high-dimensional latent spaces, and is applicable to scarce-data applications.
arXiv Detail & Related papers (2021-10-29T12:22:55Z) - Large-Scale Wasserstein Gradient Flows [84.73670288608025]
We introduce a scalable scheme to approximate Wasserstein gradient flows.
Our approach relies on input convex neural networks (ICNNs) to discretize the JKO steps.
As a result, we can sample from the measure at each step of the gradient flow and compute its density.
arXiv Detail & Related papers (2021-06-01T19:21:48Z) - Learning High Dimensional Wasserstein Geodesics [55.086626708837635]
We propose a new formulation and learning strategy for computing the Wasserstein geodesic between two probability distributions in high dimensions.
By applying the method of Lagrange multipliers to the dynamic formulation of the optimal transport (OT) problem, we derive a minimax problem whose saddle point is the Wasserstein geodesic.
We then parametrize the functions by deep neural networks and design a sample based bidirectional learning algorithm for training.
arXiv Detail & Related papers (2021-02-05T04:25:28Z) - Two-sample Test using Projected Wasserstein Distance [18.46110328123008]
We develop a projected Wasserstein distance for the two-sample test, a fundamental problem in statistics and machine learning.
A key contribution is to couple optimal projection to find the low dimensional linear mapping to maximize the Wasserstein distance between projected probability distributions.
arXiv Detail & Related papers (2020-10-22T18:08:58Z) - Actor-Critic Algorithm for High-dimensional Partial Differential
Equations [1.5644600570264835]
We develop a deep learning model to solve high-dimensional nonlinear parabolic partial differential equations.
The Markovian property of the BSDE is utilized in designing our neural network architecture.
We demonstrate those improvements by solving a few well-known classes of PDEs.
arXiv Detail & Related papers (2020-10-07T20:53:24Z) - On Projection Robust Optimal Transport: Sample Complexity and Model
Misspecification [101.0377583883137]
Projection robust (PR) OT seeks to maximize the OT cost between two measures by choosing a $k$-dimensional subspace onto which they can be projected.
Our first contribution is to establish several fundamental statistical properties of PR Wasserstein distances.
Next, we propose the integral PR Wasserstein (IPRW) distance as an alternative to the PRW distance, by averaging rather than optimizing on subspaces.
arXiv Detail & Related papers (2020-06-22T14:35:33Z)
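The projected and projection-robust Wasserstein distances in the last two entries share one mechanism: project high-dimensional samples onto a direction, then measure a one-dimensional Wasserstein distance, seeking the direction that maximizes it. The sketch below approximates this with a crude random search over unit directions; the actual papers optimize the projection, so the random search, sample sizes, and function names here are simplifying assumptions for illustration only.

```python
import numpy as np

def w2_1d(x, y):
    """1-D squared W2 via sorted equal-size samples."""
    return np.mean((np.sort(x) - np.sort(y)) ** 2)

def projected_w2(X, Y, n_directions=256, seed=0):
    """Maximize the 1-D squared W2 of projections over random unit directions.

    A stand-in for the optimized projection used in projected/projection-robust
    Wasserstein distances: with enough random directions, the maximum picks up
    directions along which the two samples differ most.
    """
    rng = np.random.default_rng(seed)
    dirs = rng.standard_normal((n_directions, X.shape[1]))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    return max(w2_1d(X @ u, Y @ u) for u in dirs)

rng = np.random.default_rng(1)
X = rng.standard_normal((500, 10))
Y = rng.standard_normal((500, 10))
Y_shift = Y + 2.0 * np.eye(10)[0]  # mean shift along the first coordinate

print(projected_w2(X, Y))        # near zero: same underlying distribution
print(projected_w2(X, Y_shift))  # large: the search finds the shifted direction
```

Restricting to low-dimensional projections is what gives these statistics favorable sample complexity in high dimensions, the theme of the two papers above.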
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.