DeepParticle: learning invariant measure by a deep neural network minimizing Wasserstein distance on data generated from an interacting particle method
- URL: http://arxiv.org/abs/2111.01356v1
- Date: Tue, 2 Nov 2021 03:48:58 GMT
- Title: DeepParticle: learning invariant measure by a deep neural network minimizing Wasserstein distance on data generated from an interacting particle method
- Authors: Zhongjian Wang, Jack Xin, Zhiwen Zhang
- Abstract summary: We introduce the so-called DeepParticle method to learn and generate invariant measures of dynamical systems.
We use deep neural networks (DNNs) to represent the transform of samples from a given input (source) distribution to an arbitrary target distribution.
In training, we update the network weights to minimize a discrete Wasserstein distance between the input and target samples.
- Score: 3.6310242206800667
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce the so-called DeepParticle method to learn and generate
invariant measures of stochastic dynamical systems with physical parameters
based on data computed from an interacting particle method (IPM). We utilize
the expressiveness of deep neural networks (DNNs) to represent the transform of
samples from a given input (source) distribution to an arbitrary target
distribution, assuming neither closed-form distribution functions nor a
finite state space for the samples. In training, we update the network weights
to minimize a discrete Wasserstein distance between the input and target
samples. To reduce computational cost, we propose an iterative
divide-and-conquer algorithm (a mini-batch interior point method) to find the
optimal transition matrix in the Wasserstein distance. We present numerical
results to
demonstrate the performance of our method for accelerating IPM computation of
invariant measures of stochastic dynamical systems arising in computing
reaction-diffusion front speeds through chaotic flows. The physical parameter
is a large Péclet number reflecting the advection-dominated regime of
interest.
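
As a rough illustration of the training loop the abstract describes, the sketch below pushes Gaussian samples toward a toy target by minimizing a discrete 2-Wasserstein distance over mini-batches. For equal-size batches with uniform weights, the optimal transport plan is a permutation, so the pairing can be found by the Hungarian algorithm; the paper instead uses a mini-batch interior point scheme for the transition matrix. The network size and the ring-shaped target standing in for IPM data are assumptions of this sketch, not the authors' implementation.

```python
# Minimal sketch, NOT the authors' code: a network f_theta is trained so that
# f_theta(x), x ~ N(0, I), matches samples of a toy target distribution in
# discrete 2-Wasserstein distance over mini-batches.
import numpy as np
import torch
from scipy.optimize import linear_sum_assignment

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 2),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def sample_source(n):
    return torch.randn(n, 2)                     # source: standard Gaussian

def sample_target(n):                            # toy stand-in for IPM data
    theta = 2 * np.pi * torch.rand(n)
    r = 1.0 + 0.05 * torch.randn(n)
    return torch.stack([r * torch.cos(theta), r * torch.sin(theta)], dim=1)

def batch_w2(fx, y):
    cost = torch.cdist(fx, y) ** 2               # pairwise squared distances
    row, col = linear_sum_assignment(cost.detach().numpy())
    # Mean cost under the optimal one-to-one pairing; gradients flow into fx.
    return cost[torch.from_numpy(row), torch.from_numpy(col)].mean()

for step in range(2000):
    x, y = sample_source(256), sample_target(256)
    loss = batch_w2(net(x), y)
    opt.zero_grad(); loss.backward(); opt.step()
```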
Related papers
- A hybrid FEM-PINN method for time-dependent partial differential equations [9.631238071993282]
We present a hybrid numerical method for solving evolution partial differential equations (PDEs) by merging the time finite element method with deep neural networks.
The advantages of such a hybrid formulation are twofold: statistical errors are avoided for the integral in the time direction, and the neural network's output can be regarded as a set of reduced spatial basis functions.
arXiv Detail & Related papers (2024-09-04T15:28:25Z)
- Dynamical Measure Transport and Neural PDE Solvers for Sampling [77.38204731939273]
We tackle the task of sampling from a probability density as transporting a tractable density function to the target.
We employ physics-informed neural networks (PINNs) to approximate the respective partial differential equations (PDEs) solutions.
PINNs allow for simulation- and discretization-free optimization and can be trained very efficiently.
arXiv Detail & Related papers (2024-07-10T17:39:50Z)
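
Both entries above turn on PINN training. As a hedged, minimal illustration of the physics-informed loss they refer to (the toy ODE, architecture, and optimizer are assumptions of this sketch, not the papers' setups):

```python
# Minimal physics-informed loss: a network u(t) is trained so the residual
# of the toy ODE u'(t) = -u(t), u(0) = 1, vanishes at random collocation
# points in [0, 1].
import torch

torch.manual_seed(0)
u = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, 1))
opt = torch.optim.Adam(u.parameters(), lr=1e-3)

for step in range(3000):
    t = torch.rand(128, 1, requires_grad=True)       # collocation points
    ut = u(t)
    du = torch.autograd.grad(ut.sum(), t, create_graph=True)[0]
    pde = (du + ut).pow(2).mean()                    # ODE residual term
    ic = (u(torch.zeros(1, 1)) - 1.0).pow(2).mean()  # initial-condition term
    loss = pde + ic
    opt.zero_grad(); loss.backward(); opt.step()
# u(t) should now approximate exp(-t) on [0, 1].
```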
- A Nonoverlapping Domain Decomposition Method for Extreme Learning Machines: Elliptic Problems [0.0]
Extreme learning machine (ELM) is a methodology for solving partial differential equations (PDEs) using a single-hidden-layer feed-forward neural network.
In this paper, we propose a nonoverlapping domain decomposition method (DDM) for ELMs that not only reduces the training time of ELMs, but is also suitable for parallel computation.
arXiv Detail & Related papers (2024-06-22T23:25:54Z)
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the advantages of NWoS in accuracy, speed, and computational cost.
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
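
As a rough sketch of the classical walk-on-spheres estimator that NWoS builds on (the domain, boundary data, and tolerances here are illustrative assumptions, not the paper's method): the walker repeatedly jumps to a uniform point on the largest sphere contained in the domain until it lands within a small distance of the boundary.

```python
# Classical walk-on-spheres for the Laplace equation on the unit disk with
# Dirichlet boundary data g. Since g below is harmonic, u = g exactly, which
# makes the estimate easy to check.
import numpy as np

rng = np.random.default_rng(0)

def g(p):                                    # boundary data; harmonic inside,
    return p[0] ** 2 - p[1] ** 2             # so the exact solution is g

def walk_on_spheres(x, eps=1e-3, n_walks=5000):
    total = 0.0
    for _ in range(n_walks):
        p = np.array(x, dtype=float)
        while True:
            d = 1.0 - np.linalg.norm(p)      # distance to the unit circle
            if d < eps:                       # near the boundary: stop
                total += g(p / np.linalg.norm(p))
                break
            a = rng.uniform(0.0, 2 * np.pi)   # uniform point on the sphere
            p = p + d * np.array([np.cos(a), np.sin(a)])
    return total / n_walks

print(walk_on_spheres([0.3, 0.4]))  # exact value: 0.3**2 - 0.4**2 = -0.07
```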
- On the Trajectory Regularity of ODE-based Diffusion Sampling [79.17334230868693]
Diffusion-based generative models use differential equations to establish a smooth connection between a complex data distribution and a tractable prior distribution.
In this paper, we identify several intriguing trajectory properties in the ODE-based sampling process of diffusion models.
arXiv Detail & Related papers (2024-05-18T15:59:41Z)
- DynGMA: a robust approach for learning stochastic differential equations from data [13.858051019755283]
We introduce novel approximations to the transition density of the parameterized SDE.
Our method exhibits superior accuracy compared to baseline methods in learning the fully unknown drift and diffusion functions.
It is capable of handling data with low time resolution and variable, even uncontrollable, time step sizes.
arXiv Detail & Related papers (2024-02-22T12:09:52Z)
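
For context on the entry above: the baseline that such transition-density approximations refine is the Gaussian (Euler-Maruyama) one-step density. A hedged sketch follows; the Ornstein-Uhlenbeck example and function names are assumptions, not DynGMA's algorithm.

```python
# Gaussian (Euler-Maruyama) transition-density likelihood for a 1-D SDE
# dX = b(X) dt + sigma(X) dW: one step of size dt is approximately
# Normal(x + b(x) dt, sigma(x)^2 dt).
import numpy as np

def em_negloglik(b, sigma, path, dt):
    """Negative log-likelihood of an equally spaced 1-D sample path."""
    x = path[:-1]
    mean = x + b(x) * dt
    var = sigma(x) ** 2 * dt
    resid = path[1:] - mean
    return 0.5 * np.sum(np.log(2 * np.pi * var) + resid ** 2 / var)

# Example: score an Ornstein-Uhlenbeck guess against synthetic data.
rng = np.random.default_rng(0)
dt, n = 0.01, 5000
path = np.zeros(n)
for k in range(n - 1):                       # simulate dX = -X dt + 0.5 dW
    path[k + 1] = path[k] - path[k] * dt + 0.5 * np.sqrt(dt) * rng.normal()
print(em_negloglik(lambda x: -x, lambda x: 0.5 + 0 * x, path, dt))
```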
- Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z)
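
The entry above regards macroscopic PDE solutions as ensembles of random particles. A minimal sketch of that probabilistic (Feynman-Kac) representation for the heat equation is given below; the initial condition and sample sizes are assumptions, not the paper's setup.

```python
# Feynman-Kac Monte Carlo: the heat equation u_t = (1/2) u_xx with initial
# data u0 has the solution u(t, x) = E[u0(x + W_t)], estimated here with an
# ensemble of random particles.
import numpy as np

rng = np.random.default_rng(0)

def u0(x):
    return np.sin(x)

def heat_mc(t, x, n_particles=200_000):
    w = rng.normal(0.0, np.sqrt(t), size=n_particles)  # Brownian endpoint W_t
    return u0(x + w).mean()

# For u0 = sin, the exact solution is exp(-t/2) * sin(x).
t, x = 0.5, 1.0
print(heat_mc(t, x), np.exp(-t / 2) * np.sin(x))
```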
- A DeepParticle method for learning and generating aggregation patterns in multi-dimensional Keller-Segel chemotaxis systems [3.6184545598911724]
We study a regularized interacting particle method for computing aggregation patterns and near-singular solutions of a Keller-Segel (KS) chemotaxis system in two and three space dimensions.
We further develop the DeepParticle (DP) method to learn and generate solutions under variations of physical parameters.
arXiv Detail & Related papers (2022-08-31T20:52:01Z)
- Large-Scale Wasserstein Gradient Flows [84.73670288608025]
We introduce a scalable scheme to approximate Wasserstein gradient flows.
Our approach relies on input convex neural networks (ICNNs) to discretize the JKO steps.
As a result, we can sample from the measure at each step of the diffusion and compute its density.
arXiv Detail & Related papers (2021-06-01T19:21:48Z)
- Efficient training of physics-informed neural networks via importance sampling [2.9005223064604078]
Physics-Informed Neural Networks (PINNs) are a class of deep neural networks that are trained to solve systems governed by partial differential equations (PDEs).
We show that an importance sampling approach improves the convergence behavior of PINN training.
arXiv Detail & Related papers (2021-04-26T02:45:10Z)
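
A hedged sketch of one common form of residual-based importance sampling for PINN collocation points follows; the stand-in residual and all names here are illustrative assumptions, not the paper's scheme.

```python
# Residual-based importance sampling for collocation points: draw a large
# candidate pool, then pick training points with probability proportional
# to the current PDE residual, so training effort concentrates where the
# network is worst. `pool_residual` is a placeholder; a real PINN would
# evaluate its squared PDE residual here.
import numpy as np

rng = np.random.default_rng(0)

def resample_collocation(points, residual, n_train):
    weights = residual / residual.sum()      # residual-proportional weights
    idx = rng.choice(len(points), size=n_train, replace=True, p=weights)
    return points[idx]

pool = rng.uniform(-1.0, 1.0, size=(10_000, 2))          # candidate points
pool_residual = np.exp(-np.sum(pool ** 2, axis=1))       # stand-in residuals
batch = resample_collocation(pool, pool_residual, n_train=512)
print(batch.shape)  # (512, 2)
```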
- A Near-Optimal Gradient Flow for Learning Neural Energy-Based Models [93.24030378630175]
We propose a novel numerical scheme to optimize the gradient flows for learning energy-based models (EBMs).
We derive a second-order Wasserstein gradient flow of the global relative entropy from the Fokker-Planck equation.
Compared with existing schemes, the Wasserstein gradient flow is a smoother and near-optimal numerical scheme for approximating real data densities.
arXiv Detail & Related papers (2019-10-31T02:26:20Z)