Entropy-dissipation Informed Neural Network for McKean-Vlasov Type PDEs
- URL: http://arxiv.org/abs/2303.11205v2
- Date: Fri, 27 Oct 2023 09:37:44 GMT
- Title: Entropy-dissipation Informed Neural Network for McKean-Vlasov Type PDEs
- Authors: Zebang Shen and Zhenfu Wang
- Abstract summary: We extend the concept of self-consistency for the Fokker-Planck equation (FPE) to the more general McKean-Vlasov equation (MVE).
We show that a generalized self-consistency potential controls the KL-divergence between a hypothesis solution and the ground truth, through entropy dissipation.
We propose to solve MVEs by minimizing this potential function, while utilizing neural networks for function approximation.
- Score: 11.91922476172335
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We extend the concept of self-consistency for the Fokker-Planck equation
(FPE) to the more general McKean-Vlasov equation (MVE). While FPE describes the
macroscopic behavior of particles under drift and diffusion, MVE accounts for
the additional inter-particle interactions, which are often highly singular in
physical systems. Two important examples considered in this paper are the MVE
with Coulomb interactions and the vorticity formulation of the 2D Navier-Stokes
equation. We show that a generalized self-consistency potential controls the
KL-divergence between a hypothesis solution and the ground truth, through
entropy dissipation. Built on this result, we propose to solve MVEs by
minimizing this potential function, while utilizing neural networks for
function approximation. We validate the empirical performance of our approach
by comparing with state-of-the-art NN-based PDE solvers on several example
problems.
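The training idea in the abstract — minimize a potential whose value controls the KL-divergence between a hypothesis solution and the ground truth — can be illustrated in a drastically simplified, hypothetical setting. The sketch below is not the authors' method: it uses a stationary 1-D Fokker-Planck analogue with confining potential V(x) = x^2/2, where the true score is grad log rho(x) = -V'(x) = -x, and fits a one-parameter score model s_theta(x) = -theta * x by minimizing the Monte Carlo potential E[(s_theta(x) + V'(x))^2].

```python
import numpy as np

# Hypothetical, drastically simplified sketch of "minimize a potential that
# bounds the divergence to the true solution". Stationary 1-D case:
# V(x) = x^2 / 2, true score is -x, model is s_theta(x) = -theta * x.

rng = np.random.default_rng(0)

def potential_and_grad(theta, x):
    resid = (-theta * x) + x                 # s_theta(x) + V'(x) = (1 - theta) * x
    loss = np.mean(resid ** 2)               # Monte Carlo self-consistency potential
    grad = np.mean(2.0 * resid * (-x))       # d(resid)/d(theta) = -x
    return loss, grad

theta = 0.0
x = rng.normal(size=4096)                    # samples standing in for the hypothesis flow
for _ in range(200):
    loss, g = potential_and_grad(theta, x)
    theta -= 0.1 * g                         # plain gradient descent on the potential

print(round(theta, 3))                       # converges to ~1.0, the true score slope
```

In the paper a neural network plays the role of the one-parameter model and the potential is estimated along the full time-dependent dynamics; the fixed point here (theta = 1, where the residual vanishes) mirrors the claim that a zero potential identifies the true solution.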
Related papers
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z) - Enhancing Solutions for Complex PDEs: Introducing Complementary Convolution and Equivariant Attention in Fourier Neural Operators [17.91230192726962]
We propose a novel hierarchical Fourier neural operator along with convolution-residual layers and attention mechanisms to solve complex PDEs.
We find that the proposed method achieves superior performance in these PDE benchmarks, especially for equations characterized by rapid coefficient variations.
arXiv Detail & Related papers (2023-11-21T11:04:13Z) - Lie Point Symmetry and Physics Informed Networks [59.56218517113066]
We propose a loss function that informs the network about Lie point symmetries in the same way that PINN models try to enforce the underlying PDE through a loss function.
Our symmetry loss ensures that the infinitesimal generators of the Lie group conserve the PDE solutions.
Empirical evaluations indicate that the inductive bias introduced by the Lie point symmetries of the PDEs greatly boosts the sample efficiency of PINNs.
arXiv Detail & Related papers (2023-11-07T19:07:16Z) - Generative Diffusion Models for Lattice Field Theory [8.116039964888353]
This study delves into the connection between machine learning and lattice field theory by linking generative diffusion models (DMs) with quantization.
We show that DMs can be conceptualized as reversing a stochastic process driven by the Langevin equation, producing samples from an initial distribution that approximate the target distribution.
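The Langevin dynamics referred to above can be sketched minimally (this is a generic illustration, not the paper's lattice setup): the unadjusted Langevin update x_{k+1} = x_k - eta * grad V(x_k) + sqrt(2 * eta) * noise drives samples toward the target density proportional to exp(-V).

```python
import numpy as np

# Generic Langevin-sampling sketch (illustrative; not the paper's lattice
# field theory setup). Target is a standard Gaussian, V(x) = x^2 / 2, so
# grad V(x) = x, and the chain should equilibrate to N(0, 1).

rng = np.random.default_rng(1)
x = rng.uniform(-5, 5, size=20000)           # arbitrary initial distribution
eta = 0.01                                   # step size
for _ in range(2000):
    x = x - eta * x + np.sqrt(2 * eta) * rng.normal(size=x.size)

print(round(float(np.std(x)), 1))            # ~1.0: matches the target N(0, 1)
```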
arXiv Detail & Related papers (2023-11-06T22:24:28Z) - Maximum-likelihood Estimators in Physics-Informed Neural Networks for
High-dimensional Inverse Problems [0.0]
Physics-informed neural networks (PINNs) have proven a suitable mathematical scaffold for solving inverse ordinary (ODE) and partial differential equations (PDE).
In this work, we demonstrate that inverse PINNs can be framed in terms of maximum-likelihood estimators (MLE) to allow explicit error propagation to the physical model space through Taylor expansion.
arXiv Detail & Related papers (2023-04-12T17:15:07Z) - A mixed formulation for physics-informed neural networks as a potential
solver for engineering problems in heterogeneous domains: comparison with
finite element method [0.0]
Physics-informed neural networks (PINNs) are capable of finding the solution for a given boundary value problem.
We employ several ideas from the finite element method (FEM) to enhance the performance of existing PINNs in engineering problems.
arXiv Detail & Related papers (2022-06-27T08:18:08Z) - Learning Physics-Informed Neural Networks without Stacked
Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian-smoothed model and show that, derived from Stein's identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
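The Stein's-identity trick mentioned in this summary can be sketched in one dimension (a generic illustration under assumed notation, not the paper's implementation): for the Gaussian-smoothed function f_sigma(x) = E[f(x + sigma * xi)] with xi ~ N(0, 1), the second derivative satisfies f_sigma''(x) = E[f(x + sigma * xi) * (xi^2 - 1)] / sigma^2, so second-order derivatives reduce to a Monte Carlo average over forward evaluations, with no stacked back-propagation.

```python
import numpy as np

# Estimate the second derivative of a Gaussian-smoothed f via Stein's
# identity: f_sigma''(x) = E[f(x + sigma*xi) * (xi^2 - 1)] / sigma^2.
# Only forward evaluations of f are needed.

rng = np.random.default_rng(2)

def second_derivative_stein(f, x, sigma=0.5, n=500000):
    xi = rng.normal(size=n)
    return float(np.mean(f(x + sigma * xi) * (xi**2 - 1)) / sigma**2)

f = lambda x: x**2                   # f'' = 2 everywhere
est = second_derivative_stein(f, x=1.0)
print(est)                           # Monte Carlo estimate close to the exact value 2
```

The variance of this estimator grows as sigma shrinks, which is why a moderate smoothing scale and a large sample count are used here.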
arXiv Detail & Related papers (2022-02-18T18:07:54Z) - Solving PDEs on Unknown Manifolds with Machine Learning [8.220217498103315]
This paper presents a mesh-free computational framework and machine learning theory for solving elliptic PDEs on unknown manifolds.
We show that the proposed NN solver robustly generalizes the PDE solution to new data points.
arXiv Detail & Related papers (2021-06-12T03:55:15Z) - Intrinsic mechanisms for drive-dependent Purcell decay in
superconducting quantum circuits [68.8204255655161]
We find that in a wide range of settings, the cavity-qubit detuning controls whether a non-zero photonic population increases or decreases the qubit's Purcell decay.
Our method combines insights from a Keldysh treatment of the system and from Lindblad theory.
arXiv Detail & Related papers (2021-06-09T16:21:31Z) - Loss function based second-order Jensen inequality and its application
to particle variational inference [112.58907653042317]
Particle variational inference (PVI) uses an ensemble of models as an empirical approximation for the posterior distribution.
PVI iteratively updates each model with a repulsion force to ensure the diversity of the optimized models.
We derive a novel generalization error bound and show that it can be reduced by enhancing the diversity of models.
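The repulsion-driven particle updates described in this summary resemble Stein variational gradient descent (SVGD); the sketch below is a generic SVGD-style illustration, not necessarily the exact algorithm analyzed in the paper. Each particle is pulled toward high posterior density by a kernel-weighted score term, while the kernel gradient pushes particles apart to keep the ensemble diverse.

```python
import numpy as np

# SVGD-style particle update (illustrative): particles approximate a target
# density p. Attraction = kernel-weighted grad log p; repulsion = kernel
# gradient, which prevents the particles from collapsing onto one mode.

rng = np.random.default_rng(3)

def svgd_step(x, grad_logp, eps=0.05, h=0.5):
    diff = x[:, None] - x[None, :]             # pairwise differences x_j - x_i
    k = np.exp(-diff**2 / (2 * h))             # RBF kernel k(x_j, x_i)
    grad_k = -diff / h * k                     # d k / d x_j: the repulsion term
    phi = (k * grad_logp(x)[:, None] + grad_k).mean(axis=0)
    return x + eps * phi

x = rng.normal(loc=5.0, scale=0.1, size=100)   # ensemble starts far from the target
for _ in range(1000):
    x = svgd_step(x, lambda z: -z)             # target N(0, 1): grad log p(z) = -z
```

After the updates the ensemble centers near the target mean with non-degenerate spread; without the `grad_k` repulsion term, all particles would collapse to the single mode at 0.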
arXiv Detail & Related papers (2021-06-09T12:13:51Z) - Probing eigenstate thermalization in quantum simulators via
fluctuation-dissipation relations [77.34726150561087]
The eigenstate thermalization hypothesis (ETH) offers a universal mechanism for the approach to equilibrium of closed quantum many-body systems.
Here, we propose a theory-independent route to probe the full ETH in quantum simulators by observing the emergence of fluctuation-dissipation relations.
Our work presents a theory-independent way to characterize thermalization in quantum simulators and paves the way to quantum simulate condensed matter pump-probe experiments.
arXiv Detail & Related papers (2020-07-20T18:00:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.