Score-Based Physics-Informed Neural Networks for High-Dimensional
Fokker-Planck Equations
- URL: http://arxiv.org/abs/2402.07465v1
- Date: Mon, 12 Feb 2024 07:59:25 GMT
- Title: Score-Based Physics-Informed Neural Networks for High-Dimensional
Fokker-Planck Equations
- Authors: Zheyuan Hu, Zhongqiang Zhang, George Em Karniadakis, Kenji Kawaguchi
- Abstract summary: We propose a score-based solver to fit the score function in SDEs.
The proposed score-based SDE solver operates in two stages: first, employing SM, SSM, or Score-PINN to acquire the score; and second, solving the LL via an ODE.
The numerical results demonstrate the score-based SDE solver's stability, speed, and performance across different settings.
- Score: 27.164040990410065
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Fokker-Planck (FP) equation is a foundational PDE in stochastic
processes. However, the curse of dimensionality (CoD) poses a challenge when
dealing with high-dimensional FP PDEs. Although Monte Carlo and vanilla
Physics-Informed Neural Networks (PINNs) have shown the potential to tackle
CoD, both methods exhibit numerical errors in high dimensions when dealing with
the probability density function (PDF) associated with Brownian motion. The
point-wise PDF values decay exponentially as the dimension increases,
eventually falling below the precision of numerical simulations and resulting
in substantial errors. Moreover, because it requires massive numbers of
samples, Monte Carlo cannot offer fast sampling. Modeling the log-likelihood
(LL) via vanilla PINNs transforms the FP equation into a difficult
Hamilton-Jacobi-Bellman (HJB) equation, whose error grows rapidly with
dimension. To this end, we propose a novel approach utilizing a score-based
solver to fit the score function in SDEs. The score function, defined as the
gradient of the LL, plays a fundamental role in inferring LL and PDF and
enables fast SDE sampling. Three fitting methods, Score Matching (SM), Sliced
SM (SSM), and Score-PINN, are introduced. The proposed score-based SDE solver
operates in two stages: first, employing SM, SSM, or Score-PINN to acquire the
score; and second, solving the LL via an ODE using the obtained score.
Comparative evaluations across these methods showcase varying trade-offs. The
proposed method is evaluated across diverse SDEs, including anisotropic
Ornstein-Uhlenbeck (OU) processes, geometric Brownian motion, and Brownian
motion with varying eigenspace. We also
test various distributions, including Gaussian, Log-normal, Laplace, and
Cauchy. The numerical results demonstrate the score-based SDE solver's
stability, speed, and performance across different settings, solidifying its
potential as a solution to CoD for high-dimensional FP equations.
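The two-stage recipe above can be illustrated on a toy problem. The sketch below is an assumption-laden illustration, not the paper's implementation: it skips stage one by using the analytically known score of a 1-D OU process (instead of SM, SSM, or Score-PINN), then runs stage two, integrating the log-likelihood along the probability-flow ODE. All names (`theta`, `sigma`, `v0`, `solve_ll`) are illustrative.

```python
import numpy as np

# 1-D Ornstein-Uhlenbeck SDE:  dx = -theta * x dt + sigma dW,  x(0) ~ N(0, v0).
# The marginal stays Gaussian N(0, v(t)), so the exact score is known:
#   s(x, t) = d/dx log p(x, t) = -x / v(t).
theta, sigma, v0 = 1.0, 0.8, 0.5

def var(t):
    # v(t) = v0 * exp(-2 theta t) + sigma^2 / (2 theta) * (1 - exp(-2 theta t))
    return v0 * np.exp(-2 * theta * t) + sigma**2 / (2 * theta) * (1 - np.exp(-2 * theta * t))

def score(x, t):
    return -x / var(t)

def flow_rhs(x, t):
    # Probability-flow ODE: dx/dt = f(x, t) - 0.5 * sigma^2 * s(x, t)
    return -theta * x - 0.5 * sigma**2 * score(x, t)

def ll_rhs(x, t):
    # Along the flow: d log p / dt = -div_x(f - 0.5 * sigma^2 * s);
    # for this linear drift the divergence reduces to theta - sigma^2 / (2 v(t)).
    return theta - 0.5 * sigma**2 / var(t)

def solve_ll(x0, T=1.0, n_steps=2000):
    """Stage two: Euler integration of the state and its log-likelihood."""
    dt = T / n_steps
    x = x0
    logp = -0.5 * (np.log(2 * np.pi * v0) + x0**2 / v0)  # log N(x0; 0, v0)
    for k in range(n_steps):
        t = k * dt
        x, logp = x + dt * flow_rhs(x, t), logp + dt * ll_rhs(x, t)
    return x, logp

xT, logpT = solve_ll(0.7)
exact = -0.5 * (np.log(2 * np.pi * var(1.0)) + xT**2 / var(1.0))
print(abs(logpT - exact))  # small O(dt) discretization error
```

In the paper's setting the exact score is unavailable and is replaced by a learned network, but the second stage is structurally the same ODE integration.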
Related papers
- Score-fPINN: Fractional Score-Based Physics-Informed Neural Networks for High-Dimensional Fokker-Planck-Levy Equations [24.86574584293979]
We introduce an innovative approach for solving high-dimensional Fokker-Planck-Lévy (FPL) equations in modeling non-Brownian processes.
We utilize a fractional score function and physics-informed neural networks (PINNs) to lift the curse of dimensionality (CoD) and alleviate numerical overflow from solutions that decay exponentially with dimension.
arXiv Detail & Related papers (2024-06-17T15:57:23Z)
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
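NWoS builds on the classical walk-on-spheres estimator. Below is a minimal sketch of that classical (non-neural) estimator for Laplace's equation on the unit disk, assuming Dirichlet data g(x, y) = x, whose harmonic extension is u(x, y) = x; the function names and parameters are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def boundary_g(p):
    # Dirichlet data on the unit circle; its harmonic extension is u(x, y) = x
    return p[0]

def walk_on_spheres(p, eps=1e-3, max_steps=1000):
    """One classical WoS sample for Laplace's equation in the unit disk."""
    p = np.array(p, float)
    for _ in range(max_steps):
        d = 1.0 - np.linalg.norm(p)      # distance to the circle boundary
        if d < eps:                      # close enough: stop the walk
            break
        ang = rng.uniform(0.0, 2.0 * np.pi)
        p = p + d * np.array([np.cos(ang), np.sin(ang)])  # jump to sphere surface
    return boundary_g(p / np.linalg.norm(p))  # project to boundary, read g

def estimate(p, n=4000):
    # u(p) = E[g(exit point)]: average many independent walks
    return np.mean([walk_on_spheres(p) for _ in range(n)])

u = estimate((0.3, 0.2))
print(u)  # Monte Carlo estimate of u(0.3, 0.2) = 0.3
```

NWoS replaces this per-point Monte Carlo average with a neural network trained on such estimates, amortizing the cost over the whole domain.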
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
- Noise in the reverse process improves the approximation capabilities of diffusion models [27.65800389807353]
In Score-based Generative Modeling (SGM), the state of the art in generative modeling, reverse processes are known to perform better than their deterministic counterparts.
This paper delves into the heart of this phenomenon, comparing neural ordinary differential equations (ODEs) and neural stochastic differential equations (SDEs) as reverse processes.
We analyze the ability of neural SDEs to approximate trajectories of the Fokker-Planck equation, revealing the advantages of stochasticity.
arXiv Detail & Related papers (2023-12-13T02:39:10Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- Gaussian Mixture Solvers for Diffusion Models [84.83349474361204]
We introduce a novel class of SDE-based solvers called GMS for diffusion models.
Our solver outperforms numerous SDE-based solvers in terms of sample quality in image generation and stroke-based synthesis.
arXiv Detail & Related papers (2023-11-02T02:05:38Z)
- An Extreme Learning Machine-Based Method for Computational PDEs in Higher Dimensions [1.2981626828414923]
We present two effective methods for solving high-dimensional partial differential equations (PDE) based on randomized neural networks.
We present ample numerical simulations for a number of high-dimensional linear/nonlinear stationary/dynamic PDEs to demonstrate their performance.
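Extreme-learning-machine PDE solvers of this flavor freeze random hidden weights and train only the linear output layer by least squares. The 1-D sketch below is illustrative of that idea under my own assumptions, not the paper's method: it solves u'' = f with Dirichlet conditions using random tanh features whose derivatives are available in closed form.

```python
import numpy as np

rng = np.random.default_rng(1)

# Random tanh features phi_j(x) = tanh(w_j x + b_j); the inner weights are
# frozen and only the output coefficients c are fit by linear least squares.
m = 200
w = rng.uniform(1.0, 8.0, m)
b = rng.uniform(-8.0, 8.0, m)

def phi(x):
    return np.tanh(np.outer(x, w) + b)

def phi_xx(x):
    t = np.tanh(np.outer(x, w) + b)
    return -2.0 * t * (1.0 - t**2) * w**2  # exact d^2/dx^2 of tanh(w x + b)

# Target problem: u'' = f on [0, 1], u(0) = u(1) = 0,
# with f(x) = -pi^2 sin(pi x), so the exact solution is u(x) = sin(pi x).
xc = np.linspace(0.0, 1.0, 400)                      # collocation points
A = np.vstack([phi_xx(xc), 10.0 * phi(np.array([0.0, 1.0]))])  # PDE + weighted BC rows
rhs = np.concatenate([-np.pi**2 * np.sin(np.pi * xc), [0.0, 0.0]])
c, *_ = np.linalg.lstsq(A, rhs, rcond=None)

xt = np.linspace(0.0, 1.0, 101)
err = np.max(np.abs(phi(xt) @ c - np.sin(np.pi * xt)))
print(err)  # sup-norm error of the random-feature solution
```

Because the features are fixed, the whole "training" step is a single dense least-squares solve, which is what makes such methods fast in moderate dimensions.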
arXiv Detail & Related papers (2023-09-13T15:59:02Z)
- Score-based Diffusion Models in Function Space [140.792362459734]
Diffusion models have recently emerged as a powerful framework for generative modeling.
We introduce a mathematically rigorous framework called Denoising Diffusion Operators (DDOs) for training diffusion models in function space.
We show that the corresponding discretized algorithm generates accurate samples at a fixed cost independent of the data resolution.
arXiv Detail & Related papers (2023-02-14T23:50:53Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM consistently achieves state-of-the-art results and yields a relative gain of 11.5% averaged over seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Parsimonious Physics-Informed Random Projection Neural Networks for Initial-Value Problems of ODEs and index-1 DAEs [0.0]
We address a physics-informed neural network based on random projections for the numerical solution of IVPs of nonlinear ODEs in linear-implicit form and index-1 DAEs.
Based on previous works on random projections, we prove the approximation capability of the scheme for ODEs in the canonical form and index-1 DAEs in the semiexplicit form.
arXiv Detail & Related papers (2022-03-10T12:34:46Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
- Weak SINDy For Partial Differential Equations [0.0]
We extend our Weak SINDy (WSINDy) framework to the setting of partial differential equations (PDEs).
The elimination of pointwise derivative approximations via the weak form enables effective machine-precision recovery of model coefficients from noise-free data.
We demonstrate WSINDy's robustness, speed and accuracy on several challenging PDEs.
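The weak form replaces pointwise derivatives of noisy data with exact derivatives of a smooth, compactly supported test function via integration by parts. The sketch below illustrates that core idea on a toy ODE (it is not the WSINDy implementation): recovering the coefficient lam in u' = lam * u from samples of u alone, with no numerical differentiation of the data.

```python
import numpy as np

# Synthetic noise-free data for u' = lam * u with true lam = 2
t = np.linspace(0.0, 1.0, 201)
u = np.exp(2.0 * t)

# Compactly supported test function and its exact derivative
psi = (t * (1.0 - t))**2
dpsi = 2.0 * t * (1.0 - t) * (1.0 - 2.0 * t)

# Weak form via integration by parts (psi vanishes at both endpoints):
#   integral(psi * u') = -integral(dpsi * u) = lam * integral(psi * u)
# Both integrands vanish at the endpoints, so plain sums act as a trapezoid
# rule, and the uniform grid spacing cancels in the ratio.
lam = -np.sum(dpsi * u) / np.sum(psi * u)
print(lam)  # ≈ 2
```

With noisy data the same construction is assembled over many test functions and solved as a sparse regression, which is where the robustness reported above comes from.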
arXiv Detail & Related papers (2020-07-06T16:03:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the generated summaries (including all information) and is not responsible for any consequences.