Randomized Physics-Informed Neural Networks for Bayesian Data Assimilation
- URL: http://arxiv.org/abs/2407.04617v1
- Date: Fri, 5 Jul 2024 16:16:47 GMT
- Title: Randomized Physics-Informed Neural Networks for Bayesian Data Assimilation
- Authors: Yifei Zong, David Barajas-Solano, Alexandre M. Tartakovsky
- Abstract summary: We propose a randomized physics-informed neural network (rPINN) method for uncertainty quantification in inverse partial differential equation (PDE) problems with noisy data.
For the linear Poisson equation, HMC and rPINN produce similar distributions, but rPINN is on average 27 times faster than HMC.
For the non-linear Poisson and diffusion equations, the HMC method fails to converge because a single HMC chain cannot sample the multiple modes of the posterior distribution of the PINN parameters in a reasonable amount of time.
- Score: 44.99833362998488
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We propose a randomized physics-informed neural network (rPINN) method for uncertainty quantification in inverse partial differential equation (PDE) problems with noisy data. Recently, the Bayesian PINN (BPINN) method was proposed, in which the posterior distribution of the PINN parameters is formulated using Bayes' theorem and sampled with approximate inference methods such as Hamiltonian Monte Carlo (HMC) and variational inference (VI). In this work, we demonstrate that HMC fails to converge for non-linear inverse PDE problems. As an alternative to HMC, we sample the distribution by solving the stochastic optimization problem obtained by randomizing the PINN loss function. The effectiveness of the rPINN method is tested for linear and non-linear Poisson equations, and for the diffusion equation with a high-dimensional space-dependent diffusion coefficient. The rPINN method provides informative distributions for all considered problems. For the linear Poisson equation, HMC and rPINN produce similar distributions, but rPINN is on average 27 times faster than HMC. For the non-linear Poisson and diffusion equations, the HMC method fails to converge because a single HMC chain cannot sample the multiple modes of the posterior distribution of the PINN parameters in a reasonable amount of time.
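As a rough illustration of the randomize-then-optimize idea described in the abstract, the sketch below perturbs the observations and the Gaussian prior independently for each ensemble member and minimizes the resulting randomized PINN loss; each minimizer is treated as one approximate posterior sample. The 1D inverse Poisson setup, the network architecture, the noise levels, and the use of JAX/optax with Adam are illustrative assumptions, not the authors' implementation.

```python
# Minimal randomize-then-optimize sketch of the rPINN idea for a 1D inverse
# Poisson problem  -k u''(x) = f(x)  with noisy observations of u(x).
# All sizes, noise levels, and the forcing are illustrative assumptions.
import jax
import jax.numpy as jnp
from jax.flatten_util import ravel_pytree
import optax  # assumed available for the Adam optimizer

rng = jax.random.PRNGKey(0)

def init_params(rng, sizes=(1, 32, 32, 1)):
    params = []
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        rng, sub = jax.random.split(rng)
        params.append((jax.random.normal(sub, (n_in, n_out)) / jnp.sqrt(n_in),
                       jnp.zeros(n_out)))
    return params

def mlp(params, x):                       # scalar x -> scalar u(x)
    h = jnp.atleast_1d(x)
    for W, b in params[:-1]:
        h = jnp.tanh(h @ W + b)
    W, b = params[-1]
    return (h @ W + b)[0]

def residual(theta, x):                   # PDE residual of -k u'' - f at x
    params, log_k = theta
    u_xx = jax.grad(jax.grad(lambda z: mlp(params, z)))(x)
    return -jnp.exp(log_k) * u_xx - jnp.sin(jnp.pi * x)

# Synthetic noisy data (assumed); k_true = 2 gives u = sin(pi x)/(k pi^2).
x_col = jnp.linspace(0.0, 1.0, 64)        # residual (collocation) points
x_obs = jnp.linspace(0.0, 1.0, 8)         # measurement locations
sigma_y, sigma_r, sigma_p = 1e-3, 1e-3, 1.0
rng, sub = jax.random.split(rng)
y_obs = (jnp.sin(jnp.pi * x_obs) / (2.0 * jnp.pi**2)
         + sigma_y * jax.random.normal(sub, x_obs.shape))

def randomized_loss(theta, y_pert, prior_draw):
    # Perturbed data misfit + PDE residual + perturbed Gaussian prior on theta;
    # boundary-condition terms are omitted for brevity.
    res = jax.vmap(lambda x: residual(theta, x))(x_col)
    pred = jax.vmap(lambda x: mlp(theta[0], x))(x_obs)
    flat, _ = ravel_pytree(theta)
    return (jnp.sum(res**2) / sigma_r**2
            + jnp.sum((pred - y_pert)**2) / sigma_y**2
            + jnp.sum((flat - prior_draw)**2) / sigma_p**2)

def fit_one(rng, steps=2000, lr=1e-3):
    # One rPINN "posterior sample": minimize an independently randomized loss.
    k_init, k_y, k_p = jax.random.split(rng, 3)
    theta = (init_params(k_init), jnp.array(0.0))        # (weights, log k)
    flat0, _ = ravel_pytree(theta)
    y_pert = y_obs + sigma_y * jax.random.normal(k_y, y_obs.shape)
    prior_draw = sigma_p * jax.random.normal(k_p, flat0.shape)
    grad_fn = jax.jit(jax.grad(lambda th: randomized_loss(th, y_pert, prior_draw)))
    opt = optax.adam(lr)
    state = opt.init(theta)
    for _ in range(steps):
        updates, state = opt.update(grad_fn(theta), state)
        theta = optax.apply_updates(theta, updates)
    return theta

# The ensemble of minimizers approximates the posterior; the spread in log k
# (and in the predicted u field) quantifies the uncertainty.
ensemble = [fit_one(k) for k in jax.random.split(rng, 8)]
k_samples = jnp.exp(jnp.array([theta[1] for theta in ensemble]))
print(k_samples.mean(), k_samples.std())
```

One practical appeal of this formulation, consistent with the abstract's comparison to HMC, is that each minimization is independent: ensemble members can run in parallel, and no single chain has to traverse multiple posterior modes.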
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z) - Improvement of Bayesian PINN Training Convergence in Solving Multi-scale PDEs with Noise [34.11898314129823]
In practice, the Hamiltonian Monte Carlo (HMC) sampler used to estimate the internal parameters of the BPINN often runs into trouble.
We develop a robust multi-scale Bayesian PINN (dubbed MBPINN) method by integrating multi-scale neural networks (MscaleDNN) and Bayesian inference.
Our findings indicate that the proposed method can avoid HMC failures and provide valid results.
arXiv Detail & Related papers (2024-08-18T03:20:16Z) - Randomized Physics-Informed Machine Learning for Uncertainty Quantification in High-Dimensional Inverse Problems [49.1574468325115]
We propose a physics-informed machine learning method for uncertainty quantification in high-dimensional inverse problems.
We show analytically and through comparison with Hamiltonian Monte Carlo that the rPICKLE posterior converges to the true posterior given by Bayes' rule.
arXiv Detail & Related papers (2023-12-11T07:33:16Z) - Noise-Free Sampling Algorithms via Regularized Wasserstein Proximals [3.4240632942024685]
We consider the problem of sampling from a distribution governed by a potential function.
This work proposes an explicit score-based MCMC method whose particle evolution is deterministic.
arXiv Detail & Related papers (2023-08-28T23:51:33Z) - Monte Carlo Neural PDE Solver for Learning PDEs via Probabilistic Representation [59.45669299295436]
We propose a Monte Carlo PDE solver for training unsupervised neural solvers.
We use the PDEs' probabilistic representation, which regards macroscopic phenomena as ensembles of random particles.
Our experiments on convection-diffusion, Allen-Cahn, and Navier-Stokes equations demonstrate significant improvements in accuracy and efficiency.
arXiv Detail & Related papers (2023-02-10T08:05:19Z) - Physics-Informed Neural Network Method for Parabolic Differential Equations with Sharply Perturbed Initial Conditions [68.8204255655161]
We develop a physics-informed neural network (PINN) model for parabolic problems with a sharply perturbed initial condition.
Localized large gradients in the ADE solution make Latin hypercube sampling of the equation's residual points (common in PINNs) highly inefficient.
We propose criteria for weights in the loss function that produce a more accurate PINN solution than those obtained with the weights selected via other methods.
arXiv Detail & Related papers (2022-08-18T05:00:24Z) - Fully probabilistic deep models for forward and inverse problems in parametric PDEs [1.9599274203282304]
We introduce a physics-driven deep latent variable model (PDDLVM) to simultaneously learn parameter-to-solution (forward) and solution-to-parameter (inverse) maps of PDEs.
The proposed framework can be easily extended to seamlessly integrate observed data to solve inverse problems and to build generative models.
We demonstrate the efficiency and robustness of our method on finite element discretized parametric PDE problems.
arXiv Detail & Related papers (2022-08-09T15:40:53Z) - Matching Normalizing Flows and Probability Paths on Manifolds [57.95251557443005]
Continuous Normalizing Flows (CNFs) are generative models that transform a prior distribution into a model distribution by solving an ordinary differential equation (ODE).
We propose to train CNFs by minimizing probability path divergence (PPD), a novel family of divergences between the probability density path generated by the CNF and a target probability density path.
We show that CNFs learned by minimizing PPD achieve state-of-the-art results in likelihoods and sample quality on existing low-dimensional manifold benchmarks.
arXiv Detail & Related papers (2022-07-11T08:50:19Z)