Efficient Bayesian Physics Informed Neural Networks for Inverse Problems via Ensemble Kalman Inversion
- URL: http://arxiv.org/abs/2303.07392v1
- Date: Mon, 13 Mar 2023 18:15:26 GMT
- Title: Efficient Bayesian Physics Informed Neural Networks for Inverse Problems via Ensemble Kalman Inversion
- Authors: Andrew Pensoneault and Xueyu Zhu
- Abstract summary: We present a new efficient inference algorithm for B-PINNs that uses Ensemble Kalman Inversion (EKI) for high-dimensional inference tasks.
We find that our proposed method can achieve inference results with informative uncertainty estimates comparable to Hamiltonian Monte Carlo (HMC)-based B-PINNs with a much reduced computational cost.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Bayesian Physics Informed Neural Networks (B-PINNs) have gained significant attention for inferring physical parameters and learning the forward solutions of problems based on partial differential equations. However, the overparameterized nature of neural networks poses a computational challenge for high-dimensional posterior inference. Existing inference approaches, such as particle-based or variational inference methods, are either computationally expensive for high-dimensional posterior inference or provide unsatisfactory uncertainty estimates. In this paper, we present a new efficient inference algorithm for B-PINNs that uses Ensemble Kalman Inversion (EKI) for high-dimensional inference tasks. We find that our proposed method can achieve inference results with informative uncertainty estimates comparable to Hamiltonian Monte Carlo (HMC)-based B-PINNs at a much reduced computational cost. These findings suggest that our proposed approach has great potential for uncertainty quantification in physics-informed machine learning for practical applications.
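For concreteness, here is a minimal NumPy sketch of one standard perturbed-observation EKI update, the building block the paper applies to B-PINN inference. The function and variable names are our own illustrative choices, not the authors' implementation; in the B-PINN setting, each row of `thetas` would be a flattened vector of network weights and `G` would map weights to the network's predictions (and PDE residuals) at the observation points.

```python
import numpy as np

def eki_update(thetas, G, y, gamma, rng):
    """One perturbed-observation Ensemble Kalman Inversion step.

    thetas: (J, d) ensemble of parameter vectors
    G:      forward map, theta -> predicted observations, shape (m,)
    y:      observed data, shape (m,)
    gamma:  (m, m) observation-noise covariance
    """
    J = thetas.shape[0]
    Gs = np.stack([G(t) for t in thetas])        # (J, m) ensemble predictions
    dth = thetas - thetas.mean(axis=0)           # parameter deviations
    dG = Gs - Gs.mean(axis=0)                    # prediction deviations
    C_tg = dth.T @ dG / J                        # (d, m) cross-covariance
    C_gg = dG.T @ dG / J                         # (m, m) prediction covariance
    # Each member assimilates an independently perturbed copy of the data.
    y_pert = y + rng.multivariate_normal(np.zeros_like(y), gamma, size=J)
    gain = np.linalg.solve(C_gg + gamma, (y_pert - Gs).T)   # (m, J)
    return thetas + (C_tg @ gain).T              # Kalman-type update, (J, d)
```

Iterating this update concentrates the ensemble on parameters consistent with the data, and the remaining ensemble spread is what supplies the uncertainty estimate, at a fraction of the cost of HMC sampling.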
Related papers
- Bayesian Physics Informed Neural Networks for Linear Inverse problems (arXiv, 2025-02-18)
Inverse problems arise in science and engineering when a quantity of interest must be inferred from indirect observations.
The B-PINN concept integrates physical laws with deep learning techniques to improve speed, accuracy, and efficiency.
The authors consider both supervised and unsupervised training, obtain expressions for the posterior probability of the unknown variables, and deduce the posterior laws of the NN parameters.
- PACMANN: Point Adaptive Collocation Method for Artificial Neural Networks (arXiv, 2024-11-29)
PINNs minimize a loss function that includes the PDE residual evaluated at a set of collocation points.
Previous work has shown that the number and distribution of these collocation points have a significant influence on the accuracy of the PINN solution.
We present the Point Adaptive Collocation Method for Artificial Neural Networks (PACMANN); a short sketch of the residual-and-adaptivity idea follows.
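As a rough illustration (our own toy example, not the paper's code), the sketch below computes the collocation-point residual of a 1D Poisson problem with PyTorch autograd and nudges the points toward regions of large residual; PACMANN's actual update rule may differ. The source term `sin(x)` is a hypothetical placeholder.

```python
import torch

def pde_residual(model, x):
    """Residual r(x) = u''(x) - f(x) of a 1D Poisson problem u'' = f."""
    x = x.clone().requires_grad_(True)
    u = model(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return d2u - torch.sin(x), x        # sin(x): hypothetical source term

def pinn_loss(model, x_colloc, x_bc, u_bc, step=1e-2):
    r, x_req = pde_residual(model, x_colloc)
    loss = (r**2).mean() + ((model(x_bc) - u_bc)**2).mean()
    # Adaptive-collocation sketch: move points up the squared-residual
    # gradient so they cluster where the PDE is violated most.
    (g,) = torch.autograd.grad((r**2).sum(), x_req, retain_graph=True)
    return loss, (x_colloc + step * g.sign()).detach()
```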
- Improvement of Bayesian PINN Training Convergence in Solving Multi-scale PDEs with Noise (arXiv, 2024-08-18)
In practice, the Hamiltonian Monte Carlo (HMC) sampler used to estimate the internal parameters of a B-PINN often runs into trouble.
We develop a robust multi-scale Bayesian PINN (dubbed MBPINN) method by integrating multi-scale neural networks (MscaleDNN) with Bayesian inference.
Our findings indicate that the proposed method can avoid HMC failures and provide valid results.
- Learning Active Subspaces for Effective and Scalable Uncertainty Quantification in Deep Neural Networks (arXiv, 2023-09-06)
We propose a novel scheme for constructing a low-dimensional subspace of the neural network parameters.
We demonstrate that the significantly reduced active subspace enables effective and scalable Bayesian inference.
Our approach provides reliable predictions with robust uncertainty estimates for various regression tasks.
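The classical active-subspace construction (a sketch under our own assumptions; the paper's specific scheme may differ) eigendecomposes the covariance of loss gradients and keeps only the dominant directions, so that Bayesian inference runs in a small subspace:

```python
import numpy as np

def active_subspace(grads, k):
    """Top-k active subspace from per-sample loss gradients.

    grads: (N, d) matrix; each row is the gradient of the loss for one
    sample w.r.t. the d (flattened) network parameters.
    Returns a (d, k) orthonormal basis of the dominant directions.
    """
    C = grads.T @ grads / grads.shape[0]   # uncentered gradient covariance
    w, V = np.linalg.eigh(C)               # eigenvalues in ascending order
    return V[:, ::-1][:, :k]               # top-k eigenvectors

# Inference then works on z in R^k via theta = theta_ref + W @ z,
# where theta_ref is, e.g., a MAP estimate and W the returned basis.
```

Forming C explicitly assumes a moderate parameter count d; for large networks one would substitute a randomized low-rank factorization.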
- Information Bottleneck Analysis of Deep Neural Networks via Lossy Compression (arXiv, 2023-05-13)
The Information Bottleneck (IB) principle offers an information-theoretic framework for analyzing the training process of deep neural networks (DNNs).
In this paper, we introduce a framework for conducting IB analysis of general NNs.
We also perform IB analysis at a close-to-real scale, which reveals new features of the mutual information (MI) dynamics.
- Efficient Bayesian inference using physics-informed invertible neural networks for inverse problems (arXiv, 2023-04-25)
We introduce an approach for addressing Bayesian inverse problems using physics-informed invertible neural networks (PI-INN).
The PI-INN offers a precise and efficient generative model for Bayesian inverse problems, yielding tractable posterior density estimates.
As a particular physics-informed deep learning model, the primary training challenge for the PI-INN centers on enforcing the independence constraint.
- Bayesian deep learning framework for uncertainty quantification in high dimensions (arXiv, 2022-10-21)
We develop a novel deep learning method for uncertainty quantification in partial differential equations based on a Bayesian neural network (BNN) and Hamiltonian Monte Carlo (HMC).
A BNN efficiently learns the posterior distribution of the parameters in deep neural networks by performing Bayesian inference on the network parameters.
The posterior distribution is efficiently sampled using HMC to quantify uncertainties in the system.
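For reference, here is a minimal sketch of one HMC transition (the standard leapfrog integrator plus a Metropolis correction); the names are ours and the paper's implementation details may differ, but this is the sampler that B-PINN-style methods typically rely on.

```python
import numpy as np

def leapfrog(theta, p, grad_logp, step, n_steps):
    """Leapfrog integration of the Hamiltonian dynamics."""
    p = p + 0.5 * step * grad_logp(theta)
    for _ in range(n_steps - 1):
        theta = theta + step * p
        p = p + step * grad_logp(theta)
    theta = theta + step * p
    p = p + 0.5 * step * grad_logp(theta)
    return theta, p

def hmc_step(theta, logp, grad_logp, step, n_steps, rng):
    """One HMC transition targeting the posterior with density exp(logp)."""
    p0 = rng.standard_normal(theta.shape)      # resample momentum
    theta_new, p_new = leapfrog(theta, p0, grad_logp, step, n_steps)
    log_alpha = (logp(theta_new) - 0.5 * p_new @ p_new) - \
                (logp(theta) - 0.5 * p0 @ p0)  # Metropolis ratio
    return theta_new if np.log(rng.uniform()) < log_alpha else theta
```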
- Single Model Uncertainty Estimation via Stochastic Data Centering (arXiv, 2022-07-14)
We are interested in estimating the uncertainties of deep neural networks.
We present a striking new finding: an ensemble of neural networks with the same weight initialization, trained on datasets that are shifted by a constant bias, gives rise to slightly inconsistent trained models.
We show that $\Delta$-UQ's uncertainty estimates are superior to many of the current methods on a variety of benchmarks.
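A loose sketch of the shifted-dataset idea described above, under our own reading: `train_fn` is hypothetical, and the published $\Delta$-UQ method (per its title, a single-model estimator) folds the anchors into one network rather than training a literal ensemble.

```python
import numpy as np

def shifted_ensemble(train_fn, X, y, X_test, anchors):
    """Uncertainty from constant input shifts.

    train_fn(X, y) -> fitted model with .predict(); every call starts
    from the same weight initialization. `anchors` are constant biases.
    """
    preds = []
    for c in anchors:
        model = train_fn(X + c, y)               # train on shifted data
        preds.append(model.predict(X_test + c))  # query with the same shift
    preds = np.stack(preds)
    return preds.mean(axis=0), preds.std(axis=0)  # prediction and spread
```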
- Learning Physics-Informed Neural Networks without Stacked Back-propagation (arXiv, 2022-02-18)
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian smoothed model and show that, via Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
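To make the Stein-identity trick concrete: for the Gaussian-smoothed model $f_\sigma(x) = \mathbb{E}_{\delta \sim \mathcal{N}(0,\sigma^2 I)}[f(x+\delta)]$, the Hessian reduces to $\nabla^2 f_\sigma(x) = \mathbb{E}[f(x+\delta)(\delta\delta^\top - \sigma^2 I)]/\sigma^4$, which needs only forward evaluations. Below is a minimal Monte Carlo sketch of this estimator (our own, not the paper's code).

```python
import numpy as np

def smoothed_hessian(f, x, sigma=0.1, n=2048, rng=None):
    """Estimate the Hessian of the Gaussian-smoothed f at x using
    Stein's identity -- forward evaluations only, no back-propagation."""
    rng = rng or np.random.default_rng(0)
    d = x.shape[0]
    delta = sigma * rng.standard_normal((n, d))       # N(0, sigma^2 I) samples
    fvals = np.array([f(x + dl) for dl in delta])     # (n,) forward passes
    outer = np.einsum('ni,nj->nij', delta, delta)     # (n, d, d) outer products
    weights = (outer - sigma**2 * np.eye(d)) / sigma**4
    return np.einsum('n,nij->ij', fvals, weights) / n
```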
- NTopo: Mesh-free Topology Optimization using Implicit Neural Representations (arXiv, 2021-02-22)
We present a novel machine learning approach to tackle topology optimization problems.
We use multilayer perceptrons (MLPs) to parameterize both density and displacement fields.
As we show through our experiments, a major benefit of our approach is that it enables self-supervised learning of continuous solution spaces.
This list is automatically generated from the titles and abstracts of the papers on this site.