Efficient Bayesian inference using physics-informed invertible neural
networks for inverse problems
- URL: http://arxiv.org/abs/2304.12541v3
- Date: Tue, 3 Oct 2023 03:11:35 GMT
- Title: Efficient Bayesian inference using physics-informed invertible neural
networks for inverse problems
- Authors: Xiaofei Guan, Xintong Wang, Hao Wu, Zihao Yang and Peng Yu
- Abstract summary: We introduce an innovative approach for addressing Bayesian inverse problems through the utilization of physics-informed invertible neural networks (PI-INN).
The PI-INN offers a precise and efficient generative model for Bayesian inverse problems, yielding tractable posterior density estimates.
As a particular physics-informed deep learning model, the primary training challenge for PI-INN centers on enforcing the independence constraint.
- Score: 6.97393424359704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we introduce an innovative approach for addressing Bayesian
inverse problems through the utilization of physics-informed invertible neural
networks (PI-INN). The PI-INN framework encompasses two sub-networks: an
invertible neural network (INN) and a neural basis network (NB-Net). The
primary role of the NB-Net lies in modeling the spatial basis functions
characterizing the solution to the forward problem dictated by the underlying
partial differential equation. Simultaneously, the INN is designed to partition
the parameter vector linked to the input physical field into two distinct
components: the expansion coefficients representing the forward problem
solution and the Gaussian latent noise. If the forward mapping is precisely
estimated, and the statistical independence between expansion coefficients and
latent noise is well-maintained, the PI-INN offers a precise and efficient
generative model for Bayesian inverse problems, yielding tractable posterior
density estimates. As a particular physics-informed deep learning model, the
primary training challenge for PI-INN centers on enforcing the independence
constraint, which we tackle by introducing a novel independence loss based on
estimated density. We support the efficacy and precision of the proposed PI-INN
through a series of numerical experiments, including inverse kinematics,
1-dimensional and 2-dimensional diffusion equations, and seismic traveltime
tomography. Specifically, our experimental results showcase the superior
performance of the proposed independence loss in comparison to the commonly
used but computationally demanding kernel-based maximum mean discrepancy loss.
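The invertibility at the heart of the INN sub-network typically comes from coupling-layer architectures. The following is a minimal illustrative sketch (not the authors' code; the dimensions, toy conditioner, and block sizes are assumptions) of a single affine coupling layer whose output is partitioned into a "coefficient" block and a pass-through block, with an exact analytic inverse:

```python
import numpy as np

# Minimal sketch (not the authors' implementation) of the invertible
# building block used in INN-style models such as PI-INN: one affine
# coupling layer.  It maps a parameter vector x to y = [s; z], where s
# stands in for the expansion coefficients of the forward solution and
# z for the latent part.  Dimensions and the conditioner are assumptions.

rng = np.random.default_rng(0)

D = 8          # dimension of the parameter vector (assumed)
D_S = 5        # size of the "coefficient" block s (assumed)
W1 = rng.normal(scale=0.3, size=(D_S, D - D_S))  # toy conditioner weights
W2 = rng.normal(scale=0.3, size=(D_S, D - D_S))

def conditioner(u):
    """Tiny stand-in for the sub-network predicting (log-scale, shift)."""
    return np.tanh(W1 @ u), W2 @ u

def forward(x):
    """Affine coupling: transform the first block conditioned on the second."""
    x1, x2 = x[:D_S], x[D_S:]
    log_s, t = conditioner(x2)
    return np.concatenate([x1 * np.exp(log_s) + t, x2])

def inverse(y):
    """Exact analytic inverse of forward() -- no iterative solve needed."""
    y1, y2 = y[:D_S], y[D_S:]
    log_s, t = conditioner(y2)
    return np.concatenate([(y1 - t) * np.exp(-log_s), y2])

x = rng.normal(size=D)
y = forward(x)
x_rec = inverse(y)
print(np.max(np.abs(x - x_rec)))  # machine-precision reconstruction
```

A full INN stacks many such layers (with permutations between them); the independence loss proposed in the paper would then be imposed between the coefficient block and the Gaussian latent block of the stacked map.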
Related papers
- General-Kindred Physics-Informed Neural Network to the Solutions of Singularly Perturbed Differential Equations [11.121415128908566]
We propose the General-Kindred Physics-Informed Neural Network (GKPINN) for solving Singular Perturbation Differential Equations (SPDEs).
This approach utilizes prior knowledge of the boundary layer from the equation and establishes a novel network to assist the PINN in approximating the boundary layer.
The research findings underscore the exceptional performance of our novel approach, GKPINN, which delivers a remarkable enhancement in reducing the $L_2$ error by two to four orders of magnitude compared to the established PINN methodology.
arXiv Detail & Related papers (2024-08-27T02:03:22Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- On the uncertainty analysis of the data-enabled physics-informed neural network for solving neutron diffusion eigenvalue problem [4.0275959184316825]
We investigate the performance of DEPINN in calculating the neutron diffusion eigenvalue problem from several perspectives.
In order to reduce the effect of noise and improve the utilization of the noisy prior data, we propose innovative interval loss functions.
This paper confirms the feasibility of the improved DEPINN for practical engineering applications in nuclear reactor physics.
arXiv Detail & Related papers (2023-03-15T08:59:03Z)
- Momentum Diminishes the Effect of Spectral Bias in Physics-Informed Neural Networks [72.09574528342732]
Physics-informed neural network (PINN) algorithms have shown promising results in solving a wide range of problems involving partial differential equations (PDEs).
They often fail to converge to desirable solutions when the target function contains high-frequency features, due to a phenomenon known as spectral bias.
In the present work, we exploit neural tangent kernels (NTKs) to investigate the training dynamics of PINNs evolving under stochastic gradient descent with momentum (SGDM).
arXiv Detail & Related papers (2022-06-29T19:03:10Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by the Gaussian smoothed model and show that, derived from Stein's Identity, the second-order derivatives can be efficiently calculated without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
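The Stein's-identity trick summarized above can be illustrated with a tiny Monte Carlo sketch (the function f, smoothing scale, and sample count are illustrative assumptions, not the paper's setup): for a Gaussian-smoothed surrogate g(x) = E[f(x + sigma*eps)] with eps ~ N(0, I), first and second derivatives of g follow from function values alone, with no back-propagation through f.

```python
import numpy as np

# Hedged sketch of the Stein's-identity derivative estimators:
#   g'(x)  ~ E[ eps * (f(x + sigma*eps) - f(x)) ] / sigma
#   g''(x) ~ E[ (eps^2 - 1) * (f(x + sigma*eps) - f(x)) ] / sigma^2
# Subtracting f(x) is a standard variance-reduction step.
# f, sigma, and n below are illustrative choices, not the paper's setup.

rng = np.random.default_rng(1)

def f(x):
    return x ** 2  # toy "network output"

x, sigma, n = 0.5, 0.5, 200_000
eps = rng.standard_normal(n)
fe = f(x + sigma * eps) - f(x)

grad = np.mean(eps * fe) / sigma                    # estimates g'(x)
hess = np.mean((eps ** 2 - 1.0) * fe) / sigma ** 2  # estimates g''(x)

# For f(x) = x^2, g(x) = x^2 + sigma^2, so g'(0.5) = 1 and g''(x) = 2.
print(grad, hess)
```

In a PINN context this lets PDE residuals, which need second derivatives of the network output, be evaluated from forward passes only.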
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks [83.58049517083138]
We consider a two-layer ReLU network trained via gradient descent.
We show that SGD is biased towards a simple solution.
We also provide empirical evidence that knots at locations distinct from the data points might occur.
arXiv Detail & Related papers (2021-11-03T15:14:20Z)
- Robust Learning of Physics Informed Neural Networks [2.86989372262348]
Physics-informed Neural Networks (PINNs) have been shown to be effective in solving partial differential equations.
This paper shows that a PINN can be sensitive to errors in training data and overfit itself in dynamically propagating these errors over the domain of the solution of the PDE.
arXiv Detail & Related papers (2021-10-26T00:10:57Z)
- Multi-fidelity Bayesian Neural Networks: Algorithms and Applications [0.0]
We propose a new class of Bayesian neural networks (BNNs) that can be trained using noisy data of variable fidelity.
We apply them to learn function approximations as well as to solve inverse problems based on partial differential equations (PDEs).
arXiv Detail & Related papers (2020-12-19T02:03:53Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game in which both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.