B-PINNs: Bayesian Physics-Informed Neural Networks for Forward and
Inverse PDE Problems with Noisy Data
- URL: http://arxiv.org/abs/2003.06097v1
- Date: Fri, 13 Mar 2020 04:00:42 GMT
- Title: B-PINNs: Bayesian Physics-Informed Neural Networks for Forward and
Inverse PDE Problems with Noisy Data
- Authors: Liu Yang, Xuhui Meng, George Em Karniadakis
- Abstract summary: We propose a Bayesian physics-informed neural network (B-PINN) to solve both forward and inverse nonlinear PDE problems.
B-PINNs make use of both physical laws and scattered noisy measurements to provide predictions and quantify the aleatoric uncertainty.
- Score: 7.33020629757864
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a Bayesian physics-informed neural network (B-PINN) to solve both
forward and inverse nonlinear problems described by partial differential
equations (PDEs) and noisy data. In this Bayesian framework, the Bayesian
neural network (BNN) combined with a PINN for PDEs serves as the prior while
the Hamiltonian Monte Carlo (HMC) or the variational inference (VI) could serve
as an estimator of the posterior. B-PINNs make use of both physical laws and
scattered noisy measurements to provide predictions and quantify the aleatoric
uncertainty arising from the noisy data in the Bayesian framework. Compared
with PINNs, in addition to uncertainty quantification, B-PINNs obtain more
accurate predictions in scenarios with large noise due to their capability of
avoiding overfitting. We conduct a systematic comparison between the two
different approaches for the B-PINN posterior estimation (i.e., HMC or VI),
along with dropout used for quantifying uncertainty in deep neural networks.
Our experiments show that HMC is more suitable than VI for the B-PINNs
posterior estimation, while dropout employed in PINNs can hardly provide
accurate predictions with reasonable uncertainty. Finally, we replace the BNN
in the prior with a truncated Karhunen-Loève (KL) expansion combined with HMC
or a deep normalizing flow (DNF) model as posterior estimators. The KL prior is as
accurate as the BNN prior and much faster, but, unlike the BNN-based framework, it
cannot be easily extended to high-dimensional problems.
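To make the construction above concrete, here is a minimal sketch (not the authors' released code) of the unnormalized B-PINN log-posterior for a toy 1D problem u_xx = f with noisy sensors for both u and f: a Gaussian prior over the BNN weights plus Gaussian likelihoods for the solution data and the PDE residual. The network architecture, sensor locations, and noise scales below are illustrative assumptions, and the HMC (or VI) step that would actually sample or approximate this posterior is omitted.

```python
import math
import torch

torch.manual_seed(0)

# Small fully connected surrogate u_theta(x); its weights receive a Gaussian prior (the BNN prior).
net = torch.nn.Sequential(
    torch.nn.Linear(1, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 20), torch.nn.Tanh(),
    torch.nn.Linear(20, 1),
)

# Hypothetical noisy sensors of the solution u and the forcing term f (values purely illustrative).
x_u = torch.linspace(-1.0, 1.0, 8).reshape(-1, 1)
u_obs = torch.sin(math.pi * x_u) + 0.1 * torch.randn_like(x_u)
x_f = torch.linspace(-1.0, 1.0, 30).reshape(-1, 1)
f_obs = -(math.pi ** 2) * torch.sin(math.pi * x_f) + 0.1 * torch.randn_like(x_f)

sigma_u, sigma_f, sigma_w = 0.1, 0.1, 1.0  # assumed data-noise scales and prior std


def log_posterior(net):
    """Unnormalized log-posterior over the BNN weights: prior + data likelihood + PDE-residual likelihood."""
    # Independent Gaussian prior on every weight and bias.
    log_prior = sum(-(p ** 2).sum() / (2 * sigma_w ** 2) for p in net.parameters())

    # Gaussian likelihood of the noisy u measurements.
    log_lik_u = -((net(x_u) - u_obs) ** 2).sum() / (2 * sigma_u ** 2)

    # PDE residual u_xx - f at the f-sensor locations, via automatic differentiation.
    x = x_f.clone().requires_grad_(True)
    u = net(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x.sum(), x, create_graph=True)[0]
    log_lik_f = -((u_xx - f_obs) ** 2).sum() / (2 * sigma_f ** 2)

    return log_prior + log_lik_u + log_lik_f


print(float(log_posterior(net)))  # HMC or VI would repeatedly evaluate (and differentiate) this density
```

For an inverse problem, unknown PDE parameters would be treated as extra random variables with their own priors and inferred jointly with the network weights; replacing the BNN surrogate with a truncated KL expansion changes the prior but leaves this likelihood structure unchanged.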
Related papers
- Be Bayesian by Attachments to Catch More Uncertainty [27.047781689062944]
We propose a new Bayesian Neural Network with an Attached structure (ABNN) to catch more uncertainty from out-of-distribution (OOD) data.
ABNN is composed of an expectation module and several distribution modules.
arXiv Detail & Related papers (2023-10-19T07:28:39Z)
- Single-shot Bayesian approximation for neural networks [0.0]
Deep neural networks (NNs) are known for their high-prediction performances.
NNs are prone to yield unreliable predictions when encountering completely new situations without indicating their uncertainty.
We present a single-shot MC dropout approximation that preserves the advantages of BNNs while being as fast as NNs (a generic MC dropout sketch is included after this list).
arXiv Detail & Related papers (2023-08-24T13:40:36Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Constraining cosmological parameters from N-body simulations with Variational Bayesian Neural Networks [0.0]
Multiplicative normalizing flows (MNFs) are a family of approximate posteriors for the parameters of BNNs.
We compare MNFs against standard BNNs and the flipout estimator.
MNFs provide a more realistic predictive distribution, closer to the true posterior, mitigating the bias introduced by the variational approximation.
arXiv Detail & Related papers (2023-01-09T16:07:48Z)
- Error-Aware B-PINNs: Improving Uncertainty Quantification in Bayesian Physics-Informed Neural Networks [2.569295887779268]
Uncertainty Quantification (UQ) is just beginning to emerge in the context of PINNs.
We propose a framework for UQ in Bayesian PINNs (B-PINNs) that incorporates the discrepancy between the B-PINN solution and the unknown true solution.
We exploit recent results on error bounds for PINNs on linear dynamical systems and demonstrate the predictive uncertainty on a class of linear ODEs.
arXiv Detail & Related papers (2022-12-14T01:15:26Z)
- Bayesian Physics Informed Neural Networks for Data Assimilation and Spatio-Temporal Modelling of Wildfires [11.00425904688764]
We use the PINN to solve the level-set equation, which is a partial differential equation that models a fire-front through the zero-level-set of a level-set function.
We show that popular cost functions can fail to maintain temporal continuity in modelled fire-fronts when there are extreme changes in forcing variables.
We develop an approach to perform data assimilation within the PINN such that the modelled PINN predictions are drawn towards observations of the fire-front.
arXiv Detail & Related papers (2022-12-02T05:00:41Z)
- Variational Neural Networks [88.24021148516319]
We propose a method for uncertainty estimation in neural networks called the Variational Neural Network (VNN).
VNN generates parameters for the output distribution of a layer by transforming its inputs with learnable sub-layers.
In uncertainty quality estimation experiments, we show that VNNs achieve better uncertainty quality than Monte Carlo Dropout or Bayes By Backpropagation methods.
arXiv Detail & Related papers (2022-07-04T15:41:02Z)
- A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNN) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly-designed reward function that incorporates some degree of bias in order to reduce variance and avoid unstable, possibly-unbounded payouts.
arXiv Detail & Related papers (2021-03-01T15:55:58Z)
- An Infinite-Feature Extension for Bayesian ReLU Nets That Fixes Their Asymptotic Overconfidence [65.24701908364383]
A Bayesian treatment can mitigate overconfidence in ReLU nets around the training data.
But far away from the training data, ReLU Bayesian neural networks (BNNs) can still underestimate uncertainty and thus be overconfident.
We show that the proposed extension can be applied post-hoc to any pre-trained ReLU BNN at a low cost.
arXiv Detail & Related papers (2020-10-06T13:32:18Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
- Exact posterior distributions of wide Bayesian neural networks [51.20413322972014]
We show that the exact BNN posterior converges (weakly) to the one induced by the GP limit of the prior.
For empirical validation, we show how to generate exact samples from a finite BNN on a small dataset via rejection sampling.
arXiv Detail & Related papers (2020-06-18T13:57:04Z)
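As a point of contrast with the dropout baseline discussed in the abstract above (and with the single-shot approximation paper in this list), below is a generic Monte Carlo dropout sketch, assuming a standard PyTorch setup rather than any specific paper's code: dropout stays active at prediction time, and the spread over repeated stochastic forward passes serves as an uncertainty proxy. The architecture, dropout rate, and number of passes are arbitrary illustrative choices.

```python
import torch

torch.manual_seed(0)

# Toy regression network with a dropout layer; sizes and rate are illustrative.
model = torch.nn.Sequential(
    torch.nn.Linear(1, 50), torch.nn.ReLU(),
    torch.nn.Dropout(p=0.1),
    torch.nn.Linear(50, 1),
)

x = torch.linspace(-1.0, 1.0, 5).reshape(-1, 1)

model.train()  # keep dropout stochastic at inference time (the MC dropout trick)
with torch.no_grad():
    samples = torch.stack([model(x) for _ in range(100)])  # 100 stochastic forward passes

mean = samples.mean(dim=0)  # predictive mean
std = samples.std(dim=0)    # spread used as an uncertainty estimate
print(torch.cat([x, mean, std], dim=1))
```

In the B-PINN comparison, this kind of dropout-based estimate attached to a PINN was reported to give neither accurate predictions nor reasonable uncertainty under large noise, which is part of the motivation for the fully Bayesian treatment.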