Error-Aware B-PINNs: Improving Uncertainty Quantification in Bayesian
Physics-Informed Neural Networks
- URL: http://arxiv.org/abs/2212.06965v1
- Date: Wed, 14 Dec 2022 01:15:26 GMT
- Title: Error-Aware B-PINNs: Improving Uncertainty Quantification in Bayesian
Physics-Informed Neural Networks
- Authors: Olga Graf, Pablo Flores, Pavlos Protopapas, Karim Pichara
- Abstract summary: Uncertainty Quantification (UQ) is just beginning to emerge in the context of PINNs.
We propose a framework for UQ in Bayesian PINNs (B-PINNs) that incorporates the discrepancy between the B-PINN solution and the unknown true solution.
We exploit recent results on error bounds for PINNs on linear dynamical systems and demonstrate the predictive uncertainty on a class of linear ODEs.
- Score: 2.569295887779268
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physics-Informed Neural Networks (PINNs) are gaining popularity as a method
for solving differential equations. While more practical than classical
numerical techniques in some contexts, PINNs still lack credibility. A remedy
can be found in Uncertainty Quantification (UQ), which is just beginning to
emerge in the context of PINNs. Assessing how well the trained PINN complies
with the imposed differential equation is key to tackling uncertainty, yet a
comprehensive methodology for this task is lacking. We
propose a framework for UQ in Bayesian PINNs (B-PINNs) that incorporates the
discrepancy between the B-PINN solution and the unknown true solution. We
exploit recent results on error bounds for PINNs on linear dynamical systems
and demonstrate the predictive uncertainty on a class of linear ODEs.
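The framework builds on the standard PINN idea of penalizing the differential-equation residual at collocation points; the better the residual is minimized, the closer the network is to complying with the physics. A minimal numpy sketch for a linear ODE (the ODE, candidate solutions, and grid are illustrative choices, not the paper's setup or architecture):

```python
import numpy as np

# Illustrative linear ODE: u'(t) = -u(t), u(0) = 1, exact solution exp(-t).
# A PINN-style loss penalizes the mean squared ODE residual |u'(t) + u(t)|^2
# at collocation points; in a real PINN, u and u' come from a neural network
# and automatic differentiation.

def residual_loss(u, du_dt, t):
    """Mean squared ODE residual over the collocation grid."""
    return np.mean((du_dt(t) + u(t)) ** 2)

t = np.linspace(0.0, 1.0, 101)  # collocation points

# A surrogate that satisfies the ODE exactly ...
u_good = lambda s: np.exp(-s)
du_good = lambda s: -np.exp(-s)

# ... and one that ignores the physics.
u_bad = lambda s: 1.0 - s
du_bad = lambda s: -np.ones_like(s)

loss_good = residual_loss(u_good, du_good, t)  # zero: ODE is satisfied
loss_bad = residual_loss(u_bad, du_bad, t)     # large residual
```

In the Bayesian setting the paper addresses, this residual is what error bounds for linear dynamical systems can relate to the discrepancy between the B-PINN solution and the unknown true solution.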
Related papers
- Improving PINNs By Algebraic Inclusion of Boundary and Initial Conditions [0.1874930567916036]
"AI for Science" aims to solve fundamental scientific problems using AI techniques.
In this work we explore changing the trained model from a plain neural network to a non-linear transformation of one.
This reduces the number of terms in the loss function compared to the standard PINN loss.
arXiv Detail & Related papers (2024-07-30T11:19:48Z)
- Conformalized Physics-Informed Neural Networks [0.8437187555622164]
We introduce Conformalized PINNs (C-PINNs), which use the framework of conformal prediction to quantify the uncertainty of PINNs.
arXiv Detail & Related papers (2024-05-13T18:45:25Z)
- The #DNN-Verification Problem: Counting Unsafe Inputs for Deep Neural Networks [94.63547069706459]
The #DNN-Verification problem involves counting the number of input configurations of a DNN that violate a safety property.
We propose a novel approach that returns the exact count of violations.
We present experimental results on a set of safety-critical benchmarks.
arXiv Detail & Related papers (2023-01-17T18:32:01Z)
- Failure-informed adaptive sampling for PINNs [5.723850818203907]
Physics-informed neural networks (PINNs) have emerged as an effective technique for solving PDEs in a wide range of domains.
Recent research has demonstrated, however, that the performance of PINNs can vary dramatically with different sampling procedures.
We present an adaptive approach termed failure-informed PINNs, which is inspired by the viewpoint of reliability analysis.
arXiv Detail & Related papers (2022-10-01T13:34:41Z)
- Revisiting PINNs: Generative Adversarial Physics-informed Neural Networks and Point-weighting Method [70.19159220248805]
Physics-informed neural networks (PINNs) provide a deep learning framework for numerically solving partial differential equations (PDEs).
We propose the generative adversarial physics-informed neural network (GA-PINN), which integrates the generative adversarial (GA) mechanism with the structure of PINNs.
Inspired by the weighting strategy of the AdaBoost method, we then introduce a point-weighting (PW) method to improve the training efficiency of PINNs.
arXiv Detail & Related papers (2022-05-18T06:50:44Z) - Competitive Physics Informed Networks [8.724433470897763]
PINNs solve partial differential equations (PDEs) by representing them as neural networks.
We formulate and test an adversarial approach called competitive PINNs (CPINNs) to overcome the accuracy limitations of standard PINN training.
CPINNs train a discriminator that is rewarded for predicting PINN mistakes.
Numerical experiments show that a CPINN trained with competitive gradient descent can achieve errors two orders of magnitude smaller than those of a PINN trained with Adam or gradient descent.
arXiv Detail & Related papers (2022-04-23T22:01:37Z) - Improved Training of Physics-Informed Neural Networks with Model
Ensembles [81.38804205212425]
We propose to expand the solution interval gradually to make the PINN converge to the correct solution.
All ensemble members converge to the same solution in the vicinity of observed data.
We show experimentally that the proposed method can improve the accuracy of the found solution.
arXiv Detail & Related papers (2022-04-11T14:05:34Z)
- Certified machine learning: A posteriori error estimation for physics-informed neural networks [0.0]
PINNs are known to be robust for smaller training sets, to generalize better, and to be faster to train.
We show that using PINNs instead of purely data-driven neural networks is not only favorable for training performance but also allows us to extract significant information about the quality of the approximated solution.
arXiv Detail & Related papers (2022-03-31T14:23:04Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to a lack of expressivity in the NN architecture, but rather that the PINN setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.