Certified machine learning: A posteriori error estimation for
physics-informed neural networks
- URL: http://arxiv.org/abs/2203.17055v1
- Date: Thu, 31 Mar 2022 14:23:04 GMT
- Authors: Birgit Hillebrecht, Benjamin Unger
- Abstract summary: PINNs are known to be robust for smaller training sets, to generalize better, and to be faster to train.
We show that using PINNs, in comparison with purely data-driven neural networks, is not only favorable for training performance but also allows us to extract significant information on the quality of the approximated solution.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Physics-informed neural networks (PINNs) are one popular approach to
introduce a priori knowledge about physical systems into the learning
framework. PINNs are known to be robust for smaller training sets, to
generalize better, and to be faster to train. In this paper, we show
that using PINNs, in comparison with purely data-driven neural networks, is not
only favorable for training performance but also allows us to extract significant
information on the quality of the approximated solution. Assuming that the
information on the quality of the approximated solution. Assuming that the
underlying differential equation for the PINN training is an ordinary
differential equation, we derive a rigorous upper bound on the PINN prediction
error. This bound is applicable even for input data not included in the
training phase and without any prior knowledge about the true solution.
Therefore, our a posteriori error estimation is an essential step toward
certifying the PINN. We apply our error estimator to two academic toy
problems, one of which falls into the category of model-predictive control
and thereby demonstrates the practical use of the derived results.
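To give a flavor of how such a bound can be evaluated in practice, the sketch below computes a Grönwall-type a posteriori bound for an ODE x' = f(x) from the PINN residual. It is a minimal illustration in the spirit of the paper's result, not its exact semigroup-based estimator; the Lipschitz constant and the quadrature grid are user-supplied assumptions.

```python
import numpy as np

def a_posteriori_bound(ts, residual_norms, lipschitz_L, e0=0.0):
    """Gronwall-type bound  ||e(t)|| <= exp(L t) ||e(0)||
    + int_0^t exp(L (t - s)) ||r(s)|| ds,  evaluated on a time grid
    with trapezoidal quadrature.  A sketch in the spirit of the paper's
    estimate, not its exact semigroup-based bound."""
    bounds = []
    for i, t in enumerate(ts):
        integrand = np.exp(lipschitz_L * (t - ts[: i + 1])) * residual_norms[: i + 1]
        # Trapezoidal rule; the slice is empty for i == 0, where the integral is zero.
        integral = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(ts[: i + 1]))
        bounds.append(np.exp(lipschitz_L * t) * e0 + integral)
    return np.array(bounds)

# Hypothetical usage: residual norms ||x_hat'(t) - f(x_hat(t))|| evaluated on a
# grid (e.g. via automatic differentiation of the trained PINN); L is a known
# Lipschitz constant of f.
ts = np.linspace(0.0, 1.0, 101)
residual_norms = np.full_like(ts, 1e-3)   # placeholder residual values
print(a_posteriori_bound(ts, residual_norms, lipschitz_L=2.0)[-1])
```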
Related papers
- Characteristic Performance Study on Solving Oscillator ODEs via Soft-constrained Physics-informed Neural Network with Small Data [6.3295494018089435]
This paper compares physics-informed neural networks (PINNs), conventional neural networks (NNs), and traditional numerical discretization methods for solving differential equations (DEs).
We focus on the soft-constrained PINN approach and formalize its mathematical framework and computational flow for solving ordinary and partial DEs.
We demonstrate that the DeepXDE-based implementation of PINN is not only lightweight in code and efficient in training, but also flexible across CPU/GPU platforms.
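As a minimal sketch of what a soft-constrained PINN loss looks like (an illustrative PyTorch setup, not the paper's DeepXDE configuration), the following trains a small network on the harmonic oscillator u'' + omega^2 u = 0 with the ODE residual and initial conditions added as penalty terms:

```python
import torch

torch.manual_seed(0)
omega = 2.0
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

t = torch.linspace(0.0, 2.0, 64).reshape(-1, 1).requires_grad_(True)
t0 = torch.zeros(1, 1, requires_grad=True)

for step in range(3000):
    u = net(t)
    du = torch.autograd.grad(u.sum(), t, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), t, create_graph=True)[0]
    residual = d2u + omega**2 * u          # ODE residual of u'' + omega^2 u = 0
    u0 = net(t0)
    du0 = torch.autograd.grad(u0.sum(), t0, create_graph=True)[0]
    # Soft constraints: residual penalty + initial conditions u(0)=1, u'(0)=0.
    loss = (residual**2).mean() + (u0 - 1.0).pow(2).mean() + du0.pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
# The exact solution for these initial conditions is cos(omega * t).
```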
arXiv Detail & Related papers (2024-08-19T13:02:06Z)
- Correcting model misspecification in physics-informed neural networks (PINNs) [2.07180164747172]
We present a general approach to correcting misspecified physical models in PINNs for discovering governing equations.
We employ other deep neural networks (DNNs) to model the discrepancy between the imperfect models and the observational data.
We envision that the proposed approach will extend the applications of PINNs for discovering governing equations in problems where the physico-chemical or biological processes are not well understood.
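A minimal sketch of the discrepancy idea (the toy dynamics, architecture, and names below are illustrative assumptions): an imperfect physics model is augmented with a small DNN trained so that the corrected right-hand side matches the observed derivatives.

```python
import torch

def f_imperfect(u):
    # Misspecified physics, e.g. missing a cubic term.
    return -u

disc = torch.nn.Sequential(torch.nn.Linear(1, 16), torch.nn.Tanh(),
                           torch.nn.Linear(16, 1))
opt = torch.optim.Adam(disc.parameters(), lr=1e-3)

# Observations assumed to come from the "true" dynamics du/dt = -u - 0.5*u^3.
u_obs = torch.linspace(-2.0, 2.0, 128).reshape(-1, 1)
dudt_obs = -u_obs - 0.5 * u_obs**3

for step in range(2000):
    # Corrected model: imperfect physics plus the learned discrepancy.
    dudt_pred = f_imperfect(u_obs) + disc(u_obs)
    loss = ((dudt_pred - dudt_obs) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
```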
arXiv Detail & Related papers (2023-10-16T19:25:52Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom suggests that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
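One hedged way to operationalize this observation (an assumed heuristic, not the paper's exact procedure): if a prediction has collapsed toward the constant that is optimal on the training labels, treat the input as likely OOD and fall back to a safe default.

```python
import numpy as np

def risk_sensitive_decision(pred, train_label_mean, tol=0.05):
    """If the prediction has collapsed toward the constant that minimizes
    the training loss (approximated here by the mean training label), treat
    the input as likely OOD and abstain; `tol` is an assumed threshold to
    be tuned on held-out data."""
    if np.linalg.norm(np.asarray(pred) - np.asarray(train_label_mean)) < tol:
        return "abstain"
    return pred
```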
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- Benign Overfitting in Deep Neural Networks under Lazy Training [72.28294823115502]
We show that when the data distribution is well-separated, DNNs can achieve Bayes-optimal test error for classification.
Our results indicate that interpolating with smoother functions leads to better generalization.
arXiv Detail & Related papers (2023-05-30T19:37:44Z)
- Neural networks trained with SGD learn distributions of increasing complexity [78.30235086565388]
We show that neural networks trained using gradient descent initially classify their inputs using lower-order input statistics.
They exploit higher-order statistics only later during training.
We discuss the relation of this distributional simplicity bias (DSB) to other simplicity biases and consider its implications for the principle of universality in learning.
arXiv Detail & Related papers (2022-11-21T15:27:22Z)
- Separable PINN: Mitigating the Curse of Dimensionality in Physics-Informed Neural Networks [6.439575695132489]
Physics-informed neural networks (PINNs) have emerged as new data-driven PDE solvers for both forward and inverse problems.
We demonstrate that the computations in automatic differentiation (AD) can be significantly reduced by leveraging forward-mode AD when training PINNs.
We propose a network architecture, called separable PINN (SPINN), which can facilitate forward-mode AD for more efficient computation.
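In isolation, the forward-mode ingredient might look as follows (a sketch using torch.func.jvp rather than the authors' JAX implementation; the toy network is an assumption): a Jacobian-vector product along the time direction returns du/dt in a single forward pass, without building a reverse-mode graph.

```python
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
xt = torch.rand(64, 2)                     # columns: spatial x, time t

# A tangent selecting the t-direction yields du/dt via one forward-mode pass.
tangent = torch.zeros_like(xt)
tangent[:, 1] = 1.0
u, du_dt = torch.func.jvp(net, (xt,), (tangent,))
print(du_dt.shape)                         # torch.Size([64, 1])
```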
arXiv Detail & Related papers (2022-11-16T08:46:52Z)
- Characterizing possible failure modes in physics-informed neural networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to a lack of expressivity in the NN architecture, but rather that the PINN setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- A novel meta-learning initialization method for physics-informed neural networks [6.864312468709774]
Physics-informed neural networks (PINNs) have been widely used to solve various scientific computing problems.
We propose a New Reptile initialization based Physics-Informed Neural Network (NRPINN).
Experimental results show that the NRPINN training is much faster and achieves higher accuracy than PINNs with other training methods.
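A Reptile-style meta-update, as it might be used to pre-train a PINN initialization, can be sketched as below; the inner loop and task sampling are placeholders, not the NRPINN algorithm verbatim.

```python
import copy
import torch

def reptile_step(net, sample_task_loss, inner_steps=10, inner_lr=1e-3, meta_lr=0.1):
    """One Reptile meta-update: adapt a copy of `net` to a sampled task,
    then move the meta-parameters toward the adapted parameters."""
    task_net = copy.deepcopy(net)
    opt = torch.optim.SGD(task_net.parameters(), lr=inner_lr)
    loss_fn = sample_task_loss()        # e.g. a PINN residual loss for one ODE/PDE
    for _ in range(inner_steps):
        loss = loss_fn(task_net)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        for p, q in zip(net.parameters(), task_net.parameters()):
            p.add_(meta_lr * (q - p))   # theta <- theta + eps * (theta_task - theta)
```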
arXiv Detail & Related papers (2021-07-23T01:55:23Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics-informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
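The conditioning mechanism can be sketched as follows (the architecture below is an illustrative assumption): the problem parameter lam is simply appended to the network input, so a single model serves the whole class of problems.

```python
import torch

# u_hat(x, lam): one network conditioned on the problem parameter lam.
cond_pinn = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(),
                                torch.nn.Linear(64, 1))

x = torch.rand(128, 1)                     # collocation points
lam = torch.rand(128, 1) * 5.0             # sampled problem parameters
u = cond_pinn(torch.cat([x, lam], dim=1))  # one forward pass covers many problems
```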
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- A Meta-Learning Approach to the Optimal Power Flow Problem Under Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z)
- On Error Correction Neural Networks for Economic Forecasting [0.0]
A class of RNNs called Error Correction Neural Networks (ECNNs) was designed to compensate for missing input variables.
It does so by feeding the error made in the previous step back into the current step.
The ECNN is implemented in Python by computing the appropriate gradients, and it is tested on stock market predictions.
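One possible formulation of that feedback loop (a sketch; the paper's exact ECNN architecture may differ) feeds the previous step's prediction error into the next state update:

```python
import numpy as np

rng = np.random.default_rng(0)
n_state, n_in = 8, 3
A = rng.normal(scale=0.3, size=(n_state, n_state))   # state transition
B = rng.normal(scale=0.3, size=(n_state, n_in))      # input map
C = rng.normal(scale=0.3, size=(1, n_state))         # readout
D = rng.normal(scale=0.3, size=(n_state, 1))         # error-feedback map

def ecnn_forecast(xs, ys):
    """Run the ECNN over a sequence: the previous error (y_prev - y_hat_prev)
    is fed back into the next state update."""
    s = np.zeros(n_state)
    err = np.zeros(1)
    preds = []
    for x, y in zip(xs, ys):
        s = np.tanh(A @ s + B @ x + D @ err)
        y_hat = C @ s
        preds.append(y_hat)
        err = y - y_hat                    # error reused at the next step
    return np.array(preds)
```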
arXiv Detail & Related papers (2020-04-11T01:23:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.