Certified machine learning: Rigorous a posteriori error bounds for PDE
defined PINNs
- URL: http://arxiv.org/abs/2210.03426v1
- Date: Fri, 7 Oct 2022 09:49:18 GMT
- Title: Certified machine learning: Rigorous a posteriori error bounds for PDE
defined PINNs
- Authors: Birgit Hillebrecht, Benjamin Unger
- Abstract summary: We present a rigorous upper bound on the prediction error of physics-informed neural networks.
We apply this to four problems: the transport equation, the heat equation, the Navier-Stokes equation and the Klein-Gordon equation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Prediction error quantification in machine learning has been left out of most
methodological investigations of neural networks, for both purely data-driven
and physics-informed approaches. Beyond statistical investigations and generic
results on the approximation capabilities of neural networks, we present a
rigorous upper bound on the prediction error of physics-informed neural
networks. This bound can be calculated without the knowledge of the true
solution and only with a priori available information about the characteristics
of the underlying dynamical system governed by a partial differential equation.
We apply this a posteriori error bound exemplarily to four problems: the
transport equation, the heat equation, the Navier-Stokes equation and the
Klein-Gordon equation.
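As a rough, hedged sketch of how such a bound can be evaluated in practice: assume an evolution equation du/dt = A u + f whose semigroup satisfies ||exp(A t)|| <= M exp(omega t) (the kind of a priori information referred to above). The prediction error then obeys ||e(t)|| <= M e^{omega t} ||e(0)|| + \int_0^t M e^{omega (t-s)} ||r(s)|| ds, where r is the PINN residual, so the bound needs only residual norms and the constants M and omega, never the true solution. The function names, constants, and quadrature rule below are illustrative choices, not the paper's implementation.

```python
import numpy as np

def a_posteriori_bound(residual_norms, ts, e0_norm, M, omega):
    """Sketch of a residual-based a posteriori error bound (illustrative only).

    Assumes the PINN error e(t) = u(t) - u_theta(t) for an evolution equation
    du/dt = A u + f satisfies de/dt = A e - r(t), where
    r(t) = du_theta/dt - A u_theta - f is the PINN residual, and that the
    semigroup obeys ||exp(A t)|| <= M * exp(omega * t).  Then
        ||e(t)|| <= M e^{omega t} ||e(0)||
                    + int_0^t M e^{omega (t - s)} ||r(s)|| ds,
    evaluated here with the trapezoidal rule on a given time grid.
    """
    bounds = []
    for i, t in enumerate(ts):
        integrand = M * np.exp(omega * (t - ts[: i + 1])) * residual_norms[: i + 1]
        integral = np.trapz(integrand, ts[: i + 1]) if i > 0 else 0.0
        bounds.append(M * np.exp(omega * t) * e0_norm + integral)
    return np.array(bounds)

# Hypothetical usage: residual norms ||r(t)|| measured on a time grid.
ts = np.linspace(0.0, 1.0, 101)
residual_norms = 1e-3 * (1.0 + 0.5 * np.sin(5.0 * ts))
print(a_posteriori_bound(residual_norms, ts, e0_norm=1e-4, M=1.0, omega=-0.5)[-1])
```

The key point is that only quantities available after training (residual norms, initial error, semigroup constants) enter the bound.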
Related papers
- Spectral-Bias and Kernel-Task Alignment in Physically Informed Neural
Networks [4.604003661048267]
Physically informed neural networks (PINNs) are a promising emerging method for solving differential equations.
We propose a comprehensive theoretical framework that sheds light on this important problem.
We derive an integro-differential equation that governs PINN prediction in the large data-set limit.
arXiv Detail & Related papers (2023-07-12T18:00:02Z)
- Evaluating Error Bound for Physics-Informed Neural Networks on Linear
Dynamical Systems [1.2891210250935146]
This paper shows that one can mathematically derive explicit error bounds for physics-informed neural networks trained on a class of linear systems of differential equations.
Our work shows a link between the network residual, which is known and used as the loss function, and the absolute error of the solution, which is generally unknown.
arXiv Detail & Related papers (2022-07-03T20:23:43Z)
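To make the stated residual/error link concrete, a standard variation-of-constants argument for a linear system (our illustration of the general idea, not necessarily the paper's exact derivation) reads:

```latex
% Network approximation u_theta of \dot{u} = A u, with residual
% r(t) := \dot{u}_\theta(t) - A u_\theta(t) and error e := u - u_\theta.
\begin{align}
  \dot{e}(t) &= A e(t) - r(t), \qquad e(0) = u(0) - u_\theta(0), \\
  e(t)       &= \mathrm{e}^{At} e(0) - \int_0^t \mathrm{e}^{A(t-s)} r(s)\,\mathrm{d}s, \\
  \|e(t)\|   &\le \|\mathrm{e}^{At}\|\,\|e(0)\|
                + \int_0^t \|\mathrm{e}^{A(t-s)}\|\,\|r(s)\|\,\mathrm{d}s.
\end{align}
```

The residual is exactly the quantity driven to zero by the training loss, while the left-hand side is the otherwise unknown absolute error.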
- Certified machine learning: A posteriori error estimation for
physics-informed neural networks [0.0]
PINNs are known to be robust for smaller training sets, to generalize better, and to be faster to train.
We show that using PINNs in comparison with purely data-driven neural networks is not only favorable for training performance but allows us to extract significant information on the quality of the approximated solution.
arXiv Detail & Related papers (2022-03-31T14:23:04Z)
- Error estimates for physics informed neural networks approximating the
Navier-Stokes equations [6.445605125467574]
We show that the underlying PDE residual can be made arbitrarily small for tanh neural networks with two hidden layers.
The total error can be estimated in terms of the training error, network size and number of quadrature points.
arXiv Detail & Related papers (2022-03-17T14:26:17Z)
- Learning Physics-Informed Neural Networks without Stacked
Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian-smoothed model and show that, via Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
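As a small self-contained illustration of the Stein's-identity trick summarized in the entry above (our own sketch; the names and constants are not the authors' code), the gradient and Hessian of a Gaussian-smoothed model can be estimated from forward evaluations alone:

```python
import numpy as np

def stein_derivative_estimates(f, x, sigma=0.2, n_samples=100_000, rng=None):
    """Estimate grad and Hessian of the smoothed model u(x) = E[f(x + d)],
    d ~ N(0, sigma^2 I), using Stein's identity and forward passes only:
        grad u(x) = E[ d f(x + d) ] / sigma^2
        Hess u(x) = E[ (d d^T - sigma^2 I) f(x + d) ] / sigma^4
    (Function names and constants here are illustrative, not the paper's code.)
    """
    rng = np.random.default_rng(0) if rng is None else rng
    dim = x.shape[0]
    d = rng.normal(scale=sigma, size=(n_samples, dim))    # Gaussian perturbations
    fx = f(x[None, :] + d)                                 # forward evaluations only
    grad = np.mean(d * fx[:, None], axis=0) / sigma**2
    outer = d[:, :, None] * d[:, None, :] - sigma**2 * np.eye(dim)
    hess = np.mean(outer * fx[:, None, None], axis=0) / sigma**4
    return grad, hess

# Sanity check on f(x) = ||x||^2, whose smoothed version has gradient 2x and Hessian 2I.
f = lambda X: np.sum(X**2, axis=-1)
grad, hess = stein_derivative_estimates(f, np.array([0.3, -0.2]))
print(grad, hess, sep="\n")   # approx. [0.6, -0.4] and 2*I, up to Monte Carlo noise
```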
- Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
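The entry above trains through an equilibrium state rather than by reversing the forward computation; a generic, non-spiking sketch of implicit differentiation at a fixed point (our illustration, assuming a simple contractive tanh update) looks like this:

```python
import numpy as np

def fixed_point(W, x, n_iters=200):
    """Iterate z <- tanh(W z + x) to an (approximate) equilibrium state z*."""
    z = np.zeros_like(x)
    for _ in range(n_iters):
        z = np.tanh(W @ z + x)
    return z

def grad_loss_wrt_W(W, x, target):
    """Gradient of L = 0.5 ||z* - target||^2 w.r.t. W by implicit differentiation.

    At the equilibrium z* = tanh(W z* + x), the implicit function theorem lets
    us solve one adjoint linear system (I - J)^T a = dL/dz*, with
    J = diag(1 - z*^2) W, instead of back-propagating through the iterations.
    """
    z = fixed_point(W, x)
    J = (1.0 - z**2)[:, None] * W                      # Jacobian of the update at z*
    a = np.linalg.solve(np.eye(len(z)) - J.T, z - target)
    return ((1.0 - z**2) * a)[:, None] * z[None, :]    # dL/dW via the chain rule

rng = np.random.default_rng(0)
W = 0.1 * rng.normal(size=(5, 5))
x, target = rng.normal(size=5), rng.normal(size=5)
print(grad_loss_wrt_W(W, x, target))
```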
- Characterizing possible failure modes in physics-informed neural
networks [55.83255669840384]
Recent work in scientific machine learning has developed so-called physics-informed neural network (PINN) models.
We demonstrate that, while existing PINN methodologies can learn good models for relatively trivial problems, they can easily fail to learn relevant physical phenomena even for simple PDEs.
We show that these possible failure modes are not due to the lack of expressivity in the NN architecture, but that the PINN's setup makes the loss landscape very hard to optimize.
arXiv Detail & Related papers (2021-09-02T16:06:45Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- Partial Differential Equations is All You Need for Generating Neural Architectures -- A Theory for Physical Artificial Intelligence Systems [40.20472268839781]
We generalize the reaction-diffusion equation in statistical physics, the Schrödinger equation in quantum mechanics, and the Helmholtz equation in paraxial optics.
We use the finite difference method to discretize the NPDE and find numerical solutions.
Basic building blocks of deep neural network architectures, including multi-layer perceptrons, convolutional neural networks, and recurrent neural networks, are generated.
arXiv Detail & Related papers (2021-03-10T00:05:46Z)
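As a toy illustration of the discretization idea in the entry above (our own example, not the paper's NPDE): one explicit finite-difference step of the 1D diffusion equation is a fixed convolution applied recurrently in time, which is structurally a convolutional/recurrent layer.

```python
import numpy as np

def diffusion_step(u, D=0.1, dt=0.01, dx=0.1):
    """One explicit finite-difference step of u_t = D u_xx.

    The discrete Laplacian is a fixed 1D convolution with kernel [1, -2, 1],
    so repeating this step over time resembles a convolutional layer applied
    recurrently -- the structural analogy alluded to above.
    """
    lap = np.convolve(u, np.array([1.0, -2.0, 1.0]), mode="same") / dx**2
    return u + dt * D * lap

u = np.exp(-np.linspace(-3.0, 3.0, 61) ** 2)   # initial Gaussian bump
for _ in range(100):                            # recurrent application in time
    u = diffusion_step(u)
print(u.max())                                  # the bump has spread and flattened
```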
- Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and 1-1 error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
arXiv Detail & Related papers (2020-10-02T15:21:37Z)
- Multipole Graph Neural Operator for Parametric Partial Differential
Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in a structure suitable for neural networks.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.