Estimates on the generalization error of Physics Informed Neural
Networks (PINNs) for approximating a class of inverse problems for PDEs
- URL: http://arxiv.org/abs/2007.01138v3
- Date: Wed, 6 Dec 2023 09:06:19 GMT
- Title: Estimates on the generalization error of Physics Informed Neural
Networks (PINNs) for approximating a class of inverse problems for PDEs
- Authors: Siddhartha Mishra and Roberto Molinaro
- Abstract summary: We focus on a particular class of inverse problems, the so-called data assimilation or unique continuation problems.
We prove rigorous estimates on the generalization error of PINNs approximating them.
- Score: 16.758334184623152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics informed neural networks (PINNs) have recently been very successfully
applied for efficiently approximating inverse problems for PDEs. We focus on a
particular class of inverse problems, the so-called data assimilation or unique
continuation problems, and prove rigorous estimates on the generalization error
of PINNs approximating them. An abstract framework is presented and conditional
stability estimates for the underlying inverse problem are employed to derive
the estimate on the PINN generalization error, providing rigorous justification
for the use of PINNs in this context. The abstract framework is illustrated
with examples of four prototypical linear PDEs. Numerical experiments,
validating the proposed theory, are also presented.
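To make the data assimilation setting concrete, here is a minimal PyTorch sketch (an illustration under assumed choices of PDE, observation region, and synthetic data; not the authors' code): the heat equation u_t = u_xx is enforced on the whole space-time domain, while measurements of u enter the loss only on an interior observation strip, with no initial condition imposed.

```python
# Minimal PINN sketch for a data assimilation / unique continuation problem.
# Assumptions (for illustration only): heat equation u_t = u_xx on (0,1)x(0,1),
# observations of u available only on the strip omega = (0.4, 0.6), and
# synthetic data from the exact solution u = exp(-pi^2 t) sin(pi x).
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def pde_residual(xt):
    """Residual u_t - u_xx at collocation points xt with columns (x, t)."""
    xt = xt.requires_grad_(True)
    u = net(xt)
    g = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = g[:, 0:1], g[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u_t - u_xx

xt_int = torch.rand(1024, 2)                 # interior collocation points
x_obs = 0.4 + 0.2 * torch.rand(256, 1)       # observation points in omega
t_obs = torch.rand(256, 1)
xt_obs = torch.cat([x_obs, t_obs], dim=1)
u_obs = torch.exp(-torch.pi**2 * t_obs) * torch.sin(torch.pi * x_obs)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = (pde_residual(xt_int).pow(2).mean()        # PDE residual everywhere
            + (net(xt_obs) - u_obs).pow(2).mean())    # data mismatch on omega only
    loss.backward()
    opt.step()
```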
Related papers
- Beyond Derivative Pathology of PINNs: Variable Splitting Strategy with Convergence Analysis [6.468495781611434]
Physics-informed neural networks (PINNs) have emerged as effective methods for solving partial differential equations (PDEs) in various problems.
In this study, we prove that PINNs encounter a fundamental issue: their underlying premise is invalid.
We propose a *variable splitting* strategy that addresses this issue by parameterizing the gradient of the solution as an auxiliary variable (see the sketch below).
arXiv Detail & Related papers (2024-09-30T15:20:10Z)
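A minimal PyTorch sketch of the variable-splitting idea above, under the assumption of a 1D Poisson model problem -u'' = f with f = pi^2 sin(pi x): a second network parameterizes the gradient p ≈ u', so only first derivatives appear in the loss.

```python
# Variable-splitting sketch (illustrative model problem, not the paper's code):
# solve -u'' = f by training u_net together with p_net ~ u', so the
# second-order PDE is enforced using first derivatives only.
import torch

u_net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
p_net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def interior_loss(x):
    x = x.requires_grad_(True)
    u, p = u_net(x), p_net(x)
    u_x = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    p_x = torch.autograd.grad(p.sum(), x, create_graph=True)[0]
    f = torch.pi**2 * torch.sin(torch.pi * x)   # exact solution: u = sin(pi x)
    split = (u_x - p).pow(2).mean()             # consistency: p must equal u'
    pde = (-p_x - f).pow(2).mean()              # -u'' = f rewritten as -p' = f
    return split + pde

x_col = torch.rand(512, 1)
x_bdry = torch.tensor([[0.0], [1.0]])           # Dirichlet: u(0) = u(1) = 0
opt = torch.optim.Adam(list(u_net.parameters()) + list(p_net.parameters()), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = interior_loss(x_col) + u_net(x_bdry).pow(2).mean()
    loss.backward()
    opt.step()
```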
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm as region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
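A simplified sketch of the region-optimization idea (my illustration; the actual RoPINN algorithm also calibrates the region size during training): each step draws collocation points from a small neighborhood of fixed anchors, so the loss is a Monte Carlo estimate of a region-averaged loss rather than a point-wise one.

```python
# Region-optimization sketch: resample collocation points around anchors each
# step, approximating a region-averaged PINN loss (heat equation as model PDE).
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def pde_residual(xt):
    xt = xt.requires_grad_(True)
    u = net(xt)
    g = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = g[:, 0:1], g[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    return u_t - u_xx

anchors = torch.rand(1024, 2)   # fixed anchor points in the unit square
radius = 0.01                   # region size (a tunable hyperparameter here)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    # one Monte Carlo sample per anchor, drawn from the cube of side 2*radius
    pts = (anchors + radius * (2 * torch.rand_like(anchors) - 1)).clamp(0.0, 1.0)
    loss = pde_residual(pts).pow(2).mean()
    loss.backward()
    opt.step()
```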
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error of overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
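For context, one classical PAC-Bayes bound (a McAllester-style form, stated here for orientation; the paper derives an interpolation-specific variant rather than this one) reads:

```latex
% A classical McAllester-style PAC-Bayes bound, for orientation only.
% For a prior $\pi$ and any posterior $\rho$ over parameters $\theta$,
% with probability at least $1-\delta$ over an i.i.d. sample of size $n$:
\[
  \mathbb{E}_{\theta \sim \rho}\big[L(\theta)\big]
  \;\le\;
  \mathbb{E}_{\theta \sim \rho}\big[\widehat{L}_n(\theta)\big]
  + \sqrt{\frac{\mathrm{KL}(\rho \,\|\, \pi) + \ln\!\big(2\sqrt{n}/\delta\big)}{2n}}
\]
% where $L$ is the population risk and $\widehat{L}_n$ the empirical risk.
```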
- Lie Point Symmetry and Physics Informed Networks [59.56218517113066]
We propose a loss function that informs the network about Lie point symmetries in the same way that PINN models try to enforce the underlying PDE through a loss function.
Our symmetry loss ensures that the infinitesimal generators of the Lie group conserve the PDE solutions.
Empirical evaluations indicate that the inductive bias introduced by the Lie point symmetries of the PDEs greatly boosts the sample efficiency of PINNs.
arXiv Detail & Related papers (2023-11-07T19:07:16Z)
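A simplified PyTorch illustration of the symmetry-loss idea, assuming the heat equation u_t = u_xx and its simplest Lie point symmetry, space translation with generator d/dx: on solutions, applying the generator to the PDE residual must vanish, so the x-derivative of the residual is penalized alongside the residual itself. (The paper treats general prolonged generators; this is only the easiest instance.)

```python
# Symmetry-loss sketch: penalize the space-translation generator applied to
# the heat-equation residual, in addition to the standard PINN residual loss.
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def residual_and_symmetry(xt):
    xt = xt.requires_grad_(True)
    u = net(xt)
    g = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_x, u_t = g[:, 0:1], g[:, 1:2]
    u_xx = torch.autograd.grad(u_x.sum(), xt, create_graph=True)[0][:, 0:1]
    r = u_t - u_xx                                             # PDE residual
    r_x = torch.autograd.grad(r.sum(), xt, create_graph=True)[0][:, 0:1]
    return r, r_x                                              # generator acting on r

xt = torch.rand(1024, 2)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    r, r_x = residual_and_symmetry(xt)
    loss = r.pow(2).mean() + r_x.pow(2).mean()   # PINN loss + symmetry loss
    loss.backward()
    opt.step()
```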
- Physics-Aware Neural Networks for Boundary Layer Linear Problems [0.0]
Physics-Informed Neural Networks (PINNs) approximate the solution of general partial differential equations (PDEs) by adding them in some form as terms of the loss/cost function of a Neural Network.
This paper explores PINNs for linear PDEs whose solutions may present one or more boundary layers.
arXiv Detail & Related papers (2022-07-15T21:15:06Z)
- Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method that can partially alleviate this problem by improving neural PDE solver sample complexity.
In the context of PDEs, we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
arXiv Detail & Related papers (2022-02-15T18:43:17Z)
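A minimal sketch using one Lie point symmetry shared by many PDEs, spatial translation (my illustration; the paper derives the exhaustive list of transformations per equation): for a translation-equivariant PDE on a periodic grid, rolling an input/target solution pair by the same shift yields another valid training pair.

```python
# Data augmentation via the spatial-translation symmetry: roll a periodic
# (input, target) solution pair by the same random shift.
import torch

def translate_pair(u_in, u_out):
    """Augment one solution pair; both tensors have shape [nx] on a periodic grid."""
    shift = int(torch.randint(0, u_in.shape[0], (1,)))
    return torch.roll(u_in, shift), torch.roll(u_out, shift)

# Usage with a stand-in solution pair (hypothetical data for illustration):
x = torch.linspace(0, 2 * torch.pi, 128)
u0, uT = torch.sin(x), 0.5 * torch.sin(x)
u0_aug, uT_aug = translate_pair(u0, uT)
```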
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
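One concrete example of a perturbation that is sound by construction (my illustration; the paper designs richer perturbation models for problems such as SAT): appending a superset of an existing clause to a CNF formula is logically implied by it, so the set of satisfying assignments, and hence the label, is unchanged.

```python
# Sound SAT perturbation: add a clause implied by an existing one (a superset),
# leaving the set of satisfying assignments, and thus the label, unchanged.
import random

def sound_perturb(cnf, rng):
    """cnf: list of clauses; each clause is a list of non-zero integer literals."""
    clause = rng.choice(cnf)
    n_vars = max(abs(lit) for c in cnf for lit in c)
    fresh = [v for v in range(1, n_vars + 1) if v not in {abs(lit) for lit in clause}]
    # If every variable already occurs in the clause, reusing one still yields
    # an implied clause (a duplicate literal or a tautology).
    var = rng.choice(fresh) if fresh else abs(clause[0])
    new_clause = clause + [rng.choice([var, -var])]
    return cnf + [new_clause]

rng = random.Random(0)
cnf = [[1, -2], [2, 3], [-1, -3]]
print(sound_perturb(cnf, rng))
```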
- Error analysis for physics informed neural networks (PINNs) approximating Kolmogorov PDEs [0.0]
We derive rigorous bounds on the error incurred by PINNs in approximating the solutions of a large class of parabolic PDEs.
We construct neural networks whose PINN residual (generalization error) can be made as small as desired.
These results enable us to provide a comprehensive error analysis for PINNs in approximating Kolmogorov PDEs.
arXiv Detail & Related papers (2021-06-28T08:37:56Z)
- Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable method for semi-implicit variational inference (SIVI).
Our method optimizes a rigorous lower bound on the evidence.
arXiv Detail & Related papers (2021-01-15T11:39:09Z)
- Advantage of Deep Neural Networks for Estimating Functions with Singularity on Hypersurfaces [23.21591478556582]
We develop a minimax rate analysis to describe the reason that deep neural networks (DNNs) perform better than other standard methods.
This study fills a gap in existing theory by considering the estimation of a class of non-smooth functions that have singularities on hypersurfaces.
arXiv Detail & Related papers (2020-11-04T12:51:14Z)
- Estimates on the generalization error of Physics Informed Neural Networks (PINNs) for approximating PDEs [16.758334184623152]
We provide rigorous upper bounds on the generalization error of PINNs approximating solutions of the forward problem for PDEs.
An abstract formalism is introduced and stability properties of the underlying PDE are leveraged to derive an estimate for the generalization error.
arXiv Detail & Related papers (2020-06-29T16:05:48Z)
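The last entry is the forward-problem companion of the paper summarized at the top; both follow the same abstract template, in which a stability property of the PDE converts the PINN training (residual) error plus a quadrature term into a generalization-error bound. Schematically (my paraphrase; constants and exponents depend on the PDE, the stability estimate, and the quadrature rule):

```latex
% Schematic form of the generalization-error estimates (paraphrase only;
% the precise constants and exponents are PDE- and quadrature-dependent):
\[
  \mathcal{E}_G \;\lesssim\; \mathcal{E}_T \;+\; C_{\mathrm{quad}}\, N^{-\alpha},
\]
% with $\mathcal{E}_G$ the generalization error of the trained PINN,
% $\mathcal{E}_T$ its training (residual) error, $N$ the number of
% quadrature/collocation points, and $\alpha$ the quadrature convergence rate.
```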
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.