Estimates on the generalization error of Physics Informed Neural
Networks (PINNs) for approximating a class of inverse problems for PDEs
- URL: http://arxiv.org/abs/2007.01138v3
- Date: Wed, 6 Dec 2023 09:06:19 GMT
- Title: Estimates on the generalization error of Physics Informed Neural
Networks (PINNs) for approximating a class of inverse problems for PDEs
- Authors: Siddhartha Mishra and Roberto Molinaro
- Abstract summary: We focus on a particular class of inverse problems, the so-called data assimilation or unique continuation problems.
We prove rigorous estimates on the generalization error of PINNs approximating them.
- Score: 16.758334184623152
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physics informed neural networks (PINNs) have recently been very successfully
applied for efficiently approximating inverse problems for PDEs. We focus on a
particular class of inverse problems, the so-called data assimilation or unique
continuation problems, and prove rigorous estimates on the generalization error
of PINNs approximating them. An abstract framework is presented and conditional
stability estimates for the underlying inverse problem are employed to derive
the estimate on the PINN generalization error, providing rigorous justification
for the use of PINNs in this context. The abstract framework is illustrated
with examples of four prototypical linear PDEs. Numerical experiments,
validating the proposed theory, are also presented.
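To make the setting concrete, here is a minimal sketch of a PINN for one such data assimilation (unique continuation) problem: the heat-equation residual is enforced on the whole space-time domain, while observations of the solution are available only on a subdomain and no initial or boundary data is used. This is an illustrative reconstruction, not the authors' code; the architecture, sampling, and loss weights are assumptions, and the minimized loss plays the role of the training error that the conditional stability estimates relate to the generalization error.

```python
# Minimal PINN sketch for a data assimilation (unique continuation) problem:
# heat equation u_t = u_xx on (0,1) x (0,1), with observations of u available
# only on the strip x in (0.4, 0.6), and no initial/boundary conditions used.
import torch

torch.manual_seed(0)
net = torch.nn.Sequential(
    torch.nn.Linear(2, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 64), torch.nn.Tanh(),
    torch.nn.Linear(64, 1),
)

def residual(xt):
    """PDE residual u_t - u_xx via automatic differentiation; xt = (x, t)."""
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    g = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_xx = torch.autograd.grad(g[:, 0].sum(), xt, create_graph=True)[0][:, 0:1]
    return g[:, 1:2] - u_xx

# Collocation points on the full domain; observation points on the subdomain,
# with synthetic data from the exact solution u(x,t) = exp(-pi^2 t) sin(pi x).
xt_col = torch.rand(2048, 2)
xt_obs = torch.rand(256, 2) * torch.tensor([0.2, 1.0]) + torch.tensor([0.4, 0.0])
u_obs = torch.exp(-torch.pi**2 * xt_obs[:, 1:2]) * torch.sin(torch.pi * xt_obs[:, 0:1])

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(5000):
    opt.zero_grad()
    loss_pde = residual(xt_col).pow(2).mean()        # physics on the whole domain
    loss_data = (net(xt_obs) - u_obs).pow(2).mean()  # data only on the subdomain
    (loss_pde + loss_data).backward()                # the PINN training error
    opt.step()
```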
Related papers
- ProPINN: Demystifying Propagation Failures in Physics-Informed Neural Networks [71.02216400133858]
Physics-informed neural networks (PINNs) have earned high expectations in solving partial differential equations (PDEs).
Previous research observed the propagation failure phenomenon of PINNs.
This paper provides the first formal and in-depth study of propagation failure and its root cause.
arXiv Detail & Related papers (2025-02-02T13:56:38Z)
- Physics-Informed Deep Inverse Operator Networks for Solving PDE Inverse Problems [1.9490282165104331]
Inverse problems involving partial differential equations (PDEs) can be seen as discovering a mapping from measurement data to unknown quantities.
Existing methods typically rely on large amounts of labeled training data, which is impractical for most real-world applications.
We propose a novel architecture called Physics-Informed Deep Inverse Operator Networks (PI-DIONs) which can learn the solution operator of PDE-based inverse problems without labeled training data.
arXiv Detail & Related papers (2024-12-04T09:38:58Z)
- Representation and Regression Problems in Neural Networks: Relaxation, Generalization, and Numerics [5.915970073098098]
We address three non-convex optimization problems associated with training shallow neural networks (NNs).
We convexify these problems and, applying a representer theorem, prove the absence of relaxation gaps.
We analyze the impact of key parameters on these bounds and propose optimal choices.
For high-dimensional datasets, we propose a sparsification algorithm that, combined with gradient descent, yields effective solutions.
arXiv Detail & Related papers (2024-12-02T15:40:29Z)
- Beyond Derivative Pathology of PINNs: Variable Splitting Strategy with Convergence Analysis [6.468495781611434]
Physics-informed neural networks (PINNs) have emerged as effective methods for solving partial differential equations (PDEs) in various problems.
In this study, we prove that PINNs encounter a fundamental issue: the premise underlying their formulation is invalid.
We propose a variable splitting strategy that addresses this issue by parameterizing the gradient of the solution as an auxiliary variable.
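One plausible minimal reading of this strategy is sketched below for a 1D Poisson problem (illustrative only; the networks, losses, and test problem are assumptions, not the authors' implementation): an auxiliary network p approximates the solution gradient, the PDE residual is written in terms of p so that only first-order derivatives are needed, and a consistency term ties p to the autograd gradient of u.

```python
# Variable-splitting sketch (illustrative reading of the idea): for the 1D
# Poisson problem -u_xx = f with u(0) = u(1) = 0, an auxiliary network p
# approximates u_x, so the PDE residual needs only first-order derivatives.
import torch

u_net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
p_net = torch.nn.Sequential(torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def ddx(y, x):
    """First derivative of y w.r.t. x via automatic differentiation."""
    return torch.autograd.grad(y.sum(), x, create_graph=True)[0]

x = torch.rand(1024, 1).requires_grad_(True)
f = torch.pi**2 * torch.sin(torch.pi * x.detach())  # manufactured so u = sin(pi x)
xb = torch.tensor([[0.0], [1.0]])                   # boundary points

opt = torch.optim.Adam(list(u_net.parameters()) + list(p_net.parameters()), lr=1e-3)
for step in range(3000):
    opt.zero_grad()
    u, p = u_net(x), p_net(x)
    loss_pde = (-ddx(p, x) - f).pow(2).mean()   # residual uses p_x, never u_xx
    loss_split = (p - ddx(u, x)).pow(2).mean()  # consistency: p should equal u_x
    loss_bc = u_net(xb).pow(2).mean()           # u(0) = u(1) = 0
    (loss_pde + loss_split + loss_bc).backward()
    opt.step()
```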
arXiv Detail & Related papers (2024-09-30T15:20:10Z)
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm, region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
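As a rough illustration of region optimization (a simplification of the idea, not the released RoPINN algorithm; all choices below are assumptions): each step evaluates the residual at points sampled from a small neighborhood of every collocation point, so the objective approximates a region integral by Monte Carlo rather than a sum of fixed point values.

```python
# Region-optimization sketch (a simplification, not the released RoPINN code):
# the heat-equation residual is evaluated at points drawn uniformly from a
# small region of radius r around each collocation point, so the training
# objective approximates a region integral instead of point-wise values.
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def residual(xt):
    """u_t - u_xx for the heat equation, via automatic differentiation."""
    xt = xt.clone().requires_grad_(True)
    u = net(xt)
    g = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_xx = torch.autograd.grad(g[:, 0].sum(), xt, create_graph=True)[0][:, 0:1]
    return g[:, 1:2] - u_xx

xt_col = torch.rand(1024, 2)  # fixed collocation points
r = 0.01                      # region radius (a hyperparameter)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(3000):
    opt.zero_grad()
    xt_region = xt_col + (2 * torch.rand_like(xt_col) - 1) * r  # resample region
    residual(xt_region).pow(2).mean().backward()
    opt.step()
```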
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
- A PAC-Bayesian Perspective on the Interpolating Information Criterion [54.548058449535155]
We show how a PAC-Bayes bound is obtained for a general class of models, characterizing factors which influence performance in the interpolating regime.
We quantify how the test error for overparameterized models achieving effectively zero training error depends on the quality of the implicit regularization imposed by, e.g., the combination of model and parameter-initialization scheme.
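For reference, analyses of this kind instantiate the generic PAC-Bayes bound (Maurer's form; the notation here is generic, not the paper's): with probability at least 1 - δ over an i.i.d. sample of size n, simultaneously for all posteriors Q over parameters and a prior P fixed in advance,

\[
\mathbb{E}_{\theta \sim Q}\big[L(\theta)\big] \;\le\; \mathbb{E}_{\theta \sim Q}\big[\widehat{L}_n(\theta)\big] \;+\; \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\!\big(2\sqrt{n}/\delta\big)}{2n}},
\]

where L is the population risk and \widehat{L}_n the empirical risk. In the interpolating regime \widehat{L}_n is effectively zero, so the bound is governed by the KL term, i.e. by the implicit regularization.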
arXiv Detail & Related papers (2023-11-13T01:48:08Z)
- Lie Point Symmetry and Physics Informed Networks [59.56218517113066]
We propose a loss function that informs the network about Lie point symmetries in the same way that PINN models try to enforce the underlying PDE through a loss function.
Our symmetry loss ensures that the infinitesimal generators of the Lie group conserve the PDE solutions.
Empirical evaluations indicate that the inductive bias introduced by the Lie point symmetries of the PDEs greatly boosts the sample efficiency of PINNs.
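A minimal sketch of the simplest instance of such a symmetry loss follows (translations of the heat equation only; the paper's construction handles general Lie point symmetries, and everything below is an illustrative assumption): for the generators v = ∂/∂x and v = ∂/∂t, the condition that the generators conserve solutions reduces to the PDE residual being annihilated along those directions, so one can penalize the derivatives of the residual.

```python
# Symmetry-loss sketch for the heat equation u_t = u_xx (translation generators
# only; the paper covers general Lie point symmetries). For v = d/dx and
# v = d/dt, applying the generator to the residual just differentiates it, so
# the symmetry loss penalizes the residual's derivatives in x and t.
import torch

net = torch.nn.Sequential(torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))
xt = torch.rand(1024, 2).requires_grad_(True)

def heat_residual(xt):
    u = net(xt)
    g = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
    u_xx = torch.autograd.grad(g[:, 0].sum(), xt, create_graph=True)[0][:, 0:1]
    return g[:, 1:2] - u_xx  # u_t - u_xx

res = heat_residual(xt)
loss_pde = res.pow(2).mean()

# Derivatives of the residual along the two translation generators.
dres = torch.autograd.grad(res.sum(), xt, create_graph=True)[0]
loss_sym = dres.pow(2).mean()

loss = loss_pde + loss_sym  # relative weighting is a design choice
loss.backward()
```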
arXiv Detail & Related papers (2023-11-07T19:07:16Z)
- Generalization of Neural Combinatorial Solvers Through the Lens of Adversarial Robustness [68.97830259849086]
Most datasets only capture a simpler subproblem and likely suffer from spurious features.
We study adversarial robustness - a local generalization property - to reveal hard, model-specific instances and spurious features.
Unlike in other applications, where perturbation models are designed around subjective notions of imperceptibility, our perturbation models are efficient and sound.
Surprisingly, with such perturbations, a sufficiently expressive neural solver does not suffer from the limitations of the accuracy-robustness trade-off common in supervised learning.
arXiv Detail & Related papers (2021-10-21T07:28:11Z)
- Efficient Semi-Implicit Variational Inference [65.07058307271329]
We propose an efficient and scalable semi-implicit variational inference (SIVI) method.
Our method optimizes a rigorous lower bound on SIVI's evidence.
arXiv Detail & Related papers (2021-01-15T11:39:09Z)
- Advantage of Deep Neural Networks for Estimating Functions with Singularity on Hypersurfaces [23.21591478556582]
This study considers the estimation of a class of non-smooth functions that have singularities on hypersurfaces.
We develop a minimax rate analysis to describe the reason that deep neural networks (DNNs) perform better than other standard methods for such functions.
arXiv Detail & Related papers (2020-11-04T12:51:14Z)
- Estimates on the generalization error of Physics Informed Neural Networks (PINNs) for approximating PDEs [16.758334184623152]
We provide rigorous upper bounds on the generalization error of PINNs approximating solutions of the forward problem for PDEs.
An abstract formalism is introduced and stability properties of the underlying PDE are leveraged to derive an estimate for the generalization error.
arXiv Detail & Related papers (2020-06-29T16:05:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.