Evaluating Error Bound for Physics-Informed Neural Networks on Linear
Dynamical Systems
- URL: http://arxiv.org/abs/2207.01114v1
- Date: Sun, 3 Jul 2022 20:23:43 GMT
- Title: Evaluating Error Bound for Physics-Informed Neural Networks on Linear
Dynamical Systems
- Authors: Shuheng Liu, Xiyue Huang, Pavlos Protopapas
- Abstract summary: This paper shows that one can mathematically derive explicit error bounds for physics-informed neural networks trained on a class of linear systems of differential equations.
Our work shows a link between the network residual, which is known and used as the loss function, and the absolute error of the solution, which is generally unknown.
- Score: 1.2891210250935146
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There have been extensive studies on solving differential equations using
physics-informed neural networks. While this method has proven advantageous in
many cases, a major criticism lies in its lack of analytical error bounds.
Therefore, it is less credible than its traditional counterparts, such as the
finite difference method. This paper shows that one can mathematically derive
explicit error bounds for physics-informed neural networks trained on a class
of linear systems of differential equations. More importantly, evaluating such
error bounds only requires evaluating the differential equation residual
infinity norm over the domain of interest. Our work establishes a link between
the network residual, which is known and used as the loss function, and the
absolute error of the solution, which is generally unknown. Our approach is
semi-phenomenological and independent of knowledge of the actual solution or of
the complexity or architecture of the network. Using the method of manufactured
solutions on linear ODEs and systems of linear ODEs, we empirically verify the
error-evaluation algorithm and demonstrate that the actual error strictly lies
within our derived bound.
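The abstract's central claim, that the residual infinity norm alone bounds the otherwise unknown absolute error, can be illustrated on a toy one-dimensional linear ODE. The sketch below is not the authors' code: the equation u' + u = f, the manufactured solution u(t) = sin(t), and the truncated-Taylor stand-in for a trained network are all assumptions chosen for illustration. For this equation with an exact initial condition, the error e = û - u satisfies e' + e = r with e(0) = 0, so |e(t)| ≤ ||r||_∞ (1 - e^(-t)).

```python
# Illustrative sketch only (not the paper's algorithm or code).
# Assumed toy problem:  u'(t) + u(t) = cos(t) + sin(t),  u(0) = 0,
# whose manufactured solution is u(t) = sin(t).
# The error e = u_hat - u obeys e' + e = r with e(0) = 0, giving the
# bound |e(t)| <= ||r||_inf * (1 - exp(-t)) on [0, 1].
import math

def f(t):            # forcing term chosen so that u(t) = sin(t) is exact
    return math.cos(t) + math.sin(t)

def u_true(t):       # manufactured solution (used only to check the bound)
    return math.sin(t)

def u_hat(t):        # stand-in for a trained network: truncated Taylor of sin
    return t - t**3 / 6

def u_hat_prime(t):
    return 1 - t**2 / 2

ts = [i / 1000 for i in range(1001)]          # uniform grid on [0, 1]

# Residual is computable from the candidate solution alone: no u_true needed.
residual = [u_hat_prime(t) + u_hat(t) - f(t) for t in ts]
r_inf = max(abs(r) for r in residual)         # ||r||_inf over the domain

bound = [r_inf * (1 - math.exp(-t)) for t in ts]
error = [abs(u_hat(t) - u_true(t)) for t in ts]

# The actual error should lie strictly within the residual-based bound.
assert all(e <= b + 1e-12 for e, b in zip(error, bound))
print(f"max |error| = {max(error):.4f}, max bound = {max(bound):.4f}")
```

The key point mirrored from the abstract is that `r_inf` is evaluated without any knowledge of `u_true`, yet it yields a pointwise envelope that the true error never exceeds.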
Related papers
- Exact and approximate error bounds for physics-informed neural networks [1.236974227340167]
We report important progress in calculating error bounds of physics-informed neural networks (PINNs) solutions of nonlinear first-order ODEs.
We give a general expression that describes the error of the solution that the PINN-based method provides for a nonlinear first-order ODE.
We propose a technique to calculate an approximate bound for the general case and an exact bound for a particular case.
arXiv Detail & Related papers (2024-11-21T05:15:28Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly-complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Identifiability and Asymptotics in Learning Homogeneous Linear ODE Systems from Discrete Observations [114.17826109037048]
Ordinary Differential Equations (ODEs) have recently gained a lot of attention in machine learning.
However, theoretical aspects, e.g., identifiability and the properties of statistical estimation, are still obscure.
This paper derives a sufficient condition for the identifiability of homogeneous linear ODE systems from a sequence of equally-spaced error-free observations sampled from a single trajectory.
arXiv Detail & Related papers (2022-10-12T06:46:38Z)
- Neural Laplace: Learning diverse classes of differential equations in the Laplace domain [86.52703093858631]
We propose a unified framework for learning diverse classes of differential equations (DEs) including all the aforementioned ones.
Instead of modelling the dynamics in the time domain, we model it in the Laplace domain, where the history-dependencies and discontinuities in time can be represented as summations of complex exponentials.
In the experiments, Neural Laplace shows superior performance in modelling and extrapolating the trajectories of diverse classes of DEs.
arXiv Detail & Related papers (2022-06-10T02:14:59Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- Deep neural network for solving differential equations motivated by Legendre-Galerkin approximation [16.64525769134209]
We explore the performance and accuracy of various neural architectures on both linear and nonlinear differential equations.
We implement a novel Legendre-Galerkin Deep Neural Network (LGNet) algorithm to predict solutions to differential equations.
arXiv Detail & Related papers (2020-10-24T20:25:09Z)
- Unsupervised Learning of Solutions to Differential Equations with Generative Adversarial Networks [1.1470070927586016]
We develop a novel method for solving differential equations with unsupervised neural networks.
We show that our method, which we call Differential Equation GAN (DEQGAN), can obtain multiple orders of magnitude lower mean squared errors.
arXiv Detail & Related papers (2020-07-21T23:36:36Z)
- Error Estimation and Correction from within Neural Network Differential Equation Solvers [3.04585143845864]
We describe a strategy for constructing error estimates and corrections for Neural Network Differential Equation solvers.
Our methods do not require advance knowledge of the true solutions and obtain explicit relationships between loss functions and the error associated with solution estimates.
arXiv Detail & Related papers (2020-07-09T11:01:44Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.