Error Estimation for Physics-informed Neural Networks Approximating
Semilinear Wave Equations
- URL: http://arxiv.org/abs/2402.07153v2
- Date: Wed, 6 Mar 2024 00:26:02 GMT
- Title: Error Estimation for Physics-informed Neural Networks Approximating
Semilinear Wave Equations
- Authors: Beatrice Lorenz, Aras Bacho, Gitta Kutyniok
- Abstract summary: This paper provides rigorous error bounds for physics-informed neural networks approximating the semilinear wave equation.
We provide bounds for the generalization and training error in terms of the width of the network's layers and the number of training points for a tanh neural network with two hidden layers.
- Score: 15.834703258232006
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper provides rigorous error bounds for physics-informed neural
networks approximating the semilinear wave equation. We provide bounds for the
generalization and training error in terms of the width of the network's layers
and the number of training points for a tanh neural network with two hidden
layers. Our main result is a bound on the total error in the
$H^1([0,T];L^2(\Omega))$-norm in terms of the training error and the number of
training points, which can be made arbitrarily small under some assumptions. We
illustrate our theoretical bounds with numerical experiments.
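For concreteness, the following is a minimal sketch, not taken from the paper, of the setting the abstract describes: a tanh network with two hidden layers trained on the residual of a semilinear wave equation at random collocation points, so that the training error is the loss over those points. The one-dimensional domain, the cubic nonlinearity u^3, the sine initial condition, and all widths, learning rates, and point counts below are illustrative assumptions.

```python
# Minimal PINN sketch (not the authors' implementation): a tanh network with two
# hidden layers trained on the residual of an assumed semilinear wave equation
#   u_tt - u_xx + u^3 = 0   on (0,T) x (0,1),
# plus initial- and boundary-condition terms. All problem data are illustrative.
import torch

torch.manual_seed(0)

class TanhPINN(torch.nn.Module):
    def __init__(self, width=64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, width), torch.nn.Tanh(),    # input: (t, x)
            torch.nn.Linear(width, width), torch.nn.Tanh(),
            torch.nn.Linear(width, 1),
        )

    def forward(self, t, x):
        return self.net(torch.cat([t, x], dim=1))

def pde_residual(model, t, x):
    """Residual u_tt - u_xx + u^3 at interior collocation points."""
    t.requires_grad_(True); x.requires_grad_(True)
    u = model(t, x)
    u_t = torch.autograd.grad(u, t, torch.ones_like(u), create_graph=True)[0]
    u_tt = torch.autograd.grad(u_t, t, torch.ones_like(u_t), create_graph=True)[0]
    u_x = torch.autograd.grad(u, x, torch.ones_like(u), create_graph=True)[0]
    u_xx = torch.autograd.grad(u_x, x, torch.ones_like(u_x), create_graph=True)[0]
    return u_tt - u_xx + u**3

model = TanhPINN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Collocation (training) points; the training error is the loss over these points.
t_int = torch.rand(1024, 1); x_int = torch.rand(1024, 1)                 # interior
t0 = torch.zeros(256, 1);    x0 = torch.rand(256, 1)                     # t = 0
tb = torch.rand(256, 1);     xb = torch.randint(0, 2, (256, 1)).float()  # x in {0, 1}

u0 = torch.sin(torch.pi * x0)   # assumed initial displacement
v0 = torch.zeros_like(x0)       # assumed initial velocity

for step in range(2000):
    opt.zero_grad()
    r = pde_residual(model, t_int.clone(), x_int.clone())
    # initial conditions u(0, x) = u0, u_t(0, x) = v0
    t0g = t0.clone().requires_grad_(True)
    u_init = model(t0g, x0)
    ut_init = torch.autograd.grad(u_init, t0g, torch.ones_like(u_init),
                                  create_graph=True)[0]
    # homogeneous Dirichlet boundary u(t, 0) = u(t, 1) = 0
    u_bdry = model(tb, xb)
    loss = ((r**2).mean() + ((u_init - u0)**2).mean()
            + ((ut_init - v0)**2).mean() + (u_bdry**2).mean())
    loss.backward()
    opt.step()
```

The paper's bounds concern how small this kind of collocation loss, together with the number of training points, forces the true error in the $H^1([0,T];L^2(\Omega))$-norm to be; the sketch only illustrates how such a training error is assembled.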
Related papers
- Fundamental limits of overparametrized shallow neural networks for
supervised learning [11.136777922498355]
We study a two-layer neural network trained from input-output pairs generated by a teacher network with matching architecture.
Our results come in the form of bounds relating i) the mutual information between training data and network weights, or ii) the Bayes-optimal generalization error.
arXiv Detail & Related papers (2023-07-11T08:30:50Z)
- Error analysis for deep neural network approximations of parametric hyperbolic conservation laws [7.6146285961466]
We show that the approximation error can be made as small as desired with ReLU neural networks.
We provide an explicit upper bound on the generalization error in terms of the training error, number of training samples and the neural network size.
arXiv Detail & Related papers (2022-07-15T09:21:09Z)
- Error estimates for physics informed neural networks approximating the Navier-Stokes equations [6.445605125467574]
We show that the underlying PDE residual can be made arbitrarily small for tanh neural networks with two hidden layers.
The total error can be estimated in terms of the training error, network size and number of quadrature points.
arXiv Detail & Related papers (2022-03-17T14:26:17Z)
- Mean-field Analysis of Piecewise Linear Solutions for Wide ReLU Networks [83.58049517083138]
We consider a two-layer ReLU network trained via gradient descent.
We show that SGD is biased towards a simple solution.
We also provide empirical evidence that knots may occur at locations distinct from the data points.
arXiv Detail & Related papers (2021-11-03T15:14:20Z)
- Why Lottery Ticket Wins? A Theoretical Perspective of Sample Complexity on Pruned Neural Networks [79.74580058178594]
We analyze the performance of training a pruned neural network by analyzing the geometric structure of the objective function.
We show that the convex region near a desirable model with guaranteed generalization enlarges as the neural network model is pruned.
arXiv Detail & Related papers (2021-10-12T01:11:07Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU-network with standard Gaussian weights and uniformly distributed biases can solve this separation problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- Conditional physics informed neural networks [85.48030573849712]
We introduce conditional PINNs (physics informed neural networks) for estimating the solution of classes of eigenvalue problems.
We show that a single deep neural network can learn the solution of partial differential equations for an entire class of problems.
arXiv Detail & Related papers (2021-04-06T18:29:14Z)
- Vector-output ReLU Neural Network Problems are Copositive Programs: Convex Analysis of Two Layer Networks and Polynomial-time Algorithms [29.975118690758126]
We describe the semi-infinite dual of the two-layer vector-output ReLU neural network training problem.
We provide a solution which is guaranteed to be exact for certain classes of problems.
arXiv Detail & Related papers (2020-12-24T17:03:30Z)
- Higher-order Quasi-Monte Carlo Training of Deep Neural Networks [0.0]
We present a novel algorithmic approach and an error analysis leveraging Quasi-Monte Carlo points for training deep neural network (DNN) surrogates of Data-to-Observable (DtO) maps in engineering design.
arXiv Detail & Related papers (2020-09-06T11:31:42Z)
- Neural Networks are Convex Regularizers: Exact Polynomial-time Convex Optimization Formulations for Two-layer Networks [70.15611146583068]
We develop exact representations of training two-layer neural networks with rectified linear units (ReLUs).
Our theory utilizes semi-infinite duality and minimum norm regularization.
arXiv Detail & Related papers (2020-02-24T21:32:41Z)
- A Generalized Neural Tangent Kernel Analysis for Two-layer Neural Networks [87.23360438947114]
We show that noisy gradient descent with weight decay can still exhibit "kernel-like" behavior.
This implies that the training loss converges linearly up to a certain accuracy.
We also establish a novel generalization error bound for two-layer neural networks trained by noisy gradient descent with weight decay.
arXiv Detail & Related papers (2020-02-10T18:56:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.