Variational Physics Informed Neural Networks: the role of quadratures
and test functions
- URL: http://arxiv.org/abs/2109.02035v1
- Date: Sun, 5 Sep 2021 10:06:35 GMT
- Title: Variational Physics Informed Neural Networks: the role of quadratures
and test functions
- Authors: Stefano Berrone, Claudio Canuto and Moreno Pintore
- Abstract summary: We analyze how Gaussian or Newton-Cotes quadrature rules of different precisions and piecewise polynomial test functions of different degrees affect the convergence rate of Variational Physics Informed Neural Networks (VPINN).
Using a Petrov-Galerkin framework relying on an inf-sup condition, we derive an a priori error estimate in the energy norm between the exact solution and a suitable high-order piecewise interpolant of a computed neural network.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this work we analyze how Gaussian or Newton-Cotes quadrature rules of
different precisions and piecewise polynomial test functions of different
degrees affect the convergence rate of Variational Physics Informed Neural
Networks (VPINN) with respect to mesh refinement, while solving elliptic
boundary-value problems. Using a Petrov-Galerkin framework relying on an
inf-sup condition, we derive an a priori error estimate in the energy norm
between the exact solution and a suitable high-order piecewise interpolant of a
computed neural network. Numerical experiments confirm the theoretical
predictions, and also indicate that the error decay follows the same behavior
when the neural network is not interpolated. Our results suggest, somewhat
counterintuitively, that for smooth solutions the best strategy to achieve a
high decay rate of the error consists in choosing test functions of the lowest
polynomial degree, while using quadrature formulas of suitably high precision.
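To make the recommended regime concrete, here is a minimal sketch (not the authors' code; the network size, mesh, and source term are illustrative choices) of a 1D VPINN for -u'' = f on (0,1) with homogeneous Dirichlet conditions, pairing piecewise-linear hat test functions with a 3-point Gauss-Legendre rule on each element:

```python
# Minimal 1D VPINN sketch for -u'' = f on (0,1), u(0) = u(1) = 0.
# Illustrative only: network size, mesh, and source term are arbitrary choices.
import numpy as np
import torch

torch.manual_seed(0)

def f(x):                                            # manufactured source term
    return (np.pi ** 2) * torch.sin(np.pi * x)       # exact solution: sin(pi x)

net = torch.nn.Sequential(torch.nn.Linear(1, 20), torch.nn.Tanh(),
                          torch.nn.Linear(20, 1))

def u(x):                                            # trial function with exact BCs
    return x * (1.0 - x) * net(x.unsqueeze(-1)).squeeze(-1)

nodes = torch.linspace(0.0, 1.0, 11)                 # uniform mesh, h = 0.1
h = float(nodes[1] - nodes[0])
qx, qw = np.polynomial.legendre.leggauss(3)          # 3-point Gauss rule on [-1, 1]
qx = torch.tensor(qx, dtype=torch.float32)
qw = torch.tensor(qw, dtype=torch.float32)

def residuals():                                     # one weak residual per hat function
    r = []
    for i in range(1, len(nodes) - 1):
        ri = torch.zeros(())
        # rising half (v' = +1/h) then falling half (v' = -1/h) of the hat v_i
        for a, b, sgn in [(nodes[i - 1], nodes[i], 1.0),
                          (nodes[i], nodes[i + 1], -1.0)]:
            x = (0.5 * (b - a) * qx + 0.5 * (a + b)).requires_grad_(True)
            w = 0.5 * (b - a) * qw                   # mapped quadrature weights
            du = torch.autograd.grad(u(x).sum(), x, create_graph=True)[0]
            v = (x - a) / h if sgn > 0 else (b - x) / h
            # weak-form residual on this element: int u'v' dx - int f v dx
            ri = ri + torch.sum(w * (du * sgn / h - f(x) * v))
        r.append(ri)
    return torch.stack(r)

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for step in range(2000):
    opt.zero_grad()
    loss = residuals().pow(2).sum()                  # sum of squared weak residuals
    loss.backward()
    opt.step()
```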
Related papers
- A new approach to generalisation error of machine learning algorithms: Estimates and convergence [0.0]
We introduce a new approach to the estimation of the (generalisation) error and to convergence.
Our results include estimates of the error without any structural assumption on the neural networks.
arXiv Detail & Related papers (2023-06-23T20:57:31Z)
- Globally Optimal Training of Neural Networks with Threshold Activation Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z)
- Tunable Complexity Benchmarks for Evaluating Physics-Informed Neural Networks on Coupled Ordinary Differential Equations [64.78260098263489]
In this work, we assess the ability of physics-informed neural networks (PINNs) to solve increasingly complex coupled ordinary differential equations (ODEs).
We show that PINNs eventually fail to produce correct solutions to these benchmarks as their complexity increases.
We identify several reasons why this may be the case, including insufficient network capacity, poor conditioning of the ODEs, and high local curvature, as measured by the Laplacian of the PINN loss.
arXiv Detail & Related papers (2022-10-14T15:01:32Z)
- Semi-analytic PINN methods for singularly perturbed boundary value problems [0.8594140167290099]
We propose a new semi-analytic physics informed neural network (PINN) to solve singularly perturbed boundary value problems.
The PINN is a scientific machine learning framework that offers a promising perspective for finding numerical solutions to partial differential equations.
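For readers new to the framework, here is a generic strong-form PINN sketch for a singularly perturbed problem; it shows only the standard collocation loss, not the paper's semi-analytic enrichment, and the value of eps, the network, and the point count are all illustrative:

```python
# Generic strong-form PINN sketch (not the paper's semi-analytic method):
# collocation residual for the singularly perturbed BVP
#   -eps * u'' + u = 1 on (0,1), u(0) = u(1) = 0, with eps chosen for illustration.
import torch

eps = 1e-2
net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
x = torch.rand(256, 1, requires_grad=True)           # interior collocation points

def pde_residual(x):
    u = x * (1 - x) * net(x)                         # boundary conditions built in
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    return -eps * d2u + u - 1.0                      # pointwise PDE residual

opt = torch.optim.Adam(net.parameters(), lr=1e-3)
for _ in range(2000):
    opt.zero_grad()
    pde_residual(x).pow(2).mean().backward()
    opt.step()
```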
arXiv Detail & Related papers (2022-08-19T04:26:40Z)
- Physics-Informed Neural Networks for Quantum Eigenvalue Problems [1.2891210250935146]
Eigenvalue problems are critical to several fields of science and engineering.
We use unsupervised neural networks for discovering eigenfunctions and eigenvalues for differential eigenvalue problems.
The network optimization is data-free and depends solely on the predictions of the neural network.
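A minimal sketch of the data-free idea, assuming the simple 1D operator -u'' on (0,1) rather than the paper's exact loss: treat the eigenvalue as a trainable parameter and add a normalization penalty that excludes the trivial solution:

```python
# Data-free eigenpair sketch for -u'' = lam * u on (0,1), u(0) = u(1) = 0
# (illustrative; the paper's exact loss and operator may differ).
import torch

net = torch.nn.Sequential(torch.nn.Linear(1, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
lam = torch.nn.Parameter(torch.tensor(5.0))          # trainable eigenvalue guess
x = torch.linspace(0.0, 1.0, 128).unsqueeze(-1).requires_grad_(True)

def loss_fn():
    u = x * (1 - x) * net(x)                         # hard boundary conditions
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    residual = (-d2u - lam * u).pow(2).mean()
    norm = (u.pow(2).mean() - 1.0).pow(2)            # exclude the trivial solution u = 0
    return residual + norm

opt = torch.optim.Adam(list(net.parameters()) + [lam], lr=1e-3)
for _ in range(5000):
    opt.zero_grad()
    loss_fn().backward()
    opt.step()
# a successful run drives lam toward pi**2, the smallest eigenvalue
```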
arXiv Detail & Related papers (2022-02-24T18:29:39Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the computation graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
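As a rough illustration of the mechanism (not the paper's architecture), a single message passing step with learned edge messages can be written as follows; on a 1D grid with neighbor edges, stacking such steps can emulate a finite-difference stencil, which is the sense in which such solvers contain classical methods:

```python
# Minimal message passing step (illustrative of the idea, not the paper's model).
import torch

class MPStep(torch.nn.Module):
    """One step: learned per-edge messages, summed per node, then a node update."""
    def __init__(self, dim):
        super().__init__()
        self.msg = torch.nn.Sequential(torch.nn.Linear(2 * dim, dim), torch.nn.ReLU())
        self.upd = torch.nn.Sequential(torch.nn.Linear(2 * dim, dim), torch.nn.ReLU())

    def forward(self, h, edges):
        src, dst = edges                                   # (E,) long tensors
        m = self.msg(torch.cat([h[src], h[dst]], dim=-1))  # message per edge
        agg = torch.zeros_like(h).index_add_(0, dst, m)    # sum incoming messages
        return self.upd(torch.cat([h, agg], dim=-1))       # update node states

# usage on a 1D grid: neighbors exchange states along a bidirectional chain
n = 16
idx = torch.arange(n - 1)
edges = (torch.cat([idx, idx + 1]), torch.cat([idx + 1, idx]))
h = torch.randn(n, 8)
h = MPStep(8)(h, edges)
```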
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
- Meta-Solver for Neural Ordinary Differential Equations [77.8918415523446]
We investigate how variability in the space of solvers can improve the performance of neural ODEs.
We show that the right choice of solver parameterization can significantly affect the robustness of neural ODE models to adversarial attacks.
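One concrete instance of a parameterized solver space (an illustrative sketch, not necessarily the paper's exact family) is the one-parameter family of explicit second-order Runge-Kutta methods, where alpha below is the tunable solver parameter:

```python
# One-parameter family of explicit second-order Runge-Kutta steps.
# alpha = 0.5 gives the midpoint method, alpha = 1.0 gives Heun's method.
import numpy as np

def rk2_step(f, x, h, alpha):
    k1 = f(x)
    k2 = f(x + alpha * h * k1)
    return x + h * ((1.0 - 1.0 / (2.0 * alpha)) * k1 + (1.0 / (2.0 * alpha)) * k2)

# example: integrate dx/dt = -x from x(0) = 1 up to t = 1
x = 1.0
for _ in range(100):
    x = rk2_step(lambda x: -x, x, 0.01, alpha=0.5)
print(x, np.exp(-1.0))                               # both close to exp(-1)
```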
arXiv Detail & Related papers (2021-03-15T17:26:34Z)
- Physics-Informed Neural Network Method for Solving One-Dimensional Advection Equation Using PyTorch [0.0]
The PINNs approach allows training neural networks while respecting the PDEs as a strong constraint in the optimization.
In standard small-scale circulation simulations, it is shown that the conventional approach incorporates a pseudo-diffusive effect that is almost as large as the effect of the turbulent diffusion model.
Of all the schemes tested, only the PINNs approximation accurately predicted the outcome.
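For reference, the advection residual that such a PINN penalizes can be sketched in PyTorch as follows (illustrative only; the paper's network, sampling, and initial/boundary terms are not reproduced):

```python
# Sketch of the advection residual u_t + c * u_x = 0 in PyTorch.
import torch

c = 1.0
net = torch.nn.Sequential(torch.nn.Linear(2, 32), torch.nn.Tanh(),
                          torch.nn.Linear(32, 1))
xt = torch.rand(512, 2, requires_grad=True)          # columns: (x, t)

u = net(xt)
grads = torch.autograd.grad(u.sum(), xt, create_graph=True)[0]
u_x, u_t = grads[:, 0:1], grads[:, 1:2]              # spatial and temporal derivatives
loss = (u_t + c * u_x).pow(2).mean()                 # PDE enforced as a penalty;
                                                     # IC/BC terms are added in practice
```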
arXiv Detail & Related papers (2021-03-15T05:39:17Z)
- Neural Network Approximations of Compositional Functions With Applications to Dynamical Systems [3.660098145214465]
We develop an approximation theory for compositional functions and their neural network approximations.
We identify a set of key features of compositional functions and the relationship between the features and the complexity of neural networks.
In addition to function approximation, we prove several formulae for upper bounds on the error of neural networks.
arXiv Detail & Related papers (2020-12-03T04:40:25Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can address the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
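The classical mechanism behind control variates is easy to sketch (a concept-level example, not the paper's neural light-field construction): if g approximates the integrand f and the integral G of g is known, then G + mean(f - g) is an unbiased estimator whose variance shrinks as g approaches f:

```python
# Concept-level control variate: estimate the integral of f over (0,1)
# using a surrogate g with known integral G.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(x)                              # integrand; true value is e - 1
g = lambda x: 1.0 + x                                # cheap surrogate for f
G = 1.5                                              # integral of g over (0,1), known

x = rng.random(10_000)
plain = f(x).mean()                                  # plain Monte Carlo estimate
cv = G + (f(x) - g(x)).mean()                        # unbiased, lower-variance estimate
```

In the neural variant, g is a network trained with a variance-minimizing loss, and its integral G is made computable by construction.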