Parametric Complexity Bounds for Approximating PDEs with Neural Networks
- URL: http://arxiv.org/abs/2103.02138v1
- Date: Wed, 3 Mar 2021 02:42:57 GMT
- Title: Parametric Complexity Bounds for Approximating PDEs with Neural Networks
- Authors: Tanya Marwah, Zachary C. Lipton, Andrej Risteski
- Abstract summary: We prove that when a PDE's coefficients are representable by small neural networks, the parameters required to approximate its solution scale polynomially with the input dimension $d$ and are proportional to the parameter counts of the coefficient neural networks.
Our proof is based on constructing a neural network which simulates gradient descent in an appropriate Hilbert space which converges to the solution of the PDE.
- Score: 41.46028070204925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent empirical results show that deep networks can approximate solutions to
high dimensional PDEs, seemingly escaping the curse of dimensionality. However,
many open questions remain regarding the theoretical basis for such
approximations, including the number of parameters required. In this paper, we
investigate the representational power of neural networks for approximating
solutions to linear elliptic PDEs with Dirichlet boundary conditions. We prove
that when a PDE's coefficients are representable by small neural networks, the
parameters required to approximate its solution scale polynomially with the
input dimension $d$ and are proportional to the parameter counts of the
coefficient neural networks. Our proof is based on constructing a neural
network which simulates gradient descent in an appropriate Hilbert space which
converges to the solution of the PDE. Moreover, we bound the size of the neural
network needed to represent each iterate in terms of the neural network
representing the previous iterate, resulting in a final network whose parameter
count depends polynomially on $d$ and does not depend on the volume of the
domain.
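To make the proof's mechanism concrete, here is a minimal numerical sketch, assuming a standard finite-difference discretization of a 1D Poisson problem (the paper itself works in function space with neural-network coefficients and never forms a matrix): gradient descent on the energy $E(u) = \frac{1}{2}\langle u, Au\rangle - \langle f, u\rangle$ yields affine iterates $u_{k+1} = u_k - \eta(Au_k - f)$, so the $k$-th iterate is realized by composing $k$ identical fixed layers, mirroring the iterate-by-iterate size bound.

```python
import numpy as np

# Toy 1D analogue: -u'' = f on (0, 1) with u(0) = u(1) = 0, discretized on
# n interior grid points. Each gradient-descent iterate is one affine "layer";
# the unrolled iteration is the network that approximates the solution.
n = 15
h = 1.0 / (n + 1)
x = np.arange(1, n + 1) * h
A = (2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
f = np.pi**2 * np.sin(np.pi * x)          # exact solution u*(x) = sin(pi x)

eta = 1.0 / np.linalg.norm(A, 2)          # step size that makes the map contract
u = np.zeros(n)                           # iterate 0: the zero "network"
for k in range(2000):
    u = u - eta * (A @ u - f)             # one fixed layer per iterate

print("max error after 2000 layers:", np.abs(u - np.sin(np.pi * x)).max())
```

Since every step appends the same fixed affine layer, the parameter count of the unrolled network grows linearly in the number of iterations, which is the discrete analogue of the bound described above.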
Related papers
- Global Convergence of Deep Galerkin and PINNs Methods for Solving Partial Differential Equations [0.0]
A variety of deep learning methods have been developed to try to solve high-dimensional PDEs by approximating the solution using a neural network.
We prove global convergence for one of the commonly-used deep learning algorithms for solving PDEs, the Deep Galerkin Method (DGM).
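For intuition, here is a minimal sketch of the kind of sampled-residual objective that DGM-style methods minimize; the function names and arguments are illustrative assumptions, and a real implementation would represent u by a neural network and obtain derivatives via automatic differentiation.

```python
import numpy as np

def dgm_style_loss(u, d2u, x_interior, x_boundary, f, g):
    # Monte Carlo estimate of a DGM-style objective for -u'' = f on (0, 1)
    # with Dirichlet data g: mean squared PDE residual at sampled interior
    # points plus a boundary-mismatch penalty.
    residual = d2u(x_interior) + f(x_interior)   # u'' + f, zero iff -u'' = f
    boundary = u(x_boundary) - g(x_boundary)     # Dirichlet mismatch
    return np.mean(residual**2) + np.mean(boundary**2)

# Sanity check with the exact solution u*(x) = sin(pi x), f = pi^2 sin(pi x):
rng = np.random.default_rng(0)
u_star = lambda x: np.sin(np.pi * x)
d2u_star = lambda x: -np.pi**2 * np.sin(np.pi * x)
f = lambda x: np.pi**2 * np.sin(np.pi * x)
g = lambda x: np.zeros_like(x)
print(dgm_style_loss(u_star, d2u_star, rng.random(1024),
                     np.array([0.0, 1.0]), f, g))   # ~0 at the true solution
```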
arXiv Detail & Related papers (2023-05-10T09:20:11Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues.
First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver which prevents the network from getting ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Multilevel CNNs for Parametric PDEs [0.0]
We combine concepts from multilevel solvers for partial differential equations with neural network based deep learning.
An in-depth theoretical analysis shows that the proposed architecture is able to approximate multigrid V-cycles to arbitrary precision.
We find substantial improvements over state-of-the-art deep learning-based solvers.
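For reference, below is a plain numerical multigrid V-cycle, the scheme the paper shows these networks can approximate; it is a minimal textbook sketch for 1D Poisson (damped Jacobi smoothing, full-weighting restriction, linear-interpolation prolongation), not the paper's architecture or code.

```python
import numpy as np

def apply_A(u, h):
    # Discrete 1D operator -u'' on interior points, zero Dirichlet boundaries.
    return (2 * u - np.r_[u[1:], 0.0] - np.r_[0.0, u[:-1]]) / h**2

def v_cycle(u, f, h, n_smooth=3):
    # One V-cycle for -u'' = f on n = 2^k - 1 interior points: pre-smooth,
    # restrict the residual, recurse, prolong the correction, post-smooth.
    def jacobi(v, steps):                        # damped Jacobi smoother
        for _ in range(steps):
            v = v + (2.0 / 3.0) * (h**2 / 2.0) * (f - apply_A(v, h))
        return v
    if u.size == 1:                              # coarsest grid: solve exactly
        return f * h**2 / 2.0
    u = jacobi(u, n_smooth)
    r = f - apply_A(u, h)                        # fine-grid residual
    r_even = r[0::2]
    r_c = 0.5 * r[1::2] + 0.25 * (r_even[:-1] + r_even[1:])  # full weighting
    e_c = v_cycle(np.zeros_like(r_c), r_c, 2 * h)            # coarse correction
    e = np.zeros_like(u)                         # prolong: linear interpolation
    e[1::2] = e_c
    e[0], e[-1] = 0.5 * e_c[0], 0.5 * e_c[-1]
    e[2:-1:2] = 0.5 * (e_c[:-1] + e_c[1:])
    return jacobi(u + e, n_smooth)

h = 1.0 / 64
x = np.arange(1, 64) * h                         # 63 = 2^6 - 1 interior points
f = np.pi**2 * np.sin(np.pi * x)                 # exact solution sin(pi x)
u = np.zeros_like(x)
for _ in range(10):
    u = v_cycle(u, f, h)
print("max error:", np.abs(u - np.sin(np.pi * x)).max())
```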
arXiv Detail & Related papers (2023-04-01T21:11:05Z)
- Solving parametric partial differential equations with deep rectified quadratic unit neural networks [38.16617079681564]
In this study, we investigate the expressive power of deep rectified quadratic unit (ReQU) neural networks for approximating the solution maps of parametric PDEs.
We derive an upper bound $\mathcal{O}\left(d^{3}\log_{2}^{q}\log_{2}(1/\epsilon)\right)$ on the size of the deep ReQU neural network required to achieve accuracy $\epsilon$.
arXiv Detail & Related papers (2022-03-14T10:15:29Z)
- Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method, Lie point symmetry data augmentation, which can partially alleviate the data demands of neural PDE solvers by improving their sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
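To illustrate what such symmetry-based augmentation can look like, here is a minimal sketch assuming Burgers' equation u_t + u u_x = nu * u_xx with periodic boundary conditions as the running example; the function name, its arguments, and the choice of grid-aligned shifts are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def augment_burgers(u, dx, dt, rng):
    # Generate a new training sample from a Burgers solution snapshot array
    # u[t, x] (periodic in x) using two Lie point symmetries of the equation:
    # space translation u(x, t) -> u(x - a, t) and the Galilean boost
    # u(x, t) -> u(x - v t, t) + v. Shifts are kept on-grid so the
    # transformed array is exact, with no interpolation error.
    nt, nx = u.shape
    k = rng.integers(nx)                        # translation by a = k * dx
    u_aug = np.roll(u, k, axis=1)
    m = rng.integers(-3, 4)                     # boost: m grid cells per step
    v = m * dx / dt                             # so that v * dt = m * dx
    for i in range(nt):
        u_aug[i] = np.roll(u_aug[i], m * i) + v
    return u_aug
```

Because both transformations map exact solutions to exact solutions, each augmented array is a valid new training trajectory at zero simulation cost.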
arXiv Detail & Related papers (2022-02-15T18:43:17Z)
- Solving PDEs on Unknown Manifolds with Machine Learning [8.220217498103315]
This paper presents a mesh-free computational framework and machine learning theory for solving elliptic PDEs on unknown manifolds.
We show that the proposed NN solver can robustly generalize the PDE solution on new data points, with generalization errors that are almost identical to the training errors.
arXiv Detail & Related papers (2021-06-12T03:55:15Z)
- PDE-constrained Models with Neural Network Terms: Optimization and Global Convergence [0.0]
Recent research has used deep learning to develop partial differential equation (PDE) models in science and engineering.
We rigorously study the optimization of a class of linear elliptic PDEs with neural network terms.
We train a neural network model for an application in fluid mechanics, in which the neural network functions as a closure model for the Reynolds-averaged Navier-Stokes equations.
arXiv Detail & Related papers (2021-05-18T16:04:33Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Neural Operator: Graph Kernel Network for Partial Differential Equations [57.90284928158383]
The goal of this work is to generalize neural networks so that they can learn mappings between infinite-dimensional spaces (operators).
We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators.
Experiments confirm that the proposed graph kernel network does have the desired properties and show competitive performance compared to the state of the art solvers.
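As a rough sketch of the building block involved, one layer of such an operator applies a learned kernel integral, here approximated by a Monte Carlo sum over sample points; in the paper the kernel is itself a small neural network evaluated on point pairs (and PDE coefficients) and the sum runs over graph neighbors, so the names and shapes below are illustrative assumptions, not the paper's code.

```python
import numpy as np

def kernel_layer(u, xs, W, kappa):
    # One kernel-integral layer: v(x_i) = relu(W u(x_i)
    # + (1/N) * sum_j kappa(x_i, x_j) u(x_j)).
    # u: (N, c) features at points xs: (N, d); W: (c, c) pointwise weights;
    # kappa: maps a pair of points to a (c, c) matrix.
    N = xs.shape[0]
    v = u @ W.T                                  # local (pointwise) term
    for i in range(N):
        K = np.stack([kappa(xs[i], xs[j]) for j in range(N)])  # (N, c, c)
        v[i] += np.einsum('jab,jb->a', K, u) / N # Monte Carlo integral term
    return np.maximum(v, 0.0)                    # ReLU nonlinearity

# Usage with a fixed Gaussian kernel standing in for the learned one:
rng = np.random.default_rng(0)
xs = rng.random((32, 2)); u = rng.random((32, 4))
W = rng.standard_normal((4, 4)) / 2
kappa = lambda x, y: np.exp(-np.sum((x - y)**2)) * np.eye(4)
v = kernel_layer(u, xs, W, kappa)                # (32, 4) output features
```

Because the layer is defined through point evaluations of a kernel rather than a fixed grid, the same weights can be applied at any discretization, which is what the discretization-invariance experiments above refer to.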
arXiv Detail & Related papers (2020-03-07T01:56:20Z)
- Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)