Global Convergence of Deep Galerkin and PINNs Methods for Solving
Partial Differential Equations
- URL: http://arxiv.org/abs/2305.06000v1
- Date: Wed, 10 May 2023 09:20:11 GMT
- Authors: Deqing Jiang, Justin Sirignano, Samuel N. Cohen
- Abstract summary: A variety of deep learning methods have been developed to try and solve high-dimensional PDEs by approximating the solution using a neural network.
We prove global convergence for one of the commonly-used deep learning algorithms for solving PDEs, the Deep Galerkin Method (DGM).
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Numerically solving high-dimensional partial differential equations (PDEs) is
a major challenge. Conventional methods, such as finite difference methods, are
unable to solve high-dimensional PDEs due to the curse-of-dimensionality. A
variety of deep learning methods have been recently developed to try and solve
high-dimensional PDEs by approximating the solution using a neural network. In
this paper, we prove global convergence for one of the commonly-used deep
learning algorithms for solving PDEs, the Deep Galerkin Method (DGM). DGM
trains a neural network approximator to solve the PDE using stochastic gradient
descent. We prove that, as the number of hidden units in the single-layer
network goes to infinity (i.e., in the "wide network limit"), the trained
neural network converges to the solution of an infinite-dimensional linear
ordinary differential equation (ODE). The PDE residual of the limiting
approximator converges to zero as the training time $\rightarrow \infty$. Under
mild assumptions, this convergence also implies that the neural network
approximator converges to the solution of the PDE. A closely related class of
deep learning methods for PDEs is Physics Informed Neural Networks (PINNs).
Using the same mathematical techniques, we can prove a similar global
convergence result for the PINN neural network approximators. Both proofs
require analyzing a kernel function in the limit ODE governing the evolution of
the limit neural network approximator. A key technical challenge is that the
kernel function, which is a composition of the PDE operator and the neural
tangent kernel (NTK) operator, lacks a spectral gap, therefore requiring a
careful analysis of its properties.
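The training procedure the abstract describes — a single-hidden-layer network fitted by gradient descent on the PDE residual plus a boundary penalty — can be illustrated on a 1D toy problem. The sketch below is not the paper's setup: the test problem, network width, penalty weight, and step-size rule are all illustrative choices. In the wide-network regime the paper analyzes, the training dynamics linearize around the initialization, so for simplicity the sketch freezes the hidden weights (random features) and trains only the output weights, which makes the residual loss quadratic.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: u''(x) = -sin(x) on [0, pi] with u(0) = u(pi) = 0,
# whose exact solution is u(x) = sin(x).
x = np.linspace(0.0, np.pi, 128)
f = -np.sin(x)

# Single-hidden-layer ansatz u_a(x) = sum_i a_i * tanh(w_i * x + b_i).
# Hidden weights are frozen at initialization; only `a` is trained.
n_hidden = 200
w = rng.normal(0.0, 2.0, n_hidden)
b = rng.uniform(-np.pi, np.pi, n_hidden)

def features(pts):
    """Return feature values phi and their second x-derivatives psi."""
    z = np.outer(pts, w) + b                  # (n_points, n_hidden)
    t = np.tanh(z)
    phi = t                                   # tanh(w x + b)
    psi = -2.0 * t * (1.0 - t**2) * w**2      # d^2/dx^2 tanh(w x + b)
    return phi, psi

phi, psi = features(x)
phi_bc, _ = features(np.array([0.0, np.pi]))  # boundary points

lam = 10.0  # boundary penalty weight (illustrative)

# Quadratic loss L(a) = mean((psi @ a - f)^2) + lam * mean((phi_bc @ a)^2),
# with gradient 2 * (H @ a + g0). Step size 1/trace(H) is at most
# 1/lambda_max(H), so gradient descent decreases L monotonically.
H = psi.T @ psi / len(x) + lam * phi_bc.T @ phi_bc / 2.0
g0 = -psi.T @ f / len(x)
lr = 1.0 / np.trace(H)

def loss(a):
    r = psi @ a - f
    return np.mean(r**2) + lam * np.mean((phi_bc @ a) ** 2)

a = np.zeros(n_hidden)
loss_init = loss(a)
for _ in range(5000):
    a -= lr * 2.0 * (H @ a + g0)
loss_final = loss(a)
print(f"initial residual loss {loss_init:.4f}, final {loss_final:.4e}")
```

Because only the output layer is trained, the descent follows a linear ODE in the weights — a finite-dimensional analogue of the infinite-dimensional linear limit ODE described above, with `H` playing the role of the composed (PDE operator and NTK) kernel whose spectrum governs the decay of the residual.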
Related papers
- Solving partial differential equations with sampled neural networks (arXiv, 2024-05-31)
  Approximation of solutions to partial differential equations (PDEs) is an important problem in computational science and engineering. We discuss how sampling the hidden weights and biases of the ansatz network from data-agnostic and data-dependent probability distributions allows us to progress on both challenges.
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks (arXiv, 2023-04-28)
  We show that current methods based on this approach suffer from two key issues. First, following the ODE produces an uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors. We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
- Solving High-Dimensional PDEs with Latent Spectral Models (arXiv, 2023-01-30)
  We present Latent Spectral Models (LSM), an efficient and precise solver for high-dimensional PDEs. Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space. LSM achieves consistent state-of-the-art performance, with an average relative gain of 11.5% on seven benchmarks.
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations (arXiv, 2022-08-02)
  We propose an approach to solving partial differential equations (PDEs) using a set of neural networks. We regress a set of neural networks onto a reduced-order Proper Orthogonal Decomposition (POD) basis. These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced-order approximation to the PDE.
- Neural Q-learning for solving PDEs (arXiv, 2022-03-31)
  We develop a new numerical method for solving elliptic-type PDEs by adapting the Q-learning algorithm from reinforcement learning. Our "Q-PDE" algorithm is mesh-free and therefore has the potential to overcome the curse of dimensionality. The numerical performance of the Q-PDE algorithm is studied for several elliptic PDEs.
- Solving PDEs on Unknown Manifolds with Machine Learning (arXiv, 2021-06-12)
  This paper presents a mesh-free computational framework and machine learning theory for solving elliptic PDEs on unknown manifolds. We show that the proposed NN solver can robustly generalize the PDE solution to new data points, with errors almost identical to those on the training points.
- PDE-constrained Models with Neural Network Terms: Optimization and Global Convergence (arXiv, 2021-05-18)
  Recent research has used deep learning to develop partial differential equation (PDE) models in science and engineering. We rigorously study the optimization of a class of linear elliptic PDEs with neural network terms. We train a neural network model for an application in fluid mechanics, in which the neural network functions as a closure model for the Reynolds-averaged Navier-Stokes equations.
- dNNsolve: an efficient NN-based PDE solver (arXiv, 2021-03-15)
  We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs. We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
- Multipole Graph Neural Operator for Parametric Partial Differential Equations (arXiv, 2020-06-16)
  One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data. We propose a novel multi-level graph neural network framework that captures interactions at all ranges with only linear complexity. Experiments confirm that our multipole graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
- On the convergence of physics informed neural networks for linear second-order elliptic and parabolic type PDEs (arXiv, 2020-04-03)
  Physics informed neural networks (PINNs) are deep learning based techniques for solving partial differential equations (PDEs). We show that the sequence of minimizers strongly converges to the PDE solution in $C^0$. To the best of our knowledge, this is the first theoretical work that shows the consistency of PINNs.
- Neural Operator: Graph Kernel Network for Partial Differential Equations (arXiv, 2020-03-07)
  This work generalizes neural networks so that they can learn mappings between infinite-dimensional spaces (operators). We formulate approximation of the infinite-dimensional mapping by composing nonlinear activation functions and a class of integral operators. Experiments confirm that the proposed graph kernel network has the desired properties and shows competitive performance compared to state-of-the-art solvers.
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.