A semigroup method for high dimensional elliptic PDEs and eigenvalue
problems based on neural networks
- URL: http://arxiv.org/abs/2105.03480v1
- Date: Fri, 7 May 2021 19:49:06 GMT
- Title: A semigroup method for high dimensional elliptic PDEs and eigenvalue
problems based on neural networks
- Authors: Haoya Li, Lexing Ying
- Abstract summary: We propose a semigroup computation method for solving high-dimensional elliptic partial differential equations (PDEs) and the associated eigenvalue problems based on neural networks.
For the PDE problems, we reformulate the original equations as variational problems with the help of semigroup operators and then solve the variational problems with neural network (NN) parameterization.
For eigenvalue problems, a primal-dual method is proposed, resolving the constraint with a scalar dual variable.
- Score: 1.52292571922932
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we propose a semigroup method for solving high-dimensional
elliptic partial differential equations (PDEs) and the associated eigenvalue
problems based on neural networks. For the PDE problems, we reformulate the
original equations as variational problems with the help of semigroup operators
and then solve the variational problems with neural network (NN)
parameterization. The main advantages are that no mixed second-order derivative
computation is needed during the stochastic gradient descent training and that
the boundary conditions are taken into account automatically by the semigroup
operator. For eigenvalue problems, a primal-dual method is proposed, resolving
the constraint with a scalar dual variable. Numerical results are provided to
demonstrate the performance of the proposed methods.
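To make the mechanism above concrete, here is a minimal sketch, assuming a model problem (-Δ + V)u = f on R^d, a one-step Feynman-Kac approximation of the semigroup e^{-t(-Δ+V)}, and a plain least-squares residual in place of the paper's exact variational functional (the network, potential, and right-hand side are illustrative choices). It shows why applying the semigroup by Monte Carlo sampling yields a loss built only from forward evaluations of the network, so SGD never needs mixed second-order derivatives:

```python
# Illustrative sketch only: a fixed-point residual built from a one-step
# Feynman-Kac approximation of P_t = exp(-t(-Laplacian + V)); not the paper's
# exact variational formulation, and boundary conditions are not modeled here.
import torch

d, t, n_mc = 10, 0.01, 64                 # dimension, semigroup time, MC samples

u = torch.nn.Sequential(                  # NN ansatz u_theta: R^d -> R
    torch.nn.Linear(d, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, 128), torch.nn.Tanh(),
    torch.nn.Linear(128, 1),
)

def V(x):                                 # example potential
    return (x ** 2).sum(dim=-1, keepdim=True)

def f(x):                                 # example right-hand side
    return torch.ones(x.shape[0], 1)

def semigroup_residual(x):
    """If u solves (-Lap + V)u = f then u = P_t u + int_0^t P_s f ds, so
    r(x) := u(x) - (P_t u)(x) - t f(x) is O(t^2), with
    (P_t u)(x) ~ exp(-t V(x)) * E_Z[u(x + sqrt(2 t) Z)],  Z ~ N(0, I)."""
    z = torch.randn(n_mc, *x.shape)                    # Brownian increments
    pt_u = torch.exp(-t * V(x)) * u(x + (2 * t) ** 0.5 * z).mean(dim=0)
    return u(x) - pt_u - t * f(x)

opt = torch.optim.Adam(u.parameters(), lr=1e-3)
for step in range(1000):
    x = torch.randn(256, d)               # sampled interior points
    loss = semigroup_residual(x).pow(2).mean()
    opt.zero_grad()
    loss.backward()                        # only first-order autodiff through u
    opt.step()
```

The squared Monte Carlo residual carries a small sampling bias that this sketch ignores. Boundary conditions, which the abstract says are absorbed by the semigroup operator, are not modeled in this whole-space setting, and the primal-dual update with a scalar dual variable for the eigenvalue constraint is likewise omitted.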
Related papers
- A Natural Primal-Dual Hybrid Gradient Method for Adversarial Neural Network Training on Solving Partial Differential Equations [9.588717577573684]
We propose a scalable preconditioned primal-dual hybrid gradient algorithm for solving partial differential equations (PDEs).
We compare the performance of the proposed method with several commonly used deep learning algorithms.
The numerical results suggest that the proposed method performs efficiently and robustly and converges more stably.
arXiv Detail & Related papers (2024-11-09T20:39:10Z)
- Deep Backward and Galerkin Methods for the Finite State Master Equation [12.570464662548787]
This paper proposes and analyzes two neural network methods to solve the master equation for finite-state mean field games.
We prove two types of results: there exist neural networks that make the algorithms' loss functions arbitrarily small, and conversely, if the losses are small, then the neural networks are good approximations of the master equation's solution.
arXiv Detail & Related papers (2024-03-08T01:12:11Z)
- An Extreme Learning Machine-Based Method for Computational PDEs in Higher Dimensions [1.2981626828414923]
We present two effective methods for solving high-dimensional partial differential equations (PDEs) based on randomized neural networks.
We present ample numerical simulations for a number of high-dimensional linear/nonlinear stationary/dynamic PDEs to demonstrate their performance.
arXiv Detail & Related papers (2023-09-13T15:59:02Z)
- A Stable and Scalable Method for Solving Initial Value PDEs with Neural Networks [52.5899851000193]
We show that current methods based on this approach suffer from two key issues; the first is that following the ODE produces uncontrolled growth in the conditioning of the problem, ultimately leading to unacceptably large numerical errors.
We develop an ODE-based IVP solver that prevents the network from becoming ill-conditioned and runs in time linear in the number of parameters.
arXiv Detail & Related papers (2023-04-28T17:28:18Z)
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
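A minimal sketch of the pipeline described in the entry above. The snapshot data, network sizes, 1D grid, and parameter dimension below are illustrative placeholders, and the branch-network training loop is omitted; this is not the authors' implementation:

```python
# Sketch: (1) build a POD basis from solution snapshots, (2) regress a network
# onto the basis, (3) map PDE parameters to basis coefficients with a branch
# network, (4) assemble the reduced-order approximation.
import torch

n_x, n_snap, r, p = 200, 50, 8, 3        # grid size, snapshots, POD rank, #PDE params

x_grid = torch.linspace(0.0, 1.0, n_x).unsqueeze(1)   # illustrative 1D grid
snapshots = torch.randn(n_x, n_snap)                   # placeholder snapshot matrix

# 1) Reduced-order POD basis from an SVD of the snapshots.
U, S, Vh = torch.linalg.svd(snapshots, full_matrices=False)
pod_basis = U[:, :r]                                   # (n_x, r)

# 2) Regress a network onto the POD basis: x -> r basis values.
basis_net = torch.nn.Sequential(
    torch.nn.Linear(1, 64), torch.nn.Tanh(), torch.nn.Linear(64, r))
opt = torch.optim.Adam(basis_net.parameters(), lr=1e-3)
for step in range(2000):
    loss = (basis_net(x_grid) - pod_basis).pow(2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

# 3) Branch network: PDE parameters -> POD coefficients (training loop omitted;
#    it could be fit to the snapshots' projected coefficients or a PDE residual).
branch_net = torch.nn.Sequential(
    torch.nn.Linear(p, 64), torch.nn.Tanh(), torch.nn.Linear(64, r))

# 4) Reduced-order approximation u(x; mu) = sum_i c_i(mu) * N_i(x).
def reduced_solution(x, mu):
    return (basis_net(x) * branch_net(mu)).sum(dim=-1, keepdim=True)

u_approx = reduced_solution(x_grid, torch.randn(p))    # (n_x, 1)
```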
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian-smoothed model and show that, via Stein's identity, its second-order derivatives can be calculated efficiently without back-propagation.
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
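The estimator that entry describes can be sketched as follows, assuming the solution is parameterized as a Gaussian-smoothed network u_sigma(x) = E_{eps ~ N(0, sigma^2 I)}[net(x + eps)]; the constants come from Stein's identity, while the paper's variance reduction and training setup are not reproduced:

```python
# Monte Carlo estimates of the gradient and Laplacian of a Gaussian-smoothed
# network via Stein's identity -- no nested (second-order) back-propagation.
# Illustrative sketch; the paper's variance-reduced estimator is not shown.
import torch

d, sigma, n_mc = 5, 0.1, 4096

net = torch.nn.Sequential(
    torch.nn.Linear(d, 64), torch.nn.Tanh(), torch.nn.Linear(64, 1))

def smoothed_grad_and_laplacian(x):
    """For u_sigma(x) = E[net(x + eps)], eps ~ N(0, sigma^2 I):
       grad u_sigma(x) = E[eps * net(x + eps)] / sigma^2
       lap  u_sigma(x) = E[(|eps|^2 - d * sigma^2) * net(x + eps)] / sigma^4
    """
    eps = sigma * torch.randn(n_mc, d)
    vals = net(x.unsqueeze(0) + eps)                              # (n_mc, 1)
    grad = (eps * vals).mean(dim=0) / sigma**2                    # (d,)
    lap = (((eps**2).sum(dim=1, keepdim=True) - d * sigma**2) * vals).mean() / sigma**4
    return grad, lap

x = torch.randn(d)
grad_u, lap_u = smoothed_grad_and_laplacian(x)
# These estimates can be plugged into a PINN residual such as -lap_u - f(x),
# so training only back-propagates through single forward passes of net.
```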
- Learning via nonlinear conjugate gradients and depth-varying neural ODEs [5.565364597145568]
The inverse problem of supervised reconstruction of depth-variable parameters in a neural ordinary differential equation (NODE) is considered.
The proposed parameter reconstruction is done for a general first order differential equation by minimizing a cost functional.
The sensitivity problem can estimate changes in the network output under perturbation of the trained parameters.
arXiv Detail & Related papers (2022-02-11T17:00:48Z)
- Message Passing Neural PDE Solvers [60.77761603258397]
We build a neural message passing solver, replacing all heuristically designed components in the graph with backprop-optimized neural function approximators.
We show that neural message passing solvers representationally contain some classical methods, such as finite differences, finite volumes, and WENO schemes.
We validate our method on various fluid-like flow problems, demonstrating fast, stable, and accurate performance across different domain topologies, equation parameters, discretizations, etc., in 1D and 2D.
arXiv Detail & Related papers (2022-02-07T17:47:46Z)
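A much-simplified sketch of one message-passing update on a 1D chain of grid nodes, to illustrate the kind of learned solver the entry above describes (the feature choices and sizes are assumptions; the paper's full architecture and training are not reproduced). Because the edge and node networks can realize linear maps, updates of this form can in principle reproduce a finite-difference stencil:

```python
# One message-passing step: edge messages from neighboring solution values and
# relative positions, summed at each node, followed by a learned node update.
import torch

edge_mlp = torch.nn.Sequential(
    torch.nn.Linear(3, 32), torch.nn.ReLU(), torch.nn.Linear(32, 16))
node_mlp = torch.nn.Sequential(
    torch.nn.Linear(17, 32), torch.nn.ReLU(), torch.nn.Linear(32, 1))

def mp_step(u, x):
    """u: (n,) nodal solution values, x: (n,) node coordinates on a 1D chain."""
    left = torch.arange(0, len(u) - 1)               # neighbor edges, both directions
    right = left + 1
    senders = torch.cat([left, right])
    receivers = torch.cat([right, left])
    feats = torch.stack([u[senders], u[receivers], x[senders] - x[receivers]], dim=1)
    msgs = edge_mlp(feats)                            # (n_edges, 16)
    agg = torch.zeros(len(u), 16).index_add_(0, receivers, msgs)  # sum incoming messages
    return u + node_mlp(torch.cat([u.unsqueeze(1), agg], dim=1)).squeeze(1)

u_new = mp_step(torch.randn(64), torch.linspace(0.0, 1.0, 64))
```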
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game in which both players are parameterized by neural networks (NNs), and we learn the parameters of these neural networks using gradient descent.
For the first time, we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Two-Layer Neural Networks for Partial Differential Equations: Optimization and Generalization Theory [4.243322291023028]
We show that the gradient descent method can identify a global minimizer of the least-squares optimization for solving second-order linear PDEs.
We also analyze the generalization error of the least-squares optimization for second-order linear PDEs and two-layer neural networks.
arXiv Detail & Related papers (2020-06-28T22:24:51Z)