Solving parametric partial differential equations with deep rectified
quadratic unit neural networks
- URL: http://arxiv.org/abs/2203.06973v1
- Date: Mon, 14 Mar 2022 10:15:29 GMT
- Title: Solving parametric partial differential equations with deep rectified
quadratic unit neural networks
- Authors: Zhen Lei, Lei Shi, Chenyu Zeng
- Abstract summary: In this study, we investigate the expressive power of deep rectified quadratic unit (ReQU) neural networks for approximating the solution maps of parametric PDEs.
We derive an upper bound $\mathcal{O}\left(d^3\log_{2}^{q}\log_{2}(1/\epsilon)\right)$ on the size of the deep ReQU neural network required to achieve accuracy $\epsilon>0$.
- Score: 38.16617079681564
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Implementing deep neural networks for learning the solution maps of
parametric partial differential equations (PDEs) turns out to be more efficient
than using many conventional numerical methods. However, limited theoretical
analyses have been conducted on this approach. In this study, we investigate
the expressive power of deep rectified quadratic unit (ReQU) neural networks
for approximating the solution maps of parametric PDEs. The proposed approach
is motivated by the recent important work of G. Kutyniok, P. Petersen, M.
Raslan and R. Schneider (Gitta Kutyniok, Philipp Petersen, Mones Raslan, and
Reinhold Schneider. A theoretical analysis of deep neural networks and
parametric pdes. Constructive Approximation, pages 1-53, 2021), which uses deep
rectified linear unit (ReLU) neural networks for solving parametric PDEs. In
contrast to the previously established complexity bound
$\mathcal{O}\left(d^3\log_{2}^{q}(1/ \epsilon) \right)$ for ReLU neural
networks, we derive an upper bound $\mathcal{O}\left(d^3\log_{2}^{q}\log_{2}(1/
\epsilon) \right)$ on the size of the deep ReQU neural network required to
achieve accuracy $\epsilon>0$, where $d$ is the dimension of the reduced
basis representing the solutions. Our method takes full advantage of the inherent
low-dimensionality of the solution manifolds and better approximation
performance of deep ReQU neural networks. Numerical experiments are performed
to verify our theoretical result.
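To make the object of study concrete, the sketch below implements a deep ReQU network, i.e. a fully connected network with the rectified quadratic activation $\sigma(x)=\max(0,x)^2$, mapping a PDE parameter $y\in\mathbb{R}^p$ to the $d$ coefficients of the solution in a fixed reduced basis. This is a minimal illustration of the network class the bound refers to; the widths, depth, and class names are assumptions, not the paper's construction.

```python
import torch
import torch.nn as nn

class ReQU(nn.Module):
    """Rectified quadratic unit: sigma(x) = max(0, x)^2."""
    def forward(self, x):
        return torch.relu(x) ** 2

class ReQUSolutionMap(nn.Module):
    """Illustrative ReQU network from a PDE parameter y in R^p to the
    d coefficients of the solution in a fixed reduced basis."""
    def __init__(self, p, d, width=64, depth=4):
        super().__init__()
        layers = [nn.Linear(p, width), ReQU()]
        for _ in range(depth - 2):
            layers += [nn.Linear(width, width), ReQU()]
        layers.append(nn.Linear(width, d))
        self.net = nn.Sequential(*layers)

    def forward(self, y):
        return self.net(y)

# Example: 10-dimensional parameter, d = 20 reduced-basis coefficients.
model = ReQUSolutionMap(p=10, d=20)
coeffs = model(torch.randn(32, 10))  # batch of 32 parameter samples
```

Since $\sigma(x)+\sigma(-x)=x^2$, ReQU units represent squares (and hence products) exactly, whereas ReLU networks need depth growing like $\log_2(1/\epsilon)$ to approximate multiplication; this is the standard intuition behind the improved $\log_2\log_2(1/\epsilon)$ dependence.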
Related papers
- Solving Poisson Equations using Neural Walk-on-Spheres [80.1675792181381]
We propose Neural Walk-on-Spheres (NWoS), a novel neural PDE solver for the efficient solution of high-dimensional Poisson equations.
We demonstrate the superiority of NWoS in accuracy, speed, and computational costs.
arXiv Detail & Related papers (2024-06-05T17:59:22Z)
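NWoS itself is a neural solver, but it builds on the classical walk-on-spheres estimator, which the toy sketch below implements for the Laplace special case (zero source) of the Poisson equation. The function names and the unit-disk test are illustrative assumptions, not the paper's method.

```python
import numpy as np

def walk_on_spheres(x0, g, dist_to_boundary, eps=1e-4, rng=None):
    """One classical 2-D walk-on-spheres sample of the harmonic function with
    Dirichlet boundary data g (the zero-source case of the Poisson equation).
    dist_to_boundary(x) must return the distance from x to the boundary."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    while True:
        r = dist_to_boundary(x)
        if r < eps:                            # near the boundary: stop the walk
            return g(x)
        theta = rng.uniform(0.0, 2.0 * np.pi)  # uniform point on the circle
        x = x + r * np.array([np.cos(theta), np.sin(theta)])

# Unit disk with boundary data g(x, y) = x; the harmonic extension is u(x, y) = x.
dist = lambda x: 1.0 - np.linalg.norm(x)
g = lambda x: x[0]
estimates = [walk_on_spheres([0.3, 0.2], g, dist) for _ in range(2000)]
print(np.mean(estimates))  # approximately 0.3
```

Averaging such walks gives an unbiased pointwise estimate by the mean-value property; a neural variant can then, for example, fit a network to these estimates.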
- Global Convergence of Deep Galerkin and PINNs Methods for Solving Partial Differential Equations [0.0]
A variety of deep learning methods have been developed to try and solve high-dimensional PDEs by approximating the solution using a neural network.
We prove global convergence for one of the commonly-used deep learning algorithms for solving PDEs, the Deep Galerkin Method (DGM).
arXiv Detail & Related papers (2023-05-10T09:20:11Z)
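The following is a minimal sketch of the least-squares residual training that the Deep Galerkin Method family is built on, applied to a toy 1-D Poisson problem $-u''=f$ on $(0,1)$ with zero boundary values; the architecture and hyperparameters are illustrative assumptions, not the configuration analyzed in the paper.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(1, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
f = lambda x: (torch.pi ** 2) * torch.sin(torch.pi * x)  # exact solution: sin(pi x)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for step in range(2000):
    x = torch.rand(256, 1, requires_grad=True)        # random collocation points
    u = net(x)
    du = torch.autograd.grad(u.sum(), x, create_graph=True)[0]
    d2u = torch.autograd.grad(du.sum(), x, create_graph=True)[0]
    interior = (-d2u - f(x)).pow(2).mean()            # mean-squared PDE residual
    boundary = net(torch.tensor([[0.0], [1.0]])).pow(2).mean()  # u(0) = u(1) = 0
    loss = interior + boundary
    opt.zero_grad()
    loss.backward()
    opt.step()
```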
- Understanding Deep Neural Function Approximation in Reinforcement Learning via $\epsilon$-Greedy Exploration [53.90873926758026]
This paper provides a theoretical study of deep neural function approximation in reinforcement learning (RL)
We focus on the value-based algorithm with $\epsilon$-greedy exploration via deep (and two-layer) neural networks endowed with Besov (and Barron) function spaces.
Our analysis reformulates the temporal difference error in an $L^2(\mathrm{d}\mu)$-integrable space over a certain averaged measure $\mu$, and transforms it into a generalization problem under the non-iid setting.
arXiv Detail & Related papers (2022-09-15T15:42:47Z)
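As a reminder of the exploration scheme the analysis concerns, here is standard $\epsilon$-greedy action selection with a small Q-network; the shapes and the two-layer architecture are illustrative, not the Besov/Barron function classes studied in the paper.

```python
import numpy as np
import torch
import torch.nn as nn

obs_dim, n_actions = 8, 4
q_net = nn.Sequential(nn.Linear(obs_dim, 128), nn.ReLU(),
                      nn.Linear(128, n_actions))  # two-layer Q-network
rng = np.random.default_rng(0)

def epsilon_greedy(state, epsilon):
    """With probability epsilon take a uniformly random action,
    otherwise act greedily with respect to the current Q-estimates."""
    if rng.random() < epsilon:
        return int(rng.integers(n_actions))
    with torch.no_grad():
        q = q_net(torch.as_tensor(state, dtype=torch.float32))
        return int(q.argmax())

action = epsilon_greedy(np.zeros(obs_dim), epsilon=0.1)
```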
- Neural Basis Functions for Accelerating Solutions to High Mach Euler Equations [63.8376359764052]
We propose an approach to solving partial differential equations (PDEs) using a set of neural networks.
We regress a set of neural networks onto a reduced order Proper Orthogonal Decomposition (POD) basis.
These networks are then used in combination with a branch network that ingests the parameters of the prescribed PDE to compute a reduced order approximation to the PDE.
arXiv Detail & Related papers (2022-08-02T18:27:13Z)
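The reduced-order POD basis that the networks are regressed onto can be computed generically from solution snapshots via the SVD; the sketch below shows that step only, with stand-in data, and is not the paper's pipeline.

```python
import numpy as np

def pod_basis(snapshots, d):
    """snapshots: (n_dof, n_snapshots) matrix whose columns are PDE solutions.
    Returns the d leading POD modes (as columns) and their singular values."""
    U, S, _ = np.linalg.svd(snapshots, full_matrices=False)
    return U[:, :d], S[:d]

A = np.random.rand(1000, 50)   # stand-in snapshot matrix
V, sing_vals = pod_basis(A, d=10)
u = A[:, 0]
coeffs = V.T @ u               # reduced-order coordinates of one solution
u_approx = V @ coeffs          # rank-d reconstruction
```

A parameter-to-solution network can then predict the low-dimensional `coeffs` instead of the full field, which is the usual motivation for pairing networks with a POD basis.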
- Nonlocal Kernel Network (NKN): a Stable and Resolution-Independent Deep Neural Network [23.465930256410722]
The nonlocal kernel network (NKN) is resolution independent and is characterized by deep neural networks.
NKN is capable of handling a variety of tasks such as learning governing equations and classifying images.
arXiv Detail & Related papers (2022-01-06T19:19:35Z)
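To illustrate why kernel-based layers can be resolution independent, here is a schematic discretized nonlocal integral layer in which a small MLP parameterizes a kernel $k(x,y)$ acting on the whole input field. This is a generic sketch of the idea, not the actual NKN architecture.

```python
import torch
import torch.nn as nn

class NonlocalKernelLayer(nn.Module):
    """Residual update u(x) <- u(x) + tanh( (1/n) * sum_y k(x, y) u(y) ),
    where k is a learned matrix-valued kernel on pairs of coordinates."""
    def __init__(self, channels, hidden=32):
        super().__init__()
        # k(x, y) is parameterized by a small MLP on the coordinate pair
        self.kernel = nn.Sequential(nn.Linear(2, hidden), nn.ReLU(),
                                    nn.Linear(hidden, channels * channels))
        self.channels = channels

    def forward(self, u, grid):
        # u: (n, c) field values; grid: (n, 1) coordinates in [0, 1]
        n, c = u.shape
        pairs = torch.cat([grid.repeat_interleave(n, 0),
                           grid.repeat(n, 1)], dim=1)     # all (x, y) pairs
        K = self.kernel(pairs).view(n, n, c, c)
        integral = torch.einsum('xyij,yj->xi', K, u) / n  # quadrature weight 1/n
        return u + torch.tanh(integral)

layer = NonlocalKernelLayer(channels=3)
u, grid = torch.randn(64, 3), torch.linspace(0, 1, 64).unsqueeze(1)
out = layer(u, grid)
```

Because the weights live in the kernel rather than on the grid, the same trained layer applies to fields sampled at any resolution $n$.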
- Overparameterization of deep ResNet: zero loss and mean-field analysis [19.45069138853531]
Finding parameters in a deep neural network (NN) that fit data is a non-convex optimization problem.
We show that a basic first-order optimization method (gradient descent) finds a global solution with perfect fit in many practical situations.
We give estimates of the depth and width needed to reduce the loss below a given threshold, with high probability.
arXiv Detail & Related papers (2021-05-30T02:46:09Z)
- Parametric Complexity Bounds for Approximating PDEs with Neural Networks [41.46028070204925]
We prove that when a PDE's coefficients are representable by small neural networks, the parameters required to approximate its solution scale polynomially with the input dimension $d$ and are proportional to the parameter counts of the coefficient neural networks.
Our proof is based on constructing a neural network that simulates gradient descent in an appropriate space, whose iterates converge to the solution of the PDE.
arXiv Detail & Related papers (2021-03-03T02:42:57Z)
- Fourier Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
We formulate a new neural operator by parameterizing the integral kernel directly in Fourier space.
We perform experiments on Burgers' equation, Darcy flow, and Navier-Stokes equation.
It is up to three orders of magnitude faster compared to traditional PDE solvers.
arXiv Detail & Related papers (2020-10-18T00:34:21Z)
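The core of the construction is a spectral convolution: transform the input field to Fourier space, apply a learned linear map to the lowest `modes` frequencies, and transform back. Below is a minimal 1-D sketch of that layer; the official implementation differs in details, and the shapes here are illustrative.

```python
import torch
import torch.nn as nn

class SpectralConv1d(nn.Module):
    """Learned linear map on the lowest Fourier modes of a 1-D field."""
    def __init__(self, channels, modes):
        super().__init__()
        self.modes = modes
        scale = 1.0 / (channels * channels)
        self.weights = nn.Parameter(
            scale * torch.randn(channels, channels, modes, dtype=torch.cfloat))

    def forward(self, u):
        # u: (batch, channels, n) real-valued field on a uniform grid
        u_ft = torch.fft.rfft(u)                 # (batch, channels, n//2 + 1)
        out_ft = torch.zeros_like(u_ft)
        out_ft[:, :, :self.modes] = torch.einsum(
            'bix,iox->box', u_ft[:, :, :self.modes], self.weights)
        return torch.fft.irfft(out_ft, n=u.size(-1))

layer = SpectralConv1d(channels=8, modes=16)
out = layer(torch.randn(4, 8, 128))  # the same weights apply at any grid size n
```

Parameterizing the kernel in Fourier space is what makes the operator act on functions rather than on a fixed discretization.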
- Two-Layer Neural Networks for Partial Differential Equations: Optimization and Generalization Theory [4.243322291023028]
We show that the gradient descent method can identify a global minimizer of the least-squares optimization for solving second-order linear PDEs.
We also analyze the generalization error of the least-squares optimization for second-order linear PDEs and two-layer neural networks.
arXiv Detail & Related papers (2020-06-28T22:24:51Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data in the desired structure.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)