Exponential Convergence of Deep Operator Networks for Elliptic Partial
Differential Equations
- URL: http://arxiv.org/abs/2112.08125v1
- Date: Wed, 15 Dec 2021 13:56:28 GMT
- Title: Exponential Convergence of Deep Operator Networks for Elliptic Partial
Differential Equations
- Authors: Carlo Marcati and Christoph Schwab
- Abstract summary: We construct deep operator networks (ONets) between infinite-dimensional spaces that emulate, with an exponential rate of convergence, the coefficient-to-solution map of elliptic second-order PDEs.
In particular, we consider problems set in $d$-dimensional periodic domains, $d=1, 2, \dots$, and with analytic right-hand sides and coefficients.
We prove that the neural networks in the ONet have size $\mathcal{O}(\left|\log(\varepsilon)\right|^\kappa)$ for some $\kappa>0$ depending on the physical space dimension.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We construct deep operator networks (ONets) between infinite-dimensional
spaces that emulate with an exponential rate of convergence the
coefficient-to-solution map of elliptic second-order PDEs. In particular, we
consider problems set in $d$-dimensional periodic domains, $d=1, 2, \dots$, and
with analytic right-hand sides and coefficients. Our analysis covers
diffusion-reaction problems, parametric diffusion equations, and elliptic
systems such as linear isotropic elastostatics in heterogeneous materials.
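For concreteness, a representative member of this problem class (a sketch of the standard diffusion-reaction form, stated here under assumptions; the paper's precise setting may differ in detail) is
$$ -\nabla \cdot \big( a(x) \nabla u(x) \big) + c(x)\, u(x) = f(x), \qquad x \in (0, 2\pi)^d \text{ (periodic)}, $$
with an analytic, uniformly positive diffusion coefficient $a$, an analytic reaction coefficient $c \geq 0$, and an analytic right-hand side $f$; the coefficient-to-solution map then sends this coefficient data to the solution $u$.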
We leverage the exponential convergence of spectral collocation methods for
boundary value problems whose solutions are analytic. In the present periodic
and analytic setting, this follows from classical elliptic regularity. Within
the ONet branch and trunk construction of [Chen and Chen, 1993] and of [Lu et
al., 2021], we show the existence of deep ONets which emulate the
coefficient-to-solution map to accuracy $\varepsilon>0$ in the $H^1$ norm,
uniformly over the coefficient set. We prove that the neural networks in the
ONet have size $\mathcal{O}(\left|\log(\varepsilon)\right|^\kappa)$ for some
$\kappa>0$ depending on the physical space dimension.
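To make the branch-trunk construction concrete, the following NumPy sketch evaluates an ONet of the form $u(x) \approx \sum_{k=1}^{p} b_k(a)\,\tau_k(x)$: a branch network maps $m$ sensor samples of the coefficient $a$ to coefficients $(b_k)$, and a trunk network maps a query point $x$ to basis values $(\tau_k(x))$. The sensor count, basis dimension, layer widths, and tanh activations are illustrative assumptions, and the weights are random; the paper proves the existence of specific networks of size $\mathcal{O}(\left|\log(\varepsilon)\right|^\kappa)$ emulating spectral collocation, which this untrained sketch does not reproduce.

```python
# Minimal branch-trunk ONet/DeepONet evaluation (cf. Chen and Chen, 1993;
# Lu et al., 2021). Sizes, activations, and weights are illustrative
# assumptions; the networks here are random and untrained.
import numpy as np

rng = np.random.default_rng(0)

def mlp_params(sizes):
    # Random (untrained) weights for a fully connected network.
    return [(rng.standard_normal((n_in, n_out)) / np.sqrt(n_in), np.zeros(n_out))
            for n_in, n_out in zip(sizes[:-1], sizes[1:])]

def mlp_apply(params, x):
    # Forward pass: tanh on hidden layers, linear output layer.
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

m, p = 64, 32                              # sensor points, basis dimension
branch = mlp_params([m, 128, 128, p])      # coefficient samples -> b_k(a)
trunk = mlp_params([1, 128, 128, p])       # query point x -> tau_k(x)

# Periodic sensor grid on [0, 2*pi) and a hypothetical analytic coefficient.
sensors = np.linspace(0.0, 2.0 * np.pi, m, endpoint=False)
a_vals = 2.0 + np.sin(sensors)

def onet(a_vals, x):
    # ONet evaluation: u(x) ~ sum_k b_k(a) * tau_k(x).
    b = mlp_apply(branch, a_vals)           # shape (p,)
    t = mlp_apply(trunk, x.reshape(-1, 1))  # shape (len(x), p)
    return t @ b                            # shape (len(x),)

x_query = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
u_approx = onet(a_vals, x_query)            # placeholder (untrained) output
print(u_approx.shape)                       # (200,)
```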
Related papers
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space for modeling functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Operator learning for hyperbolic partial differential equations [9.434110429069385]
We construct the first rigorously justified probabilistic algorithm for recovering the solution operator of a hyperbolic partial differential equation (PDE).
The primary challenge in recovering the solution operator of hyperbolic PDEs is the presence of characteristics, along which the associated Green's function is discontinuous.
Our assumptions on the regularity of the coefficients of the hyperbolic PDE are relatively weak, given that hyperbolic PDEs do not have the "instantaneous smoothing effect" of elliptic and parabolic PDEs.
arXiv Detail & Related papers (2023-12-29T06:41:50Z)
- Correspondence between open bosonic systems and stochastic differential equations [77.34726150561087]
We show that there can also be an exact correspondence at finite $n$ when the bosonic system is generalized to include interactions with the environment.
A particular system with the form of a discrete nonlinear Schrödinger equation is analyzed in more detail.
arXiv Detail & Related papers (2023-02-03T19:17:37Z)
- Conditions for realizing one-point interactions from a multi-layer structure model [77.34726150561087]
A heterostructure composed of $N$ parallel homogeneous layers is studied in the limit as their widths shrink to zero.
The problem is investigated in one dimension, and the piecewise constant potential in the Schrödinger equation is given.
arXiv Detail & Related papers (2021-12-15T22:30:39Z)
- Mean-Square Analysis with An Application to Optimal Dimension Dependence of Langevin Monte Carlo [60.785586069299356]
This work provides a general framework for the non-asymptotic analysis of sampling error in the 2-Wasserstein distance.
Our theoretical analysis is further validated by numerical experiments.
arXiv Detail & Related papers (2021-09-08T18:00:05Z)
- Deep neural network approximation of analytic functions [91.3755431537592]
We provide an entropy bound for the spaces of neural networks with piecewise linear activation functions.
We derive an oracle inequality for the expected error of the considered penalized deep neural network estimators.
arXiv Detail & Related papers (2021-04-05T18:02:04Z)
- A Priori Generalization Analysis of the Deep Ritz Method for Solving High Dimensional Elliptic Equations [11.974322921837384]
We derive generalization error bounds for two-layer neural networks in the framework of the Deep Ritz Method (DRM).
We prove that the convergence rates of generalization errors are independent of the dimension $d$.
We develop a new solution theory for the PDEs on the spectral Barron space.
arXiv Detail & Related papers (2021-01-05T18:50:59Z)
- Exponential ReLU Neural Network Approximation Rates for Point and Edge Singularities [0.0]
We prove exponential expressivity with stable ReLU Neural Networks (ReLU NNs) in $H^1(\Omega)$ for weighted analytic function classes in certain polytopal domains.
The exponential approximation rates are shown to hold in space dimension $d = 2$ on Lipschitz polygons with straight sides, and in space dimension $d=3$ on Fichera-type polyhedral domains with plane faces.
arXiv Detail & Related papers (2020-10-23T07:44:32Z)
- Convex Geometry and Duality of Over-parameterized Neural Networks [70.15611146583068]
We develop a convex analytic approach to analyze finite width two-layer ReLU networks.
We show that an optimal solution to the regularized training problem can be characterized as extreme points of a convex set.
In higher dimensions, we show that the training problem can be cast as a finite dimensional convex problem with infinitely many constraints.
arXiv Detail & Related papers (2020-02-25T23:05:33Z)