Deep Operator Learning Lessens the Curse of Dimensionality for PDEs
- URL: http://arxiv.org/abs/2301.12227v3
- Date: Tue, 3 Oct 2023 14:58:20 GMT
- Title: Deep Operator Learning Lessens the Curse of Dimensionality for PDEs
- Authors: Ke Chen, Chunmei Wang and Haizhao Yang
- Abstract summary: This paper provides an estimate for the generalization error of learning Lipschitz operators over Banach spaces using DNNs with applications to various PDE solution operators.
Under mild assumptions on data distributions or operator structures, our analysis shows that deep operator learning can have a relaxed dependence on the discretization resolution of PDEs.
- Score: 11.181533339111853
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have achieved remarkable success in numerous
domains, and their application to PDE-related problems has been rapidly
advancing. This paper provides an estimate for the generalization error of
learning Lipschitz operators over Banach spaces using DNNs with applications to
various PDE solution operators. The goal is to specify DNN width, depth, and
the number of training samples needed to guarantee a certain testing error.
Under mild assumptions on data distributions or operator structures, our
analysis shows that deep operator learning can have a relaxed dependence on the
discretization resolution of PDEs and, hence, lessen the curse of
dimensionality in many PDE-related problems including elliptic equations,
parabolic equations, and Burgers equations. Our results are also applied to
give insights about discretization-invariance in operator learning.
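The setting of the paper — learning a PDE solution operator from input–output pairs of discretized functions — can be illustrated with a toy sketch (not from the paper). Here the solution operator of the 1D Poisson problem -u'' = f with zero boundary values is learned from sampled pairs (f, u) on s grid points, with a linear least-squares fit standing in for the DNN; the resolution s plays the role of the discretization the paper's bounds depend on.

```python
# Toy sketch (illustrative, not the paper's method): learn the solution
# operator f -> u of the 1D Poisson problem from discretized samples.
import numpy as np

rng = np.random.default_rng(0)
s = 32                                  # discretization resolution
h = 1.0 / (s + 1)

# Exact finite-difference solver, used only to generate training data.
A = (np.diag(2.0 * np.ones(s)) - np.diag(np.ones(s - 1), 1)
     - np.diag(np.ones(s - 1), -1)) / h**2
solve = np.linalg.inv(A)                # u = A^{-1} f on the grid

n_train = 200
F = rng.standard_normal((n_train, s))   # random right-hand sides f
U = F @ solve.T                         # corresponding solutions u

# "Operator learning": fit a map G from (f, u) pairs by least squares.
G, *_ = np.linalg.lstsq(F, U, rcond=None)

# Evaluate on an unseen right-hand side.
f_test = rng.standard_normal(s)
u_pred = f_test @ G
u_true = solve @ f_test
rel_err = np.linalg.norm(u_pred - u_true) / np.linalg.norm(u_true)
print(rel_err)
```

Because the true operator here is linear, the fitted map recovers it essentially exactly; a DNN would be needed for the nonlinear Lipschitz operators the paper actually analyzes.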
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z) - DeepONet for Solving Nonlinear Partial Differential Equations with Physics-Informed Training [2.44755919161855]
We investigate the use of operator learning, specifically DeepONet, for solving nonlinear partial differential equations (PDEs).
This study examines the performance of DeepONet in physics-informed training, focusing on two key aspects: (1) the approximation capabilities of deep branch and trunk networks, and (2) the generalization error in Sobolev norms.
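The branch/trunk structure mentioned above can be sketched as follows (a hedged, untrained toy with random weights and illustrative sizes, not the paper's code): a branch net encodes the input function at fixed sensor points, a trunk net encodes a query location, and their inner product gives the predicted operator value G(u)(y).

```python
# Minimal DeepONet-style forward pass (random weights, illustrative only).
import numpy as np

rng = np.random.default_rng(1)

def mlp(params, x):
    """Tiny fully connected net with tanh hidden activations."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b

def init(sizes, rng):
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

m, p = 50, 20                         # number of sensors, latent width
branch = init([m, 64, p], rng)        # branch net: sensor values -> R^p
trunk = init([1, 64, p], rng)         # trunk net: query point -> R^p

def deeponet(u_sensors, y):
    b = mlp(branch, u_sensors)        # shape (p,)
    t = mlp(trunk, np.atleast_1d(y))  # shape (p,)
    return float(b @ t)               # G(u)(y) as an inner product

x_sensors = np.linspace(0.0, 1.0, m)
u = np.sin(2 * np.pi * x_sensors)     # an example input function
value = deeponet(u, 0.3)
print(value)
```

In physics-informed training, the loss on this output would include the PDE residual rather than (or in addition to) supervised data.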
arXiv Detail & Related papers (2024-10-06T03:43:56Z) - Physics-informed Discretization-independent Deep Compositional Operator Network [1.2430809884830318]
We introduce a novel physics-informed model architecture which can generalize to various discrete representations of PDE parameters and irregular domain shapes.
Inspired by deep operator networks, our model repeatedly learns a discretization-independent embedding of the PDE parameters.
Numerical results demonstrate the accuracy and efficiency of the proposed method.
arXiv Detail & Related papers (2024-04-21T12:41:30Z) - Pretraining Codomain Attention Neural Operators for Solving Multiphysics PDEs [85.40198664108624]
We propose Codomain Attention Neural Operator (CoDA-NO) to solve multiphysics problems with PDEs.
CoDA-NO tokenizes functions along the codomain or channel space, enabling self-supervised learning or pretraining of multiple PDE systems.
We find CoDA-NO to outperform existing methods by over 36% on complex downstream tasks with limited data.
arXiv Detail & Related papers (2024-03-19T08:56:20Z) - Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
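The deep-equilibrium idea in this blurb can be sketched in a few lines (a hedged toy, not the FNO-DEQ code): a single weight-tied layer f is iterated until it reaches a fixed point z* = f(z*, x), which plays the role of the steady-state solution. All names and sizes here are illustrative.

```python
# Weight-tied fixed-point iteration, the core of a deep-equilibrium model.
import numpy as np

rng = np.random.default_rng(3)
W = rng.standard_normal((16, 16)) * 0.1  # small weights -> contraction
U = rng.standard_normal((16, 16)) * 0.1

def layer(z, x):
    # one weight-tied layer, applied repeatedly instead of stacking depth
    return np.tanh(z @ W + x @ U)

def deq_forward(x, tol=1e-10, max_iter=500):
    z = np.zeros_like(x)
    for _ in range(max_iter):
        z_next = layer(z, x)
        if np.linalg.norm(z_next - z) < tol:
            break
        z = z_next
    return z

x = rng.standard_normal(16)
z_star = deq_forward(x)
print(np.linalg.norm(layer(z_star, x) - z_star))  # fixed-point residual
```

Real DEQ models solve the fixed point with faster root-finders and differentiate through it implicitly; plain iteration is enough to show the idea.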
arXiv Detail & Related papers (2023-11-30T22:34:57Z) - A physics-informed neural network framework for modeling obstacle-related equations [3.687313790402688]
Physics-informed neural networks (PINNs) are an attractive tool for solving partial differential equations based on sparse and noisy data.
Here we extend PINNs to solve obstacle-related PDEs, which pose a significant computational challenge.
The performance of the proposed PINNs is demonstrated in multiple scenarios for linear and nonlinear PDEs subject to regular and irregular obstacles.
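The PINN loss referred to above — PDE residual at collocation points plus boundary mismatch — can be sketched on a toy problem (a hedged illustration, not this paper's implementation). The "network" is a random-feature model for the ODE u' = cos(x) with u(0) = 0, and central finite differences stand in for automatic differentiation.

```python
# Toy PINN loss: residual at collocation points + boundary-condition penalty.
import numpy as np

rng = np.random.default_rng(2)
w = rng.standard_normal(40)           # fixed random input weights
b = rng.standard_normal(40)           # fixed random biases

def features(x):
    # random tanh features; only the output weights theta are trained
    return np.tanh(np.outer(x, w) + b)

def net(theta, x):
    return features(x) @ theta

def pinn_loss(theta, x, eps=1e-4):
    # residual of u' - cos(x) via central differences (autodiff stand-in),
    # plus the boundary condition u(0) = 0
    du = (net(theta, x + eps) - net(theta, x - eps)) / (2 * eps)
    residual = du - np.cos(x)
    boundary = net(theta, np.array([0.0]))[0]
    return np.mean(residual**2) + boundary**2

x_col = np.linspace(0.0, np.pi, 60)   # collocation points
theta0 = rng.standard_normal(40) * 0.1

# The model is linear in theta, so a least-squares solve stands in for
# gradient-based PINN training on this quadratic objective.
eps = 1e-4
dPhi = (features(x_col + eps) - features(x_col - eps)) / (2 * eps)
A = np.vstack([dPhi, features(np.array([0.0]))])
rhs = np.concatenate([np.cos(x_col), [0.0]])
theta_fit, *_ = np.linalg.lstsq(A, rhs, rcond=None)

print(pinn_loss(theta0, x_col), pinn_loss(theta_fit, x_col))
```

Obstacle problems add inequality constraints (u above the obstacle) to this loss, which is what makes them computationally harder than the plain case shown here.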
arXiv Detail & Related papers (2023-04-07T09:22:28Z) - Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM achieves consistent state-of-the-art and yields a relative gain of 11.5% averaged on seven benchmarks.
arXiv Detail & Related papers (2023-01-30T04:58:40Z) - Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method that partially alleviates the heavy data requirements of neural PDE solvers by improving their sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
arXiv Detail & Related papers (2022-02-15T18:43:17Z) - Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z) - Error analysis for physics informed neural networks (PINNs) approximating Kolmogorov PDEs [0.0]
We derive rigorous bounds on the error incurred by PINNs in approximating the solutions of a large class of parabolic PDEs.
We construct neural networks, whose PINN residual (generalization error) can be made as small as desired.
These results enable us to provide a comprehensive error analysis for PINNs in approximating Kolmogorov PDEs.
arXiv Detail & Related papers (2021-06-28T08:37:56Z) - Adversarial Multi-task Learning Enhanced Physics-informed Neural Networks for Solving Partial Differential Equations [9.823102211212582]
We introduce multi-task learning techniques, namely an uncertainty-weighted loss and gradient surgery, in the context of learning PDE solutions.
In the experiments, our proposed methods are found to be effective and reduce the error on the unseen data points as compared to the previous approaches.
arXiv Detail & Related papers (2021-04-29T13:17:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.