Nonlinear Reconstruction for Operator Learning of PDEs with
Discontinuities
- URL: http://arxiv.org/abs/2210.01074v1
- Date: Mon, 3 Oct 2022 16:47:56 GMT
- Title: Nonlinear Reconstruction for Operator Learning of PDEs with
Discontinuities
- Authors: Samuel Lanthaler and Roberto Molinaro and Patrik Hadorn and Siddhartha
Mishra
- Abstract summary: A large class of hyperbolic and advection-dominated PDEs can have solutions with discontinuities.
We rigorously prove, in terms of lower approximation bounds, that methods which entail a linear reconstruction step fail to efficiently approximate the solution operator of such PDEs.
We show that certain methods employing a non-linear reconstruction mechanism can overcome these fundamental lower bounds and approximate the underlying operator efficiently.
- Score: 5.735035463793008
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A large class of hyperbolic and advection-dominated PDEs can have solutions
with discontinuities. This paper investigates, both theoretically and
empirically, the operator learning of PDEs with discontinuous solutions. We
rigorously prove, in terms of lower approximation bounds, that methods which
entail a linear reconstruction step (e.g. DeepONet or PCA-Net) fail to
efficiently approximate the solution operator of such PDEs. In contrast, we
show that certain methods employing a non-linear reconstruction mechanism can
overcome these fundamental lower bounds and approximate the underlying operator
efficiently. The latter class includes Fourier Neural Operators and a novel
extension of DeepONet termed shift-DeepONet. Our theoretical findings are
confirmed by empirical results for the advection equation, the inviscid Burgers'
equation, and the compressible Euler equations of aerodynamics.
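To make the dichotomy concrete, here is a minimal NumPy sketch (the names, shapes, and stand-in "nets" are ours for illustration, not the authors' code). The DeepONet-style model reconstructs its output inside the fixed linear span of trunk basis functions, while the shift-DeepONet-style model additionally predicts an input-dependent shift of the query points, a nonlinear reconstruction that lets the same basis track a discontinuity whose location varies with the input:

```python
import numpy as np

rng = np.random.default_rng(0)

def trunk(y, p=8):
    """Fixed trunk basis evaluated at query points y; shape (len(y), p)."""
    return np.stack([np.sin((k + 1) * np.pi * y) for k in range(p)], axis=-1)

def deeponet(u_sensors, y, W):
    """Linear reconstruction: output = sum_k beta_k(u) * tau_k(y),
    where the basis tau_k does not depend on the input u."""
    beta = W @ u_sensors              # stand-in for the branch net, shape (p,)
    return trunk(y) @ beta            # fixed linear span -> smears shocks

def shift_deeponet(u_sensors, y, W, v):
    """Nonlinear reconstruction: the query points are shifted by an
    input-dependent amount before the basis is evaluated."""
    beta = W @ u_sensors
    shift = np.tanh(v @ u_sensors)    # stand-in for the shift net (a scalar)
    return trunk(y - shift) @ beta    # basis now depends on u

u = rng.normal(size=16)               # sensor values of the input function
y = np.linspace(0.0, 1.0, 101)        # query points
W = rng.normal(size=(8, 16)) / 4
v = rng.normal(size=16) / 4
print(deeponet(u, y, W).shape, shift_deeponet(u, y, W, v).shape)
```

The paper's lower bounds quantify why the first variant struggles: for every input it can only superpose the same p basis functions, so resolving a discontinuity whose location depends on the input forces p to grow rapidly, whereas the input-dependent shift sidesteps this with a single extra scalar.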
Related papers
- Quantitative Approximation for Neural Operators in Nonlinear Parabolic Equations [0.40964539027092917]
We derive approximation rates of solution operators for nonlinear parabolic partial differential equations (PDEs).
Our results show that neural operators can efficiently approximate these solution operators without the exponential growth in model complexity.
A key insight in our proof is to transform the PDEs into the corresponding integral equations via Duhamel's principle, and to leverage the similarity between neural operators and Picard's iteration.
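To see the construction in a model case (our illustration; the paper treats a general nonlinear parabolic class), consider $u_t = \Delta u + f(u)$ with $u(0) = u_0$. Duhamel's principle recasts the PDE as the integral equation
$$u(t) = e^{t\Delta} u_0 + \int_0^t e^{(t-s)\Delta} f(u(s)) \, ds,$$
and the associated Picard iteration
$$u^{(k+1)}(t) = e^{t\Delta} u_0 + \int_0^t e^{(t-s)\Delta} f(u^{(k)}(s)) \, ds$$
mirrors a neural operator layer-for-layer: each step applies a linear integral operator followed by a pointwise nonlinearity, which is why the approximation can avoid exponential growth in model complexity.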
arXiv Detail & Related papers (2024-10-03T02:28:17Z)
- Structure-preserving learning for multi-symplectic PDEs [8.540823673172403]
This paper presents an energy-preserving machine learning method for inferring reduced-order models (ROMs) by exploiting the multi-symplectic form of partial differential equations (PDEs).
We prove that the proposed method satisfies spatially discrete local energy conservation and preserves the multi-symplectic conservation laws.
arXiv Detail & Related papers (2024-09-16T16:07:21Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
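A minimal sketch of the weight-tied equilibrium idea (an illustrative stand-in, not the paper's FNO-DEQ architecture): a single Fourier-style layer with fixed weights is iterated until convergence, so the output is defined implicitly by the fixed point z* = layer(z*, f) rather than by a stack of distinct layers.

```python
import numpy as np

def spectral_layer(z, f, w_hat, a=0.5):
    """One weight-tied Fourier-style layer: scale modes in frequency
    space, add the PDE forcing f, apply a pointwise nonlinearity."""
    z_hat = np.fft.rfft(z) * w_hat            # learned per-mode multipliers
    return np.tanh(np.fft.irfft(z_hat, n=len(z)) + a * f)

def deq_solve(f, w_hat, tol=1e-10, max_iter=500):
    """Deep-equilibrium-style solve: iterate the same layer to a fixed point."""
    z = np.zeros_like(f)
    for _ in range(max_iter):
        z_new = spectral_layer(z, f, w_hat)
        if np.linalg.norm(z_new - z) < tol:
            break
        z = z_new
    return z

f = np.sin(2 * np.pi * np.linspace(0, 1, 64, endpoint=False))
w_hat = 0.5 / (1.0 + np.arange(33))           # decaying weights -> contraction
print(deq_solve(f, w_hat)[:4])
```

Because the mode weights here are bounded by 0.5 and tanh is 1-Lipschitz, the layer is a contraction and the iteration provably converges; DEQ-style models typically replace the naive loop with a root-finding solver and differentiate through the fixed point implicitly.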
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
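The stabilizing effect of such interpolation can be illustrated with a classical Krasnosel'skii-Mann step from nonexpansive operator theory (a generic sketch of the mechanism, not the paper's algorithm):

```python
import numpy as np

def km_iterate(T, x0, lam=0.5, steps=200):
    """Krasnosel'skii-Mann iteration: x <- (1 - lam) * x + lam * T(x).
    For a nonexpansive T with a fixed point, this averaged update
    converges even when the plain iteration x <- T(x) cycles."""
    x = x0
    for _ in range(steps):
        x = (1 - lam) * x + lam * T(x)
    return x

# 90-degree rotation: nonexpansive, but plain iteration orbits forever
R = np.array([[0.0, -1.0], [1.0, 0.0]])
T = lambda x: R @ x
print(km_iterate(T, np.array([1.0, 0.0])))   # -> approx [0, 0], the fixed point
```

Linearly interpolating toward the update damps the rotational (nonmonotone) component of the dynamics, which is the intuition the paper makes rigorous for neural network training.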
arXiv Detail & Related papers (2023-10-20T12:45:12Z)
- Deep Learning of Delay-Compensated Backstepping for Reaction-Diffusion PDEs [2.2869182375774613]
In the control of PDE systems, multiple operators arise from distinct PDE classes.
The DeepONet-approximated nonlinear operator is a cascade/composition of the operators defined by one hyperbolic PDE of Goursat form and one parabolic PDE on a rectangle.
For the delay-compensated PDE backstepping controller, we guarantee exponential stability in the $L^2$ norm of the plant state and the $H^1$ norm of the input delay state.
arXiv Detail & Related papers (2023-08-21T06:42:33Z)
- Learning Discretized Neural Networks under Ricci Flow [51.36292559262042]
We study Discretized Neural Networks (DNNs) composed of low-precision weights and activations.
During training, DNNs suffer from either infinite or zero gradients because the discretization function is non-differentiable.
arXiv Detail & Related papers (2023-02-07T10:51:53Z)
- Solving High-Dimensional PDEs with Latent Spectral Models [74.1011309005488]
We present Latent Spectral Models (LSM) toward an efficient and precise solver for high-dimensional PDEs.
Inspired by classical spectral methods in numerical analysis, we design a neural spectral block to solve PDEs in the latent space.
LSM consistently achieves state-of-the-art results, yielding an average relative gain of 11.5% across seven benchmarks.
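A spectral block of this kind can be sketched in a few lines (our schematic reading of the idea, not the paper's implementation): expand the latent features in a fixed, approximately orthogonal basis, act mode-wise with learned weights, and map back.

```python
import numpy as np

def neural_spectral_block(z, basis, w):
    """Schematic spectral block: analysis onto a fixed basis, learned
    per-mode multipliers, then synthesis back to the latent grid."""
    coeffs = basis.T @ z         # latent signal -> spectral coefficients
    return basis @ (w * coeffs)  # weighted modes -> back to latent space

n, m = 64, 16
x = np.linspace(0.0, 1.0, n, endpoint=False)
basis = np.stack([np.sin((k + 1) * np.pi * x) for k in range(m)], axis=1)
basis /= np.linalg.norm(basis, axis=0)   # normalize columns (approx. orthogonal here)
w = 1.0 / (1.0 + np.arange(m))           # stand-in for learned weights
z = np.sin(3 * np.pi * x) + 0.1 * np.sin(9 * np.pi * x)
print(neural_spectral_block(z, basis, w)[:4])
```

Working in a low-dimensional latent space keeps the number of modes m small, which is what makes such a block tractable for high-dimensional PDEs.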
arXiv Detail & Related papers (2023-01-30T04:58:40Z)
- Koopman neural operator as a mesh-free solver of non-linear partial differential equations [15.410070455154138]
We propose the Koopman neural operator (KNO), a new neural operator, to overcome these challenges.
By approximating the Koopman operator, an infinite-dimensional operator governing all possible observations of the dynamic system, we can equivalently learn the solution of a non-linear PDE family.
The KNO exhibits notable advantages compared with previous state-of-the-art models.
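The core mechanism can be sketched with a DMD-style least-squares fit in a lifted observable space (an illustrative toy, not the KNO architecture; in KNO the lifting is a learned encoder):

```python
import numpy as np

def fit_koopman(X, Y, lift):
    """Least-squares finite-dimensional Koopman approximation K with
    lift(Y) ~ K @ lift(X), where X, Y hold states at consecutive times."""
    GX, GY = lift(X), lift(Y)
    return GY @ np.linalg.pinv(GX)

def step(K, x, lift, unlift):
    """Advance the nonlinear system one step *linearly* in observable space."""
    return unlift(K @ lift(x))

# toy: lift a scalar state to the monomial observables [x, x^2, x^3]
lift = lambda x: np.vstack([x, x**2, x**3])
unlift = lambda g: g[0]

x = np.linspace(-0.9, 0.9, 50)        # snapshot states
y = 0.8 * x + 0.1 * x**2              # one step of a nonlinear map
K = fit_koopman(x, y, lift)
print(step(K, np.array([0.5]), lift, unlift))   # approx 0.8*0.5 + 0.1*0.25 = 0.425
```

Once the dynamics are approximately linear in the lifted space, long-horizon prediction reduces to repeated multiplication by K, which is the property the KNO exploits.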
arXiv Detail & Related papers (2023-01-24T14:10:15Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
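Schematically, the hybrid objective combines a data misfit on (possibly coarse) training pairs with a PDE residual evaluated on a finer grid; the sketch below (our illustration with a 1D periodic Poisson residual and a placeholder `model`) shows the shape of such a loss, not the PINO code:

```python
import numpy as np

def pino_loss(model, a_coarse, u_coarse, a_fine, dx, w_pde=1.0):
    """Hybrid loss: supervised data term + physics (PDE residual) term,
    evaluated at different resolutions."""
    data_loss = np.mean((model(a_coarse) - u_coarse) ** 2)

    u = model(a_fine)                               # prediction on the fine grid
    u_xx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    pde_loss = np.mean((-u_xx - a_fine) ** 2)       # residual of -u'' = a

    return data_loss + w_pde * pde_loss

# toy usage with an identity placeholder standing in for a neural operator
n_c, n_f = 32, 128
model = lambda a: a
a_c = np.sin(2 * np.pi * np.linspace(0, 1, n_c, endpoint=False))
a_f = np.sin(2 * np.pi * np.linspace(0, 1, n_f, endpoint=False))
print(pino_loss(model, a_c, a_c, a_f, dx=1.0 / n_f))
```

The PDE term needs no labeled solutions, which is what allows the physics constraint to be imposed at resolutions where paired data are unavailable.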
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- Solving and Learning Nonlinear PDEs with Gaussian Processes [11.09729362243947]
We introduce a simple, rigorous, and unified framework for solving nonlinear partial differential equations.
The proposed approach provides a natural generalization of collocation kernel methods to nonlinear PDEs and inverse problems (IPs).
For IPs, while the traditional approach has been to iterate between the identifications of parameters in the PDE and the numerical approximation of its solution, our algorithm tackles both simultaneously.
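A compact sketch of the kernel-collocation idea on a manufactured nonlinear two-point boundary value problem (our toy, not the paper's algorithm; the paper also handles the simultaneous inverse-problem setting):

```python
import numpy as np

# Gaussian kernel and its second derivative in the first argument
k   = lambda x, y, l=0.2: np.exp(-(x[:, None] - y[None, :]) ** 2 / (2 * l**2))
kxx = lambda x, y, l=0.2: ((x[:, None] - y[None, :]) ** 2 / l**4 - 1 / l**2) * k(x, y, l)

# nonlinear BVP u'' = u^3 + f on (0,1), u(0) = u(1) = 0, with f
# manufactured so that u*(x) = sin(pi x) is the exact solution
u_star = lambda x: np.sin(np.pi * x)
f = lambda x: -np.pi**2 * u_star(x) - u_star(x) ** 3

X = np.linspace(0.0, 1.0, 15)              # collocation points
Xi, Xb = X[1:-1], X[[0, -1]]               # interior / boundary points
A = np.vstack([kxx(Xi, X), k(Xb, X)])      # [u'' at interior; u at boundary]

u_i = np.zeros(len(Xi))
for _ in range(30):                        # Picard iteration on the nonlinearity
    rhs = np.concatenate([u_i ** 3 + f(Xi), np.zeros(2)])
    alpha, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    u_i = k(Xi, X) @ alpha                 # kernel interpolant at interior points

print(np.max(np.abs(u_i - u_star(Xi))))   # small error vs the exact solution
```

Here the unknown is represented as a kernel expansion u(x) = sum_j alpha_j k(x, x_j) and the PDE plus boundary conditions are enforced at the collocation points; the paper develops a rigorous probabilistic formulation and solves the resulting nonlinear system with a Gauss-Newton-type iteration rather than this naive Picard loop.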
arXiv Detail & Related papers (2021-03-24T03:16:08Z)
- Learning Fast Approximations of Sparse Nonlinear Regression [50.00693981886832]
In this work, we bridge the gap by introducing the Nonlinear Learned Iterative Shrinkage-Thresholding Algorithm (NLISTA).
Experiments on synthetic data corroborate our theoretical results and show our method outperforms state-of-the-art methods.
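For context, the classical (linear-measurement) ISTA baseline that learned variants build on can be written in a few lines (a generic sketch; NLISTA itself targets nonlinear measurements and learns the iteration parameters):

```python
import numpy as np

def soft_threshold(x, theta):
    """Proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - theta, 0.0)

def ista(A, y, thetas, step):
    """Iterative shrinkage-thresholding for y ~ A x with sparse x.
    Learned variants train the thresholds (and step sizes/matrices)
    per iteration instead of fixing them by hand."""
    x = np.zeros(A.shape[1])
    for theta in thetas:                   # one entry per unrolled iteration
        x = soft_threshold(x - step * A.T @ (A @ x - y), theta)
    return x

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 100)) / np.sqrt(50)
x_true = np.zeros(100)
x_true[rng.choice(100, 5, replace=False)] = 1.0
y = A @ x_true
x_hat = ista(A, y, thetas=[0.02] * 300, step=1.0 / np.linalg.norm(A, 2) ** 2)
print(np.linalg.norm(x_hat - x_true))      # small recovery error (l1 bias remains)
```

Unrolling a fixed number of such iterations and training the per-iteration parameters end-to-end is the "learned ISTA" recipe that NLISTA adapts to the nonlinear setting.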
arXiv Detail & Related papers (2020-10-26T11:31:08Z)