Deep Equilibrium Based Neural Operators for Steady-State PDEs
- URL: http://arxiv.org/abs/2312.00234v1
- Date: Thu, 30 Nov 2023 22:34:57 GMT
- Title: Deep Equilibrium Based Neural Operators for Steady-State PDEs
- Authors: Tanya Marwah, Ashwini Pokle, J. Zico Kolter, Zachary C. Lipton,
Jianfeng Lu, Andrej Risteski
- Abstract summary: We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
- Score: 100.88355782126098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data-driven machine learning approaches are being increasingly used to solve
partial differential equations (PDEs). They have shown particularly striking
successes when training an operator, which takes as input a PDE in some family,
and outputs its solution. However, the architectural design space, especially
given structural knowledge of the PDE family of interest, is still poorly
understood. We seek to remedy this gap by studying the benefits of weight-tied
neural network architectures for steady-state PDEs. To achieve this, we first
demonstrate that the solution of most steady-state PDEs can be expressed as a
fixed point of a non-linear operator. Motivated by this observation, we propose
FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly
solves for the solution of a steady-state PDE as the infinite-depth fixed point
of an implicit operator layer using a black-box root solver, and differentiates
analytically through this fixed point, resulting in $\mathcal{O}(1)$ training
memory. Our experiments indicate that FNO-DEQ-based architectures outperform
FNO-based baselines with $4\times$ the number of parameters in predicting the
solution to steady-state PDEs such as Darcy Flow and steady-state
incompressible Navier-Stokes. Finally, we show that FNO-DEQ is more robust than
the FNO-based baselines when trained on datasets with noisier observations,
demonstrating the benefits of appropriate inductive biases in architectural
design for neural-network-based PDE solvers. Further, we prove a universal
approximation result showing that FNO-DEQ can approximate the solution to any
steady-state PDE that can be written as a fixed-point equation.
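The core mechanism in the abstract can be illustrated on a toy problem: solve for an equilibrium with a black-box fixed-point solver, then differentiate through it analytically via the implicit function theorem instead of backpropagating through unrolled iterations. The linear contraction map below is purely illustrative, not the paper's FNO layer.

```python
import numpy as np

def solve_fixed_point(G, u0, tol=1e-12, max_iter=1000):
    """Black-box fixed-point solver: iterate u <- G(u) until convergence."""
    u = u0
    for _ in range(max_iter):
        u_next = G(u)
        if np.linalg.norm(u_next - u) < tol:
            return u_next
        u = u_next
    return u

# Toy "operator layer": G(u) = A u + b with small spectral norm, so the map
# is a contraction and the equilibrium u* satisfies (I - A) u* = b.
rng = np.random.default_rng(0)
A = 0.3 * rng.standard_normal((4, 4)) / 4  # contraction: ||A|| << 1
b = rng.standard_normal(4)

u_star = solve_fixed_point(lambda u: A @ u + b, np.zeros(4))

# Implicit differentiation: at the equilibrium u* = G(u*), the implicit
# function theorem gives du*/db = (I - dG/du)^{-1} dG/db = (I - A)^{-1}.
# No intermediate iterates are stored, which is the source of the O(1)
# training-memory cost.
J = np.linalg.inv(np.eye(4) - A)
```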
Related papers
- Physics-informed Neural Networks for Functional Differential Equations: Cylindrical Approximation and Its Convergence Guarantees [7.366405857677226]
We propose the first learning scheme for functional differential equations (FDEs).
FDEs play a fundamental role in physics, mathematics, and optimal control.
Numerical approximations of FDEs have been developed, but they often oversimplify the solutions.
arXiv Detail & Related papers (2024-10-23T06:16:35Z)
- Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver) capable of solving a wide scope of PDEs.
Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z)
- Koopman neural operator as a mesh-free solver of non-linear partial differential equations [15.410070455154138]
We propose the Koopman neural operator (KNO), a new neural operator, to overcome these challenges.
By approximating the Koopman operator, an infinite-dimensional operator governing all possible observations of the dynamic system, we can equivalently learn the solution of a non-linear PDE family.
The KNO exhibits notable advantages compared with previous state-of-the-art models.
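As a hedged illustration of the idea behind Koopman-operator methods (not the KNO architecture itself), one can fit a finite-dimensional linear surrogate of the Koopman operator from trajectory snapshots via least squares, in the spirit of dynamic mode decomposition:

```python
import numpy as np

# Snapshots x_k of a (here, toy linear) dynamical system; in the Koopman
# view one fits a linear operator K with x_{k+1} ~ K x_k acting on
# observables, a finite-dimensional surrogate of the Koopman operator.
rng = np.random.default_rng(1)
true_K = np.array([[0.9, 0.1],
                   [0.0, 0.8]])  # ground-truth dynamics (illustrative)
snapshots = [rng.standard_normal(2)]
for _ in range(50):
    snapshots.append(true_K @ snapshots[-1])
X = np.stack(snapshots)

# Least-squares fit of x_{k+1} = K x_k across all snapshot pairs
# (the core regression step of dynamic mode decomposition).
K_hat = np.linalg.lstsq(X[:-1], X[1:], rcond=None)[0].T
```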
arXiv Detail & Related papers (2023-01-24T14:10:15Z)
- Learning differentiable solvers for systems with hard constraints [48.54197776363251]
We introduce a practical method to enforce partial differential equation (PDE) constraints for functions defined by neural networks (NNs).
We develop a differentiable PDE-constrained layer that can be incorporated into any NN architecture.
Our results show that incorporating hard constraints directly into the NN architecture achieves much lower test error when compared to training on an unconstrained objective.
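One simple instance of a differentiable hard-constraint layer (illustrative only; the paper's construction may differ) is an orthogonal projection of the network output onto an affine constraint set, which guarantees exact constraint satisfaction while remaining differentiable:

```python
import numpy as np

def project_onto_constraint(u, A, b):
    """Orthogonal projection of u onto the affine set {v : A v = b}."""
    correction = A.T @ np.linalg.solve(A @ A.T, A @ u - b)
    return u - correction

A = np.array([[1.0, 1.0, 1.0]])  # constraint: components must sum to b
b = np.array([1.0])
u_raw = np.array([0.2, 0.5, 0.9])  # e.g., an unconstrained network output
u_proj = project_onto_constraint(u_raw, A, b)  # satisfies A @ u_proj = b
```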
arXiv Detail & Related papers (2022-07-18T15:11:43Z)
- Noise-aware Physics-informed Machine Learning for Robust PDE Discovery [5.746505534720594]
This work is concerned with discovering the governing partial differential equation (PDE) of a physical system.
Existing methods have demonstrated PDE identification from finite observations but fail to maintain satisfactory performance on noisy data.
We introduce a noise-aware physics-informed machine learning framework to discover the governing PDE from data following arbitrary distributions.
arXiv Detail & Related papers (2022-06-26T15:29:07Z)
- Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method which can partially alleviate this problem by improving neural PDE solver sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
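A minimal sketch of symmetry-based augmentation, assuming a periodic heat-equation setting (the grid and solution below are illustrative): spatial translation is a Lie point symmetry that maps solutions to solutions, so shifted copies of a sample are valid new training data.

```python
import numpy as np

def translate_sample(u_xt, shift):
    """Shift a (space, time) solution array periodically in space."""
    return np.roll(u_xt, shift, axis=0)

# Exact periodic heat-equation solution u(x, t) = exp(-t) * sin(x).
x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
t = np.linspace(0, 1, 10)
u = np.exp(-t)[None, :] * np.sin(x)[:, None]

# Because spatial translation is a symmetry of the equation, the shifted
# array is another exact solution, i.e. a free extra training sample.
u_aug = translate_sample(u, shift=8)
```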
arXiv Detail & Related papers (2022-02-15T18:43:17Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual Neural Networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
- Bayesian neural networks for weak solution of PDEs with uncertainty quantification [3.4773470589069473]
A new physics-constrained neural network (NN) approach is proposed to solve PDEs without labels.
We write the loss function of NNs based on the discretized residual of PDEs through an efficient, convolutional operator-based, and vectorized implementation.
We demonstrate the capability and performance of the proposed framework by applying it to steady-state diffusion, linear elasticity, and nonlinear elasticity.
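A minimal sketch of such a residual-based, label-free loss, assuming a steady-state diffusion (Laplace) problem on a uniform grid; the stencil and test function below are illustrative, not the paper's implementation:

```python
import numpy as np

def laplace_residual(u, h):
    """Interior 5-point-stencil residual of u_xx + u_yy = 0 on a uniform grid."""
    return (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:]
            - 4.0 * u[1:-1, 1:-1]) / h**2

# The harmonic function u(x, y) = x^2 - y^2 has zero Laplacian, so its
# discrete residual (and hence the unsupervised loss) vanishes up to
# round-off; a network output would be penalized by this same quantity.
n = 16
x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n), indexing="ij")
res = laplace_residual(x**2 - y**2, h=1.0 / (n - 1))
loss = float(np.mean(res**2))
```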
arXiv Detail & Related papers (2021-01-13T04:57:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.