Elliptic PDE learning is provably data-efficient
- URL: http://arxiv.org/abs/2302.12888v2
- Date: Tue, 19 Sep 2023 09:35:41 GMT
- Title: Elliptic PDE learning is provably data-efficient
- Authors: Nicolas Boullé, Diana Halikias, Alex Townsend
- Abstract summary: PDE learning combines physics and machine learning to recover unknown physical systems from experimental data.
Our work provides theoretical guarantees on the number of input-output training pairs required in PDE learning.
Specifically, we exploit randomized numerical linear algebra and PDE theory to derive a provably data-efficient algorithm that recovers solution operators of 3D uniformly elliptic PDEs from input-output data.
- Score: 7.097838977449412
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: PDE learning is an emerging field that combines physics and machine learning
to recover unknown physical systems from experimental data. While deep learning
models traditionally require copious amounts of training data, recent PDE
learning techniques achieve spectacular results with limited data availability.
Still, these results are empirical. Our work provides theoretical guarantees on
the number of input-output training pairs required in PDE learning.
Specifically, we exploit randomized numerical linear algebra and PDE theory to
derive a provably data-efficient algorithm that recovers solution operators of
3D uniformly elliptic PDEs from input-output data and achieves an exponential
convergence rate of the error with respect to the size of the training dataset
with an exceptionally high probability of success.
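The flavor of the randomized-linear-algebra argument can be illustrated with a toy example. The sketch below is not the authors' algorithm (which exploits the hierarchical low-rank structure of elliptic solution operators and carefully chosen random input functions); it is a minimal NumPy illustration of the underlying idea: query a black-box solution operator with random forcing terms and recover it with a randomized range finder. All names and dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the unknown solution operator of an elliptic PDE:
# a rank-15 matrix A, so that u = A f maps forcing terms to solutions.
n, true_rank = 500, 15
A = rng.standard_normal((n, true_rank)) @ rng.standard_normal((true_rank, n))

def apply_operator(f):
    """One 'experiment': forcing term f in, solution u out."""
    return A @ f

# Randomized range finder (Halko-Martinsson-Tropp): k + p random inputs
# capture a (numerically) rank-k operator with high probability.
k, p = 15, 5
F = rng.standard_normal((n, k + p))                    # random forcing terms
U = np.column_stack([apply_operator(f) for f in F.T])  # outputs: the training pairs
Q, _ = np.linalg.qr(U)                                 # orthonormal basis for the range

# Reconstruction A ~= Q (Q^T A). Forming Q^T A needs extra queries in a
# true black-box setting (the paper handles this via PDE structure);
# this toy holds A explicitly, so we use it directly.
A_approx = Q @ (Q.T @ A)
print("relative error:", np.linalg.norm(A - A_approx) / np.linalg.norm(A))
```

In this regime the approximation error decays rapidly as more random input-output pairs are added, which is the toy analogue of the exponential convergence rate the paper proves.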
Related papers
- DeltaPhi: Learning Physical Trajectory Residual for PDE Solving [54.13671100638092]
We propose and formulate Physical Trajectory Residual Learning (DeltaPhi) for PDE solving.
We learn the surrogate model for the residual operator mapping based on existing neural operator networks.
We conclude that, compared to direct learning, physical residual learning is preferred for PDE solving.
arXiv Detail & Related papers (2024-06-14T07:45:07Z)
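Reading from the DeltaPhi summary above, the core pattern appears to be learning a residual on top of a reference trajectory rather than the full solution map. A hedged toy sketch of that pattern follows; the backbone, shapes, and the retrieval of the reference solution are all invented for illustration, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a neural operator backbone; any existing
# neural operator network could take this role.
class ResidualOperator(nn.Module):
    def __init__(self, n=64):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(2 * n, 128), nn.GELU(), nn.Linear(128, n))

    def forward(self, a, u_ref):
        # Condition on both the input field a and a reference solution.
        return self.net(torch.cat([a, u_ref], dim=-1))

model = ResidualOperator()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Fake data: (a_i, u_i) pairs plus a retrieved 'similar' solution u_ref_i.
a, u, u_ref = torch.randn(32, 64), torch.randn(32, 64), torch.randn(32, 64)

# Residual learning: the network predicts u - u_ref instead of u itself,
# so the learning target is (hopefully) simpler than the full map.
residual_pred = model(a, u_ref)
loss = nn.functional.mse_loss(u_ref + residual_pred, u)
loss.backward(); opt.step()
```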
- DPOT: Auto-Regressive Denoising Operator Transformer for Large-Scale PDE Pre-Training [87.90342423839876]
We present a new auto-regressive denoising pre-training strategy, which allows for more stable and efficient pre-training on PDE data.
We train our PDE foundation model with up to 0.5B parameters on 10+ PDE datasets with more than 100k trajectories.
arXiv Detail & Related papers (2024-03-06T08:38:34Z)
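One plausible minimal form of the auto-regressive denoising pre-training described in the DPOT entry above (my reading, not the paper's exact recipe): corrupt the current state with noise before predicting the next state, so that rollout errors at test time resemble the inputs seen during training. All architecture details below are assumptions.

```python
import torch
import torch.nn as nn

# Hypothetical single-step surrogate: u_{t+1} ~ step(u_t).
step = nn.Sequential(nn.Linear(64, 128), nn.GELU(), nn.Linear(128, 64))
opt = torch.optim.Adam(step.parameters(), lr=1e-3)

# Fake trajectory batch: (batch, time, grid).
traj = torch.randn(16, 10, 64)
sigma = 0.05  # noise scale; a tunable assumption

for t in range(traj.shape[1] - 1):
    u_t, u_next = traj[:, t], traj[:, t + 1]
    # Denoising trick: train on noise-corrupted inputs so the model is
    # robust to its own accumulated errors during autoregressive rollout.
    u_noisy = u_t + sigma * torch.randn_like(u_t)
    loss = nn.functional.mse_loss(step(u_noisy), u_next)
    opt.zero_grad(); loss.backward(); opt.step()
```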
- Data-Efficient Operator Learning via Unsupervised Pretraining and In-Context Learning [45.78096783448304]
In this work, seeking data efficiency, we design unsupervised pretraining for PDE operator learning.
We mine unlabeled PDE data without simulated solutions, and we pretrain neural operators with physics-inspired reconstruction-based proxy tasks.
Our method is highly data-efficient, more generalizable, and even outperforms conventional vision-pretrained models.
arXiv Detail & Related papers (2024-02-24T06:27:33Z)
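The entry above mentions reconstruction-based proxy tasks on unlabeled PDE data (inputs without simulated solutions). A hedged sketch of one such proxy task, masked reconstruction, with the encoder/decoder and masking ratio assumed rather than taken from the paper:

```python
import torch
import torch.nn as nn

# Hypothetical encoder/decoder; in practice these would belong to the
# neural operator being pretrained.
encoder = nn.Sequential(nn.Linear(64, 128), nn.GELU())
decoder = nn.Linear(128, 64)
opt = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)

# Unlabeled PDE inputs (e.g. coefficient fields) -- no solutions needed.
fields = torch.randn(32, 64)

# Proxy task: mask part of each field and reconstruct it, so the model
# learns the structure of the input distribution before seeing labels.
mask = (torch.rand_like(fields) > 0.3).float()
recon = decoder(encoder(fields * mask))
loss = nn.functional.mse_loss(recon, fields)
loss.backward(); opt.step()
```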
- Physics-constrained robust learning of open-form partial differential equations from limited and noisy data [1.50528618730365]
This study proposes a framework to robustly uncover open-form partial differential equations (PDEs) from limited and noisy data.
A neural network-based predictive model fits the system response and serves as the reward evaluator for the generated PDEs.
Numerical experiments demonstrate our framework's capability to uncover governing equations from nonlinear dynamic systems with limited and highly noisy data.
arXiv Detail & Related papers (2023-09-14T12:34:42Z)
- Self-Supervised Learning with Lie Symmetries for Partial Differential Equations [25.584036829191902]
We learn general-purpose representations of PDEs by implementing joint embedding methods for self-supervised learning (SSL).
Our representation outperforms baseline approaches to invariant tasks, such as regressing the coefficients of a PDE, while also improving the time-stepping performance of neural solvers.
We hope that our proposed methodology will prove useful in the eventual development of general-purpose foundation models for PDEs.
arXiv Detail & Related papers (2023-07-11T16:52:22Z)
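For the self-supervised entry above: joint embedding SSL learns representations by making the embeddings of two symmetry-transformed views of the same PDE solution agree. A hedged toy sketch follows; the augmentation here is a periodic spatial shift (one simple symmetry), and the VICReg-style loss and architecture are assumptions, not the paper's exact choices.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(64, 128), nn.GELU(), nn.Linear(128, 32))
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

u = torch.randn(32, 64)  # fake batch of PDE solution snapshots

# Two views related by a symmetry of the PDE (here: periodic shifts).
v1 = torch.roll(u, shifts=3, dims=-1)
v2 = torch.roll(u, shifts=-5, dims=-1)
z1, z2 = encoder(v1), encoder(v2)

# Invariance term pulls the two views together; a variance term keeps
# the embeddings from collapsing to a constant (VICReg-style).
invariance = nn.functional.mse_loss(z1, z2)
variance = torch.relu(1.0 - z1.std(dim=0)).mean() + torch.relu(1.0 - z2.std(dim=0)).mean()
loss = invariance + variance
loss.backward(); opt.step()
```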
- Training Deep Surrogate Models with Large Scale Online Learning [48.7576911714538]
Deep learning algorithms have emerged as a viable alternative for obtaining fast solutions for PDEs.
Models are usually trained on synthetic data generated by solvers, stored on disk and read back for training.
This work proposes an open-source online training framework for deep surrogate models.
arXiv Detail & Related papers (2023-06-28T12:02:27Z)
- Mixed Effects Neural ODE: A Variational Approximation for Analyzing the Dynamics of Panel Data [50.23363975709122]
We propose a probabilistic model called ME-NODE that incorporates fixed and random (i.e., mixed) effects for analyzing panel data.
We show that our model can be derived using smooth approximations of SDEs provided by the Wong-Zakai theorem.
We then derive evidence lower bounds for ME-NODE and develop efficient training algorithms.
arXiv Detail & Related papers (2022-02-18T22:41:51Z)
- Discovering Nonlinear PDEs from Scarce Data with Physics-encoded Learning [11.641708412097659]
We propose a physics-encoded discrete learning framework for discovering PDEs from noisy and scarce data.
We validate our method on three nonlinear PDE systems.
arXiv Detail & Related papers (2022-01-28T07:49:48Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
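The PINO entry above hinges on combining a data loss with a PDE-constraint loss. Below is a minimal hedged sketch of that hybrid objective for a toy 1D Poisson problem, -u'' = f; the operator network, discretization, and loss weighting are invented for illustration and are not PINO's actual architecture.

```python
import torch
import torch.nn as nn

n, h = 64, 1.0 / 63  # grid size and spacing (toy 1D setup)
op = nn.Sequential(nn.Linear(n, 128), nn.GELU(), nn.Linear(128, n))
opt = torch.optim.Adam(op.parameters(), lr=1e-3)

f = torch.randn(16, n)      # forcing terms (fake data)
u_data = torch.randn(4, n)  # a few 'labeled' solutions for the first 4 inputs

u_pred = op(f)

# Data loss on the small labeled subset.
data_loss = nn.functional.mse_loss(u_pred[:4], u_data)

# Physics loss on everything: finite-difference residual of -u'' = f,
# evaluated at interior grid points (no labels required).
u_xx = (u_pred[:, 2:] - 2 * u_pred[:, 1:-1] + u_pred[:, :-2]) / h**2
pde_loss = nn.functional.mse_loss(-u_xx, f[:, 1:-1])

loss = data_loss + pde_loss  # relative weighting is a tunable choice
loss.backward(); opt.step()
```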
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.