Error Analysis of Kernel/GP Methods for Nonlinear and Parametric PDEs
- URL: http://arxiv.org/abs/2305.04962v1
- Date: Mon, 8 May 2023 18:00:33 GMT
- Title: Error Analysis of Kernel/GP Methods for Nonlinear and Parametric PDEs
- Authors: Pau Batlle, Yifan Chen, Bamdad Hosseini, Houman Owhadi, Andrew M. Stuart
- Abstract summary: We introduce a priori Sobolev-space error estimates for the solution of nonlinear, and possibly parametric, PDEs.
The proof is articulated around Sobolev norm error estimates for kernel interpolants.
The error estimates demonstrate dimension-benign convergence rates if the solution space of the PDE is smooth enough.
- Score: 16.089904161628258
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a priori Sobolev-space error estimates for the solution of
nonlinear, and possibly parametric, PDEs using Gaussian process and kernel
based methods. The primary assumptions are: (1) a continuous embedding of the
reproducing kernel Hilbert space of the kernel into a Sobolev space of
sufficient regularity; and (2) the stability of the differential operator and
the solution map of the PDE between corresponding Sobolev spaces. The proof is
articulated around Sobolev norm error estimates for kernel interpolants and
relies on the minimizing norm property of the solution. The error estimates
demonstrate dimension-benign convergence rates if the solution space of the PDE
is smooth enough. We illustrate these points with applications to
high-dimensional nonlinear elliptic PDEs and parametric PDEs. Although some
recent machine learning methods have been presented as breaking the curse of
dimensionality in solving high-dimensional PDEs, our analysis suggests a more
nuanced picture: there is a trade-off between the regularity of the solution
and the presence of the curse of dimensionality. Therefore, our results are in
line with the understanding that the curse is absent when the solution is
regular enough.
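The central proof ingredient, Sobolev-norm error estimates for kernel interpolants, can be illustrated numerically. The sketch below is an illustration, not the authors' code: the Matérn-3/2 kernel (whose RKHS embeds into a Sobolev space of finite order, matching assumption (1)), the length-scale, and the test function are all assumptions of this example. It interpolates a smooth function and records how the sup-norm error decays as collocation points are added:

```python
import numpy as np

def matern32(x, y, s=0.2):
    # Matern-3/2 kernel; its RKHS embeds into a Sobolev space of finite
    # order, in the spirit of the paper's regularity assumption
    r = np.abs(x[:, None] - y[None, :]) / s
    return (1 + np.sqrt(3) * r) * np.exp(-np.sqrt(3) * r)

def interpolant(x_tr, y_tr, nugget=1e-12):
    # solve K c = y for the interpolation coefficients (small nugget
    # added for numerical stability)
    K = matern32(x_tr, x_tr) + nugget * np.eye(len(x_tr))
    coef = np.linalg.solve(K, y_tr)
    return lambda x: matern32(x, x_tr) @ coef

f = lambda x: np.sin(2 * np.pi * x)
x_test = np.linspace(0, 1, 500)

errors = []
for n in [5, 10, 20, 40, 80]:
    x_tr = np.linspace(0, 1, n)
    u = interpolant(x_tr, f(x_tr))
    errors.append(np.max(np.abs(u(x_test) - f(x_test))))
```

Refining the node set should shrink the error at an algebraic rate tied to the kernel's smoothness; a smoother kernel (e.g. Gaussian) would give faster decay for this analytic test function, which is the regularity/rate trade-off the abstract describes.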
Related papers
- Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver) capable of solving a wide scope of PDEs.
Our key finding is that a PDE solution is fundamentally under the control of a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z)
- Approximation of Solution Operators for High-dimensional PDEs [2.3076986663832044]
We propose a finite-dimensional control-based method to approximate solution operators for evolutional partial differential equations.
Results are presented for several high-dimensional PDEs, including real-world applications to solving Hamilton-Jacobi-Bellman equations.
arXiv Detail & Related papers (2024-01-18T21:45:09Z)
- Randomized Physics-Informed Machine Learning for Uncertainty Quantification in High-Dimensional Inverse Problems [49.1574468325115]
We propose a physics-informed machine learning method for uncertainty quantification in high-dimensional inverse problems.
We show analytically and through comparison with Hamiltonian Monte Carlo that the rPICKLE posterior converges to the true posterior given by the Bayes rule.
arXiv Detail & Related papers (2023-12-11T07:33:16Z)
- Deep Equilibrium Based Neural Operators for Steady-State PDEs [100.88355782126098]
We study the benefits of weight-tied neural network architectures for steady-state PDEs.
We propose FNO-DEQ, a deep equilibrium variant of the FNO architecture that directly solves for the solution of a steady-state PDE.
arXiv Detail & Related papers (2023-11-30T22:34:57Z)
- Learning Partial Differential Equations by Spectral Approximates of General Sobolev Spaces [0.45880283710344055]
We introduce a novel spectral, finite-dimensional approximation of general Sobolev spaces in terms of Chebyshev polynomials.
We find a variational formulation, solving a vast class of linear and non-linear partial differential equations.
In contrast to PINNs, the PSMs result in a convex optimisation problem for a vast class of PDEs, including all linear ones.
arXiv Detail & Related papers (2023-01-12T09:04:32Z)
- Deep learning approximations for non-local nonlinear PDEs with Neumann boundary conditions [2.449909275410288]
We propose two numerical methods based on machine learning and on Picard iterations, respectively, to approximately solve non-local nonlinear PDEs.
We evaluate the performance of the two methods on five different PDEs arising in physics and biology.
arXiv Detail & Related papers (2022-05-07T15:47:17Z)
- Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method that can partially alleviate this problem by improving neural PDE solver sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
arXiv Detail & Related papers (2022-02-15T18:43:17Z)
- Solving and Learning Nonlinear PDEs with Gaussian Processes [11.09729362243947]
We introduce a simple, rigorous, and unified framework for solving nonlinear partial differential equations.
The proposed approach provides a natural generalization of collocation kernel methods to nonlinear PDEs and IPs.
For IPs, while the traditional approach has been to iterate between the identifications of parameters in the PDE and the numerical approximation of its solution, our algorithm tackles both simultaneously.
arXiv Detail & Related papers (2021-03-24T03:16:08Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
- Error bounds for PDE-regularized learning [0.6445605125467573]
We consider the regularization of a supervised learning problem by partial differential equations (PDEs).
We derive error bounds for the obtained approximation in terms of a PDE error term and a data error term.
arXiv Detail & Related papers (2020-01-29T22:10:11Z)
- GradientDICE: Rethinking Generalized Offline Estimation of Stationary Values [75.17074235764757]
We present GradientDICE for estimating the density ratio between the state distribution of the target policy and the sampling distribution.
GenDICE is the state-of-the-art for estimating such density ratios.
arXiv Detail & Related papers (2020-01-29T22:10:11Z)
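The kernel collocation approach analyzed in the main paper, and introduced in "Solving and Learning Nonlinear PDEs with Gaussian Processes" above, can be sketched on a toy problem. The example below is a hypothetical illustration, not either paper's implementation: it solves -u'' + u^3 = f on (0, 1) with homogeneous Dirichlet conditions using a Gaussian kernel, non-symmetric (Kansa-style) collocation, and a simple Picard iteration for the nonlinearity; the kernel, length-scale, and manufactured solution are assumptions of this sketch.

```python
import numpy as np

def k(x, y, s=0.1):
    # Gaussian kernel K(x, y) = exp(-(x - y)^2 / (2 s^2))
    r = x[:, None] - y[None, :]
    return np.exp(-r**2 / (2 * s**2))

def k_xx(x, y, s=0.1):
    # second derivative of the Gaussian kernel in its first argument
    r = x[:, None] - y[None, :]
    return (r**2 / s**4 - 1 / s**2) * np.exp(-r**2 / (2 * s**2))

# Manufactured solution u*(x) = sin(pi x), so that
# f = -u*'' + u*^3 = pi^2 sin(pi x) + sin(pi x)^3.
n = 30
x = np.linspace(0, 1, n)           # collocation nodes (endpoints = boundary)
boundary = x[[0, -1]]
u_exact = np.sin(np.pi * x)
f = np.pi**2 * np.sin(np.pi * x[1:-1]) + np.sin(np.pi * x[1:-1])**3

# Kansa collocation: PDE rows at interior nodes, identity rows at boundary
A = np.vstack([-k_xx(x[1:-1], x), k(boundary, x)])
K_eval = k(x, x)                   # evaluates u(x) = sum_j c_j K(x, x_j)

u = np.zeros(n)                    # Picard iteration on the u^3 term:
for _ in range(50):                # solve the linearized PDE -u'' = f - u_old^3
    rhs = np.concatenate([f - u[1:-1]**3, np.zeros(2)])
    c = np.linalg.solve(A, rhs)
    u = K_eval @ c

err = np.max(np.abs(u - u_exact))
```

The Picard map is a contraction here because the inverse of -d²/dx² with Dirichlet conditions on (0, 1) has small norm relative to the cubic perturbation; the papers above instead handle the nonlinearity via a unified optimization/Gauss-Newton formulation with the minimizing-norm property that drives the error analysis.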
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.