Solving and Learning Nonlinear PDEs with Gaussian Processes
- URL: http://arxiv.org/abs/2103.12959v1
- Date: Wed, 24 Mar 2021 03:16:08 GMT
- Title: Solving and Learning Nonlinear PDEs with Gaussian Processes
- Authors: Yifan Chen and Bamdad Hosseini and Houman Owhadi and Andrew M Stuart
- Abstract summary: We introduce a simple, rigorous, and unified framework for solving nonlinear partial differential equations.
The proposed approach provides a natural generalization of collocation kernel methods to nonlinear PDEs and IPs.
For IPs, while the traditional approach has been to iterate between the identifications of parameters in the PDE and the numerical approximation of its solution, our algorithm tackles both simultaneously.
- Score: 11.09729362243947
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a simple, rigorous, and unified framework for solving nonlinear
partial differential equations (PDEs), and for solving inverse problems (IPs)
involving the identification of parameters in PDEs, using the framework of
Gaussian processes. The proposed approach (1) provides a natural generalization
of collocation kernel methods to nonlinear PDEs and IPs, (2) has guaranteed
convergence with a path to compute error bounds in the PDE setting, and (3)
inherits the state-of-the-art computational complexity of linear solvers for
dense kernel matrices. The main idea of our method is to approximate the
solution of a given PDE with a maximum a posteriori (MAP) estimator of a Gaussian process given the
observation of the PDE at a finite number of collocation points. Although this
optimization problem is infinite-dimensional, it can be reduced to a
finite-dimensional one by introducing additional variables corresponding to the
values of the derivatives of the solution at collocation points; this
generalizes the representer theorem arising in Gaussian process regression. The
reduced optimization problem has a quadratic loss and nonlinear constraints,
and it is in turn solved with a variant of the Gauss-Newton method. The
resulting algorithm (a) can be interpreted as solving successive linearizations
of the nonlinear PDE, and (b) is found in practice to converge in a small
number (two to ten) of iterations in experiments conducted on a range of PDEs.
For IPs, while the traditional approach has been to iterate between the
identifications of parameters in the PDE and the numerical approximation of its
solution, our algorithm tackles both simultaneously. Experiments on nonlinear
elliptic PDEs, Burgers' equation, a regularized Eikonal equation, and an IP for
permeability identification in Darcy flow illustrate the efficacy and scope of
our framework.
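To make the reduction and the Gauss-Newton step concrete, the following NumPy/SciPy sketch applies the idea to a 1D analogue of the paper's nonlinear elliptic experiments. It is an illustration, not the authors' implementation: the Gaussian kernel, its width sigma, the nugget, the point count, and the manufactured solution sin(pi x) are all assumptions made for this example, and the variant shown eliminates the PDE constraint so that each Gauss-Newton step is a plain linear least-squares solve.
```python
# A minimal sketch (not the authors' code) of the GP collocation idea on a
# 1D analogue of the paper's nonlinear elliptic experiments:
#     -u''(x) + u(x)^3 = f(x) on (0, 1),  u(0) = u(1) = 0.
# Kernel width, nugget, grid size, and the manufactured solution sin(pi x)
# are illustrative assumptions, not values taken from the paper.
import numpy as np
from scipy.linalg import cholesky, solve_triangular

sigma, nugget, M = 0.2, 1e-8, 40          # kernel width, jitter, total points
a = 1.0 / sigma**2
X = np.linspace(0.0, 1.0, M)              # collocation points (incl. boundary)
Xi = X[1:-1]                              # interior points
u_true = lambda x: np.sin(np.pi * x)      # manufactured solution
f = np.pi**2 * np.sin(np.pi * Xi) + np.sin(np.pi * Xi) ** 3  # matching source

def gram(x, y):
    """Kernel blocks for the functionals {delta_x} and {delta_x o d^2/dx^2}."""
    r = x[:, None] - y[None, :]
    g = np.exp(-0.5 * a * r**2)                              # k(x, y)
    kyy = g * (a**2 * r**2 - a)                              # d^2/dy^2 k
    kxxyy = g * (a**4 * r**4 - 6 * a**3 * r**2 + 3 * a**2)   # d^2/dx^2 d^2/dy^2 k
    return g, kyy, kxxyy

# Gram matrix Theta of all measurement functionals: M point evaluations plus
# M-2 interior second-derivative evaluations, with a diagonal nugget.
K11 = gram(X, X)[0]
K12 = gram(X, Xi)[1]
K22 = gram(Xi, Xi)[2]
Theta = np.block([[K11, K12], [K12.T, K22]])
Theta += nugget * np.diag(np.diag(Theta))
L = cholesky(Theta, lower=True)

# Unknowns v = u at the interior points. The PDE pins the second-derivative
# values to v^3 - f, so the reduced MAP problem is
#     min_v  Z(v)^T Theta^{-1} Z(v),   Z(v) = [0, v, 0, v^3 - f],
# which Gauss-Newton attacks by linearizing v^3 at each iterate -- i.e. by
# solving successive linearizations of the PDE.
S = np.zeros((M, M - 2))
S[1:-1] = np.eye(M - 2)                    # embeds v, enforcing u(0)=u(1)=0
v = np.zeros(M - 2)                        # initial guess u = 0
for it in range(10):
    Z = np.concatenate([S @ v, v**3 - f])
    J = np.vstack([S, np.diag(3 * v**2)])  # Jacobian dZ/dv
    A = solve_triangular(L, J, lower=True) # whiten: ||L^{-1}(Z + J dv)||^2
    b = solve_triangular(L, Z, lower=True)
    dv = np.linalg.lstsq(A, -b, rcond=None)[0]
    v += dv
    if np.linalg.norm(dv) < 1e-10:
        break

print(f"{it + 1} Gauss-Newton steps, "
      f"max collocation error {np.abs(v - u_true(Xi)).max():.2e}")
```
Once the loop stops, values of u away from the collocation points follow from the representer formula (the vector of kernel functionals at x applied to Theta^{-1} Z); consistent with the abstract, a mild nonlinearity like this cubic term typically needs only a handful of iterations.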
Related papers
- Total Uncertainty Quantification in Inverse PDE Solutions Obtained with Reduced-Order Deep Learning Surrogate Models [50.90868087591973]
We propose an approximate Bayesian method for quantifying the total uncertainty in inverse PDE solutions obtained with machine learning surrogate models.
We test the proposed framework by comparing it with the iterative ensemble smoother and deep ensembling methods for a non-linear diffusion equation.
arXiv Detail & Related papers (2024-08-20T19:06:02Z)
- Unisolver: PDE-Conditional Transformers Are Universal PDE Solvers [55.0876373185983]
We present the Universal PDE solver (Unisolver) capable of solving a wide scope of PDEs.
Our key finding is that a PDE solution is fundamentally governed by a series of PDE components.
Unisolver achieves consistent state-of-the-art results on three challenging large-scale benchmarks.
arXiv Detail & Related papers (2024-05-27T15:34:35Z)
- RoPINN: Region Optimized Physics-Informed Neural Networks [66.38369833561039]
Physics-informed neural networks (PINNs) have been widely applied to solve partial differential equations (PDEs).
This paper proposes and theoretically studies a new training paradigm, region optimization.
A practical training algorithm, Region Optimized PINN (RoPINN), is seamlessly derived from this new paradigm.
arXiv Detail & Related papers (2024-05-23T09:45:57Z)
- Approximation of Solution Operators for High-dimensional PDEs [2.3076986663832044]
We propose a finite-dimensional control-based method to approximate solution operators for evolutionary partial differential equations.
Results are presented for several high-dimensional PDEs, including real-world applications to solving Hamilton-Jacobi-Bellman equations.
arXiv Detail & Related papers (2024-01-18T21:45:09Z)
- Sparse Cholesky Factorization for Solving Nonlinear PDEs via Gaussian Processes [3.750429354590631]
We present a sparse Cholesky factorization algorithm for dense kernel matrices.
We numerically illustrate our algorithm's near-linear space/time complexity for a broad class of nonlinear PDEs.
arXiv Detail & Related papers (2023-04-03T18:35:28Z)
- Deep learning approximations for non-local nonlinear PDEs with Neumann boundary conditions [2.449909275410288]
We propose two numerical methods based on machine learning and on Picard iterations, respectively, to approximately solve non-local nonlinear PDEs.
We evaluate the performance of the two methods on five different PDEs arising in physics and biology.
arXiv Detail & Related papers (2022-05-07T15:47:17Z)
- Lie Point Symmetry Data Augmentation for Neural PDE Solvers [69.72427135610106]
We present a method that can partially alleviate the data demands of neural PDE solvers by improving their sample complexity.
In the context of PDEs, it turns out that we are able to quantitatively derive an exhaustive list of data transformations.
We show how it can easily be deployed to improve neural PDE solver sample complexity by an order of magnitude.
arXiv Detail & Related papers (2022-02-15T18:43:17Z)
- Semi-Implicit Neural Solver for Time-dependent Partial Differential Equations [4.246966726709308]
We propose a neural solver to learn an optimal iterative scheme in a data-driven fashion for any class of PDEs.
We provide theoretical guarantees for the correctness and convergence of neural solvers analogous to conventional iterative solvers.
arXiv Detail & Related papers (2021-09-03T12:03:10Z)
- Bayesian Numerical Methods for Nonlinear Partial Differential Equations [4.996064986640264]
Nonlinear partial differential equations (PDEs) pose substantial challenges from an inferential perspective.
This paper extends earlier work on linear PDEs to a general class of initial value problems specified by nonlinear PDEs.
A suitable prior model for the solution of the PDE is identified using novel theoretical analysis of the sample path properties of Matérn processes.
arXiv Detail & Related papers (2021-04-22T14:02:10Z)
- dNNsolve: an efficient NN-based PDE solver [62.997667081978825]
We introduce dNNsolve, which makes use of dual neural networks to solve ODEs/PDEs.
We show that dNNsolve is capable of solving a broad range of ODEs/PDEs in 1, 2 and 3 spacetime dimensions.
arXiv Detail & Related papers (2021-03-15T19:14:41Z)
- Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)