Approximate Bayesian Neural Operators: Uncertainty Quantification for
Parametric PDEs
- URL: http://arxiv.org/abs/2208.01565v1
- Date: Tue, 2 Aug 2022 16:10:27 GMT
- Title: Approximate Bayesian Neural Operators: Uncertainty Quantification for
Parametric PDEs
- Authors: Emilia Magnani, Nicholas Krämer, Runa Eschenhagen, Lorenzo Rosasco,
Philipp Hennig
- Abstract summary: We provide a mathematically detailed Bayesian formulation of the "shallow" (linear) version of neural operators.
We then extend this analytic treatment to general deep neural operators using approximate methods from Bayesian deep learning.
As a result, our approach is able to identify cases where the neural operator fails to predict well, and to provide structured uncertainty estimates for those cases.
- Score: 34.179984253109346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Neural operators are a type of deep architecture that learns to solve (i.e.
learns the nonlinear solution operator of) partial differential equations
(PDEs). The current state of the art for these models does not provide explicit
uncertainty quantification. This is arguably even more of a problem for this
kind of task than elsewhere in machine learning, because the dynamical systems
typically described by PDEs often exhibit subtle, multiscale structure that
makes errors hard to spot by humans. In this work, we first provide a
mathematically detailed Bayesian formulation of the "shallow" (linear)
version of neural operators in the formalism of Gaussian processes. We then
extend this analytic treatment to general deep neural operators using
approximate methods from Bayesian deep learning. We extend previous results on
neural operators by providing them with uncertainty quantification. As a
result, our approach is able to identify cases where the neural operator fails
to predict well, and to provide structured uncertainty estimates for those cases.
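The "shallow" case described in the abstract can be illustrated with a minimal sketch (hypothetical code, not the authors' implementation): a linear operator layer with a Gaussian prior over its weights is equivalent to a Gaussian process, so exact Bayesian linear regression on a feature map of the input function yields both a prediction and a predictive variance. The feature map, data, and hyperparameters below are all illustrative stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

def features(a, n_feat=16):
    """Hypothetical fixed feature map: low-order Fourier coefficients of a."""
    coeffs = np.fft.rfft(a)[:n_feat]
    return np.concatenate([coeffs.real, coeffs.imag])

# Toy training set: input functions a_i and a "solution" value u_i each.
n_grid, n_train = 64, 20
A = rng.standard_normal((n_train, n_grid))
u = np.array([np.sin(a).mean() for a in A])   # stand-in solution functional

Phi = np.stack([features(a) for a in A])      # (n_train, d) design matrix
sigma2, alpha2 = 1e-2, 1.0                    # noise / prior variances (assumed)

# Conjugate Bayesian linear regression: Gaussian posterior over the weights.
d = Phi.shape[1]
cov = np.linalg.inv(Phi.T @ Phi / sigma2 + np.eye(d) / alpha2)
mean = cov @ Phi.T @ u / sigma2

# Predictive mean and variance for a new input function -> error bars.
a_new = rng.standard_normal(n_grid)
phi_new = features(a_new)
pred_mean = phi_new @ mean
pred_var = sigma2 + phi_new @ cov @ phi_new
```

The deep, nonlinear case in the paper replaces this exact conjugate computation with approximate methods from Bayesian deep learning.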
Related papers
- DimOL: Dimensional Awareness as A New 'Dimension' in Operator Learning [63.5925701087252]
We introduce DimOL (Dimension-aware Operator Learning), drawing insights from dimensional analysis.
To implement DimOL, we propose the ProdLayer, which can be seamlessly integrated into FNO-based and Transformer-based PDE solvers.
Empirically, DimOL models achieve up to 48% performance gain within the PDE datasets.
arXiv Detail & Related papers (2024-10-08T10:48:50Z)
- Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations.
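The linearization idea above can be sketched as follows (a hypothetical toy, not the paper's code): linearize a trained network f(x; w) around its weights w*, so that a Gaussian posterior over w induces a Gaussian process over outputs with covariance J(x) Σ J(x')ᵀ. The tiny network, the finite-difference Jacobian, and the isotropic Σ are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def mlp(x, w):
    """Tiny one-hidden-layer network; w packs both weight matrices."""
    W1 = w[:8].reshape(4, 2)     # hidden weights (4 units, 2 inputs)
    W2 = w[8:12].reshape(1, 4)   # output weights
    return (W2 @ np.tanh(W1 @ x)).item()

def jacobian(x, w, eps=1e-6):
    """Finite-difference Jacobian of the scalar output w.r.t. the weights."""
    J = np.zeros_like(w)
    for i in range(w.size):
        wp, wm = w.copy(), w.copy()
        wp[i] += eps
        wm[i] -= eps
        J[i] = (mlp(x, wp) - mlp(x, wm)) / (2 * eps)
    return J

w_star = rng.standard_normal(12)   # stands in for trained weights
Sigma = 0.1 * np.eye(12)           # stands in for a Laplace posterior covariance

x1, x2 = np.array([0.3, -0.5]), np.array([1.0, 0.2])
J1, J2 = jacobian(x1, w_star), jacobian(x2, w_star)

var_x1 = J1 @ Sigma @ J1           # predictive variance at x1
cov_x1_x2 = J1 @ Sigma @ J2        # GP cross-covariance between two inputs
```

For neural operators, x1 and x2 would be (discretized) input functions, so the induced object is a function-valued Gaussian process.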
arXiv Detail & Related papers (2024-06-07T16:43:54Z)
- Diffusion models as probabilistic neural operators for recovering unobserved states of dynamical systems [49.2319247825857]
We show that diffusion-based generative models exhibit many properties favourable for neural operators.
We propose to train a single model adaptable to multiple tasks, by alternating between the tasks during training.
arXiv Detail & Related papers (2024-05-11T21:23:55Z)
- Neural Operator induced Gaussian Process framework for probabilistic solution of parametric partial differential equations [8.528817025440746]
We propose a novel Neural Operator-induced Gaussian Process (NOGaP) for partial differential equations.
The proposed framework leads to improved prediction accuracy and offers a quantifiable measure of uncertainty.
The results demonstrate superior accuracy and expected uncertainty characteristics, suggesting the promising potential of NOGaP.
arXiv Detail & Related papers (2024-04-24T03:16:48Z)
- Operator Learning: Algorithms and Analysis [8.305111048568737]
Operator learning refers to the application of ideas from machine learning to approximate operators mapping between Banach spaces of functions.
This review focuses on neural operators, built on the success of deep neural networks in the approximation of functions defined on finite dimensional Euclidean spaces.
arXiv Detail & Related papers (2024-02-24T04:40:27Z)
- Residual-based error correction for neural operator accelerated infinite-dimensional Bayesian inverse problems [3.2548794659022393]
We explore using neural operators, or neural network representations of nonlinear maps between function spaces, to accelerate infinite-dimensional Bayesian inverse problems.
We show that a trained neural operator with error correction can achieve a quadratic reduction of its approximation error.
We demonstrate that posterior representations of two BIPs produced using trained neural operators are greatly and consistently enhanced by error correction.
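The residual-correction idea can be sketched on a toy problem (hypothetical code, not the paper's): treat the neural-operator output u0 as an initial guess for the residual equation F(u) = 0 and apply one Newton step, solving F'(u0) δ = −F(u0). If the operator's error is O(ε), the corrected error is O(ε²), the quadratic reduction noted above. Here F is a scalar stand-in for a discretized PDE residual.

```python
import numpy as np

def F(u):
    """Toy nonlinear residual standing in for a discretized PDE."""
    return u**3 + u - 1.0

def F_prime(u):
    """Derivative of the residual (the 'linearized operator')."""
    return 3.0 * u**2 + 1.0

u_true = 0.6823278038280193        # the real root of u^3 + u - 1 = 0

u0 = u_true + 0.05                 # stands in for a neural-operator prediction
delta = -F(u0) / F_prime(u0)       # one Newton correction step
u1 = u0 + delta

err0, err1 = abs(u0 - u_true), abs(u1 - u_true)
```

In the infinite-dimensional setting, F' is a linearized PDE operator and the Newton step is a linear solve, which is typically far cheaper than solving the original nonlinear problem.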
arXiv Detail & Related papers (2022-10-06T15:57:22Z)
- Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
arXiv Detail & Related papers (2021-08-19T03:56:49Z)
- Fourier Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
We formulate a new neural operator by parameterizing the integral kernel directly in Fourier space.
We perform experiments on Burgers' equation, Darcy flow, and Navier-Stokes equation.
It is up to three orders of magnitude faster compared to traditional PDE solvers.
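The core building block described above, the integral kernel parameterized directly in Fourier space, can be sketched in a few lines (untrained, hypothetical weights and shapes, not the authors' implementation): transform the input function to Fourier space, multiply the lowest modes by learned complex weights, and transform back.

```python
import numpy as np

rng = np.random.default_rng(2)

def fourier_layer(u, R, n_modes):
    """u: function on a uniform 1-D grid; R: complex weights, one per mode."""
    u_hat = np.fft.rfft(u)                   # to Fourier space
    out_hat = np.zeros_like(u_hat)
    out_hat[:n_modes] = R * u_hat[:n_modes]  # weight low modes, truncate the rest
    return np.fft.irfft(out_hat, n=u.size)   # back to physical space

n_grid, n_modes = 128, 12
R = rng.standard_normal(n_modes) + 1j * rng.standard_normal(n_modes)

x = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)
u = np.sin(x) + 0.5 * np.cos(3 * x)
v = fourier_layer(u, R, n_modes)
```

Because R acts on Fourier modes rather than grid points, the same weights apply at any grid resolution, which is what makes such operators discretization-invariant.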
arXiv Detail & Related papers (2020-10-18T00:34:21Z)
- Provably Efficient Neural Estimation of Structural Equation Model: An Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z)
- Multipole Graph Neural Operator for Parametric Partial Differential Equations [57.90284928158383]
One of the main challenges in using deep learning-based methods for simulating physical systems is formulating physics-based data.
We propose a novel multi-level graph neural network framework that captures interaction at all ranges with only linear complexity.
Experiments confirm our multi-graph network learns discretization-invariant solution operators to PDEs and can be evaluated in linear time.
arXiv Detail & Related papers (2020-06-16T21:56:22Z)
- Bayesian Hidden Physics Models: Uncertainty Quantification for Discovery of Nonlinear Partial Differential Operators from Data [0.0]
There has been a surge of interest in using machine learning models to discover physical laws such as differential equations from data.
We introduce a novel model comprising "leaf modules" that learn to represent functional data as neural networks.
Our approach quantifies the reliability of the learned physics in terms of a posterior distribution over operators and propagates this uncertainty to solutions of novel initial-boundary value problem instances.
arXiv Detail & Related papers (2020-06-07T18:48:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented (including all content) and is not responsible for any consequences of its use.