Residual-based error correction for neural operator accelerated
infinite-dimensional Bayesian inverse problems
- URL: http://arxiv.org/abs/2210.03008v1
- Date: Thu, 6 Oct 2022 15:57:22 GMT
- Title: Residual-based error correction for neural operator accelerated
infinite-dimensional Bayesian inverse problems
- Authors: Lianghao Cao, Thomas O'Leary-Roseberry, Prashant K. Jha, J. Tinsley
Oden, Omar Ghattas
- Abstract summary: We explore using neural operators, or neural network representations of nonlinear maps between function spaces, to accelerate infinite-dimensional Bayesian inverse problems.
We show that a trained neural operator with error correction can achieve a quadratic reduction of its approximation error.
We demonstrate that posterior representations of two BIPs produced using trained neural operators are greatly and consistently enhanced by error correction.
- Score: 3.2548794659022393
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We explore using neural operators, or neural network representations of
nonlinear maps between function spaces, to accelerate infinite-dimensional
Bayesian inverse problems (BIPs) with models governed by nonlinear parametric
partial differential equations (PDEs). Neural operators have gained significant
attention in recent years for their ability to approximate the
parameter-to-solution maps defined by PDEs using as training data solutions of
PDEs at a limited number of parameter samples. The computational cost of BIPs
can be drastically reduced if the large number of PDE solves required for
posterior characterization are replaced with evaluations of trained neural
operators. However, reducing the error in the resulting BIP solutions by
driving down the approximation error of the neural operators in training can
be challenging and unreliable. We provide an a priori error bound result
implying that certain BIPs can be ill-conditioned with respect to the
approximation error of neural operators, leading to accuracy requirements
that may be unattainable in training. To reliably
deploy neural operators in BIPs, we consider a strategy for enhancing the
performance of neural operators, which is to correct the prediction of a
trained neural operator by solving a linear variational problem based on the
PDE residual. We show that a trained neural operator with error correction can
achieve a quadratic reduction of its approximation error, all while retaining
substantial computational speedups of posterior sampling when models are
governed by highly nonlinear PDEs. The strategy is applied to two numerical
examples of BIPs based on a nonlinear reaction–diffusion problem and the
deformation of hyperelastic materials. We demonstrate that posterior
representations of the two BIPs produced using trained neural operators are
greatly and consistently enhanced by error correction.
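The ill-conditioning claim above can be read through a generic surrogate-error bound of the Stuart well-posedness type. The inequality below is a hedged sketch of that style of result, not the paper's exact statement: mu^y and its tilde version denote the posteriors induced by the true parameter-to-observable map and the neural operator surrogate, and mu_0 is the prior.

```latex
% Hedged sketch of a Stuart-type surrogate-error bound (not the paper's
% exact statement). The Hellinger distance between the posteriors obtained
% with the true map \mathcal{G} and the surrogate \widetilde{\mathcal{G}}
% is controlled by the prior-weighted operator error, with a constant C
% that depends on the likelihood and can be large, which is the
% ill-conditioning referred to above.
d_{\mathrm{Hell}}\bigl(\mu^{y}, \widetilde{\mu}^{y}\bigr)
  \;\le\; C \,\Bigl( \mathbb{E}_{m \sim \mu_{0}}
  \bigl\| \mathcal{G}(m) - \widetilde{\mathcal{G}}(m) \bigr\|^{2} \Bigr)^{1/2}
```

When C is large, shrinking the training error alone may not deliver an accurate posterior; the error-correction strategy instead attacks the right-hand side. A minimal sketch of that strategy follows, assuming a finite-difference discretization of a toy nonlinear reaction–diffusion problem and a stand-in for the trained neural operator; none of this is the paper's actual setup. Given a surrogate prediction u0, the correction solves the linear variational problem R'(u0) du = -R(u0) and returns u0 + du, i.e., one Newton step initialized at the prediction, which squares the approximation error near the true solution.

```python
import numpy as np

# Minimal sketch of residual-based error correction, assuming a 1D toy
# problem -u'' + m*u^3 = f on (0, 1) with homogeneous Dirichlet boundary
# conditions, discretized by central finite differences. The "neural
# operator" below is a stand-in (truth plus synthetic error), not a
# trained network.

def assemble_laplacian(n, h):
    """Dense tridiagonal finite-difference matrix for -u'' on n interior nodes."""
    return (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2

def residual(A, u, m, f):
    """Discrete PDE residual R(u) = A u + m u^3 - f."""
    return A @ u + m * u**3 - f

def jacobian(A, u, m):
    """Linearized residual operator R'(u) = A + 3 m diag(u^2)."""
    return A + 3.0 * m * np.diag(u**2)

def error_correct(u0, A, m, f):
    """One residual-based correction: solve R'(u0) du = -R(u0), return u0 + du.
    This is a Newton step from u0, so an O(eps) prediction error becomes
    O(eps^2) after correction, at the cost of a single linear solve."""
    du = np.linalg.solve(jacobian(A, u0, m), -residual(A, u0, m))
    return u0 + du

n, m = 99, 5.0                       # interior nodes, reaction coefficient
h = 1.0 / (n + 1)
x = np.linspace(h, 1.0 - h, n)
f = np.ones(n)
A = assemble_laplacian(n, h)

# Reference solution by full Newton iteration (for comparison only).
u_true = np.zeros(n)
for _ in range(20):
    u_true = error_correct(u_true, A, m, f)

# Stand-in for a neural operator prediction: the truth plus ~5% error.
u0 = u_true + 0.05 * np.sin(np.pi * x)
u1 = error_correct(u0, A, m, f)

print("surrogate error :", np.linalg.norm(u0 - u_true))
print("corrected error :", np.linalg.norm(u1 - u_true))
```

On this toy problem the corrected error is roughly the square of the surrogate error, matching the quadratic reduction claimed in the abstract, and each correction costs one linear solve rather than a full nonlinear solve, which is where the retained speedup during posterior sampling would come from.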
Related papers
- DeltaPhi: Learning Physical Trajectory Residual for PDE Solving [54.13671100638092]
We propose and formulate Physical Trajectory Residual Learning (DeltaPhi).
We learn a surrogate model for the residual operator mapping, built on existing neural operator networks.
We conclude that, compared to direct learning, physical residual learning is preferred for PDE solving.
arXiv Detail & Related papers (2024-06-14T07:45:07Z)
- Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations.
arXiv Detail & Related papers (2024-06-07T16:43:54Z)
- Derivative-enhanced Deep Operator Network [3.169190797722534]
We propose the derivative-enhanced deep operator network (DE-DeepONet).
The system incorporates a linear dimension reduction of the high-dimensional parameter input into DeepONet to reduce training cost.
The derivative loss can be extended to enhance other neural operators, such as the Fourier neural operator (FNO).
arXiv Detail & Related papers (2024-02-29T15:18:37Z)
- Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
arXiv Detail & Related papers (2023-07-27T17:42:06Z)
- Residual-Based Error Corrector Operator to Enhance Accuracy and Reliability of Neural Operator Surrogates of Nonlinear Variational Boundary-Value Problems [0.0]
This work focuses on developing methods for approximating the solution operators of a class of parametric partial differential equations via neural operators.
The unpredictability of the accuracy of neural operators impacts their applications in downstream problems of inference, optimization, and control.
arXiv Detail & Related papers (2023-06-21T06:30:56Z)
- Variational operator learning: A unified paradigm marrying training neural operators and solving partial differential equations [9.148052787201797]
We propose a novel paradigm that provides a unified framework of training neural operators and solving PDEs with the variational form.
With a label-free training set and a 5-label-only shift set, VOL learns solution operators whose test errors decrease as a power law in the amount of unlabeled data.
arXiv Detail & Related papers (2023-04-09T13:20:19Z)
- Implicit Stochastic Gradient Descent for Training Physics-informed Neural Networks [51.92362217307946]
Physics-informed neural networks (PINNs) have been demonstrated to be effective in solving forward and inverse differential equation problems.
However, PINNs can become trapped in training failures when the target functions to be approximated exhibit high-frequency or multi-scale features.
In this paper, we propose to employ the implicit stochastic gradient descent (ISGD) method to train PINNs, improving the stability of the training process.
arXiv Detail & Related papers (2023-03-03T08:17:47Z)
- Approximate Bayesian Neural Operators: Uncertainty Quantification for Parametric PDEs [34.179984253109346]
We provide a mathematically detailed Bayesian formulation of the "shallow" (linear) version of neural operators.
We then extend this analytic treatment to general deep neural operators using approximate methods from Bayesian deep learning.
As a result, our approach is able to identify, and provide structured uncertainty estimates for, cases where the neural operator fails to predict well.
arXiv Detail & Related papers (2022-08-02T16:10:27Z)
- Learning Physics-Informed Neural Networks without Stacked Back-propagation [82.26566759276105]
We develop a novel approach that can significantly accelerate the training of Physics-Informed Neural Networks.
In particular, we parameterize the PDE solution by a Gaussian-smoothed model and show that, via Stein's identity, the second-order derivatives can be calculated efficiently without back-propagation (see the hedged sketch after this list).
Experimental results show that our proposed method can achieve competitive error compared to standard PINN training but is two orders of magnitude faster.
arXiv Detail & Related papers (2022-02-18T18:07:54Z)
- Physics-Informed Neural Operator for Learning Partial Differential Equations [55.406540167010014]
PINO is the first hybrid approach incorporating data and PDE constraints at different resolutions to learn the operator.
The resulting PINO model can accurately approximate the ground-truth solution operator for many popular PDE families.
arXiv Detail & Related papers (2021-11-06T03:41:34Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
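The Stein's identity trick in the "Learning Physics-Informed Neural Networks without Stacked Back-propagation" entry above admits a short illustration. The sketch below is a self-contained toy under stated assumptions, not that paper's code: for the Gaussian-smoothed function f_sigma(x) = E[f(x + sigma*eps)], eps ~ N(0, I), both the gradient and the Hessian can be estimated from forward evaluations of f alone.

```python
import numpy as np

# Hedged sketch of derivative estimation via Stein's identity: for the
# Gaussian smoothing f_sigma(x) = E[f(x + sigma*eps)], eps ~ N(0, I),
#   grad f_sigma(x) = E[(f(x + sigma*eps) - f(x)) eps] / sigma
#   hess f_sigma(x) = E[(f(x + sigma*eps) - f(x)) (eps eps^T - I)] / sigma^2
# so no back-propagation through f is needed. The test function and all
# names here are illustrative, not from the paper.

def stein_grad_hess(f, x, sigma=0.1, n_samples=200_000, seed=0):
    """Monte Carlo estimates of the gradient and Hessian of f_sigma at x."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    eps = rng.standard_normal((n_samples, d))
    # Subtracting f(x) is a control variate: it leaves both expectations
    # unchanged (E[eps] = 0 and E[eps eps^T - I] = 0) but cuts the variance.
    vals = np.array([f(x + sigma * e) for e in eps]) - f(x)
    grad = (vals[:, None] * eps).mean(axis=0) / sigma
    outer = np.einsum("ni,nj->nij", eps, eps) - np.eye(d)
    hess = (vals[:, None, None] * outer).mean(axis=0) / sigma**2
    return grad, hess

# Toy check on f(x) = x0^2 + sin(x1) at x = (1, 0.5): the estimates should
# be close to grad = (2, cos 0.5) and hess = diag(2, -sin 0.5), up to
# Monte Carlo noise and an O(sigma^2) smoothing bias.
f = lambda x: x[0] ** 2 + np.sin(x[1])
g, H = stein_grad_hess(f, np.array([1.0, 0.5]))
print(np.round(g, 2))
print(np.round(H, 2))
```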