Residual-Based Error Corrector Operator to Enhance Accuracy and
Reliability of Neural Operator Surrogates of Nonlinear Variational
Boundary-Value Problems
- URL: http://arxiv.org/abs/2306.12047v3
- Date: Thu, 16 Nov 2023 01:35:24 GMT
- Authors: Prashant K. Jha
- Abstract summary: This work focuses on developing methods for approximating the solution operators of a class of parametric partial differential equations via neural operators.
The unpredictability of the accuracy of neural operators impacts their applications in downstream problems of inference, optimization, and control.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work focuses on developing methods for approximating the solution
operators of a class of parametric partial differential equations via neural
operators. Neural operators face several challenges, including generating
appropriate training data, cost-accuracy trade-offs, and nontrivial
hyperparameter tuning. The unpredictability of the accuracy of neural operators
impacts their applications in downstream problems of inference, optimization,
and control. Building on earlier work in JCP 486 (2023) 112104, a framework is
considered in which a linear variational problem furnishes a correction to the
prediction produced by a neural operator. The operator associated with this
corrector problem, called the Residual-based Error Corrector Operator or simply
the Corrector Operator, is analyzed further. Numerical results involving a
nonlinear reaction-diffusion model in two dimensions with PCANet-type neural
operators show almost two orders of magnitude increase in the accuracy of
approximations when neural operators are corrected using the correction scheme.
Further, topology
optimization involving a nonlinear reaction-diffusion model is considered to
highlight the limitations of neural operators and the efficacy of the
correction scheme. Optimizers with neural operator surrogates are seen to make
significant errors (as high as 80 percent). However, the errors are much lower
(below 7 percent) when neural operators are corrected.
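The corrector can be read as a single Newton-type step on the discrete residual, linearized about the neural operator prediction. Below is a minimal, hypothetical sketch of that idea in Python: a 1D finite-difference reaction-diffusion problem (-u'' + u^3 = f with homogeneous Dirichlet conditions) stands in for the paper's 2D finite-element, weak-form setting, and the surrogate prediction u0 is a hand-made placeholder rather than an actual PCANet output.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def residual(u, f, h):
    """Discrete residual R(u) of -u'' + u^3 = f on a uniform grid (Dirichlet rows set to u)."""
    r = u**3 - f
    r[1:-1] -= (u[:-2] - 2.0 * u[1:-1] + u[2:]) / h**2   # -u'' via central differences
    r[0], r[-1] = u[0], u[-1]                            # homogeneous Dirichlet conditions
    return r

def jacobian(u, h):
    """Linearization of R about u: tridiagonal -d^2/dx^2 + 3 u^2, with identity Dirichlet rows."""
    n = u.size
    main = 2.0 / h**2 + 3.0 * u**2
    off = np.full(n - 1, -1.0 / h**2)
    J = sp.diags([off, main, off], offsets=[-1, 0, 1], format="lil")
    J[0, :] = 0.0;  J[0, 0] = 1.0      # Dirichlet row at x = 0
    J[-1, :] = 0.0; J[-1, -1] = 1.0    # Dirichlet row at x = 1
    return J.tocsr()

def corrector(u0, f, h):
    """Residual-based correction: solve J(u0) du = -R(u0) and return u0 + du."""
    du = spla.spsolve(jacobian(u0, h), -residual(u0, f, h))
    return u0 + du

# Usage with a stand-in surrogate prediction (a real neural operator would supply u0).
n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.sin(np.pi * x)
u0 = 0.09 * np.sin(np.pi * x)          # slightly-off placeholder for the surrogate output
u1 = corrector(u0, f, h)
print("residual before correction:", np.linalg.norm(residual(u0, f, h)))
print("residual after correction: ", np.linalg.norm(residual(u1, f, h)))
```

In this sketch the correction amounts to one sparse linear solve per surrogate prediction, which is the sense in which the corrector problem is linear even though the underlying PDE is nonlinear.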
Related papers
- Error Feedback under $(L_0,L_1)$-Smoothness: Normalization and Momentum [56.37522020675243]
We provide the first proof of convergence for normalized error feedback algorithms across a wide range of machine learning problems.
We show that due to their larger allowable stepsizes, our new normalized error feedback algorithms outperform their non-normalized counterparts on various tasks.
arXiv Detail & Related papers (2024-10-22T10:19:27Z)
- Towards Gaussian Process for operator learning: an uncertainty aware resolution independent operator learning algorithm for computational mechanics [8.528817025440746]
This paper introduces a novel Gaussian Process (GP) based neural operator for solving parametric differential equations.
We propose a "neural operator-embedded kernel" wherein the GP kernel is formulated in the latent space learned using a neural operator.
Our results highlight the efficacy of this framework in solving complex PDEs while maintaining robustness in uncertainty estimation.
arXiv Detail & Related papers (2024-09-17T08:12:38Z)
- Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations.
arXiv Detail & Related papers (2024-06-07T16:43:54Z)
- Improved Operator Learning by Orthogonal Attention [17.394770071994145]
We develop an attention based on the eigendecomposition of the kernel integral operator and the neural approximation of eigenfunctions.
Our method can outperform competing baselines with decent margins.
arXiv Detail & Related papers (2023-10-19T05:47:28Z)
- Guaranteed Approximation Bounds for Mixed-Precision Neural Operators [83.64404557466528]
We build on the intuition that neural operator learning inherently induces an approximation error.
We show that our approach reduces GPU memory usage by up to 50% and improves throughput by 58% with little or no reduction in accuracy.
arXiv Detail & Related papers (2023-07-27T17:42:06Z)
- Residual-based error correction for neural operator accelerated infinite-dimensional Bayesian inverse problems [3.2548794659022393]
We explore using neural operators, or neural network representations of nonlinear maps between function spaces, to accelerate infinite-dimensional Bayesian inverse problems.
We show that a trained neural operator with error correction can achieve a quadratic reduction of its approximation error.
We demonstrate that posterior representations of two BIPs produced using trained neural operators are greatly and consistently enhanced by error correction.
arXiv Detail & Related papers (2022-10-06T15:57:22Z)
- Neural Operator: Learning Maps Between Function Spaces [75.93843876663128]
We propose a generalization of neural networks to learn operators, termed neural operators, that map between infinite dimensional function spaces.
We prove a universal approximation theorem for our proposed neural operator, showing that it can approximate any given nonlinear continuous operator.
An important application for neural operators is learning surrogate maps for the solution operators of partial differential equations.
arXiv Detail & Related papers (2021-08-19T03:56:49Z)
- Relaxing the Constraints on Predictive Coding Models [62.997667081978825]
Predictive coding is an influential theory of cortical function which posits that the principal computation the brain performs is the minimization of prediction errors.
Standard implementations of the algorithm still involve potentially neurally implausible features such as identical forward and backward weights, backward nonlinear derivatives, and 1-1 error unit connectivity.
In this paper, we show that these features are not integral to the algorithm and can be removed either directly or through learning additional sets of parameters with Hebbian update rules without noticeable harm to learning performance.
arXiv Detail & Related papers (2020-10-02T15:21:37Z)
- Neural Control Variates [71.42768823631918]
We show that a set of neural networks can address the challenge of finding a good approximation of the integrand.
We derive a theoretically optimal, variance-minimizing loss function, and propose an alternative, composite loss for stable online training in practice.
Specifically, we show that the learned light-field approximation is of sufficient quality for high-order bounces, allowing us to omit the error correction and thereby dramatically reduce the noise at the cost of negligible visible bias.
arXiv Detail & Related papers (2020-06-02T11:17:55Z)
- Understanding and Mitigating the Tradeoff Between Robustness and Accuracy [88.51943635427709]
Adversarial training augments the training set with perturbations to reduce the robust error.
We show that the standard error could increase even when the augmented perturbations have noiseless observations from the optimal linear predictor.
arXiv Detail & Related papers (2020-02-25T08:03:01Z)