Randomized prior wavelet neural operator for uncertainty quantification
- URL: http://arxiv.org/abs/2302.01051v1
- Date: Thu, 2 Feb 2023 12:28:49 GMT
- Title: Randomized prior wavelet neural operator for uncertainty quantification
- Authors: Shailesh Garg and Souvik Chakraborty
- Abstract summary: We propose a novel data-driven operator learning framework referred to as the Randomized Prior Wavelet Neural Operator (RP-WNO).
The proposed RP-WNO is an extension of the recently proposed wavelet neural operator, which boasts excellent generalizing capabilities but cannot estimate the uncertainty associated with its predictions.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In this paper, we propose a novel data-driven operator learning framework
referred to as the Randomized Prior Wavelet Neural Operator (RP-WNO).
The proposed RP-WNO is an extension of the recently proposed wavelet neural
operator, which boasts excellent generalization capabilities but cannot estimate
the uncertainty associated with its predictions. Unlike the vanilla WNO,
RP-WNO comes with an inherent uncertainty quantification module and is therefore
expected to be extremely useful for scientists and engineers alike. RP-WNO
utilizes randomized prior networks, which can account for prior information and
are easier to implement for large, complex deep-learning architectures than
their Bayesian counterparts. Four examples have been solved to test the proposed
framework, and the results support the efficacy of the proposed framework.
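The randomized prior construction (due to Osband et al., 2018) is architecture-agnostic: each ensemble member is the sum of a trainable network and a frozen, randomly initialized "prior" network, and the spread of the trained ensemble serves as the uncertainty estimate. A minimal sketch with generic fully connected networks standing in for the WNO (all names and hyperparameters here are illustrative, not from the paper):

```python
import torch
import torch.nn as nn

def make_net(width=64):
    return nn.Sequential(nn.Linear(1, width), nn.Tanh(),
                         nn.Linear(width, width), nn.Tanh(),
                         nn.Linear(width, 1))

class RandomizedPriorMember(nn.Module):
    """Trainable network plus a frozen, randomly initialized prior network."""
    def __init__(self, beta=1.0):
        super().__init__()
        self.trainable = make_net()
        self.prior = make_net()
        for p in self.prior.parameters():   # the prior is never updated
            p.requires_grad_(False)
        self.beta = beta                    # prior scale (hyperparameter)

    def forward(self, x):
        return self.trainable(x) + self.beta * self.prior(x)

# Train an ensemble; the spread of member predictions is the UQ estimate.
torch.manual_seed(0)
x = torch.linspace(-1, 1, 128).unsqueeze(-1)
y = torch.sin(3 * x) + 0.05 * torch.randn_like(x)

ensemble = [RandomizedPriorMember(beta=1.0) for _ in range(5)]
for member in ensemble:
    opt = torch.optim.Adam(member.trainable.parameters(), lr=1e-2)
    for _ in range(500):
        opt.zero_grad()
        loss = ((member(x) - y) ** 2).mean()
        loss.backward()
        opt.step()

preds = torch.stack([m(x).detach() for m in ensemble])  # (members, N, 1)
mean, std = preds.mean(0), preds.std(0)  # predictive mean and uncertainty
```

Because each frozen prior is a different random function, members disagree away from the training data, which is where the uncertainty estimate should grow.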
Related papers
- Distribution free uncertainty quantification in neuroscience-inspired deep operators [1.8416014644193066]
Energy-efficient deep learning algorithms are essential for a sustainable future and feasible edge computing setups.
In this paper, we introduce the Conformalized Randomized Prior Operator (CRP-O) framework to quantify uncertainty in both conventional and spiking neural operators.
We show that the conformalized RP-VSWNO significantly enhances UQ estimates compared to vanilla RP-VSWNO, Quantile WNO (Q-WNO), and Conformalized Quantile WNO (CQ-WNO).
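Conformalization here refers to split conformal prediction layered on top of the operator's heuristic uncertainty estimates; a minimal sketch of that calibration step, assuming the model already produces a predictive mean and standard deviation (function and variable names are ours, not from the paper):

```python
import numpy as np

def conformalize(mu_cal, sigma_cal, y_cal, alpha=0.1):
    """Split-conformal calibration of heuristic mean/std intervals.

    mu_cal, sigma_cal: model predictions on a held-out calibration set.
    Returns a scale q so that mu +/- q*sigma attains ~(1 - alpha) coverage.
    """
    scores = np.abs(y_cal - mu_cal) / sigma_cal       # nonconformity scores
    n = len(scores)
    k = int(np.ceil((n + 1) * (1 - alpha)))           # conformal quantile rank
    return np.sort(scores)[min(k, n) - 1]

# Usage: q = conformalize(mu_cal, sigma_cal, y_cal)
#        band = (mu_test - q * sigma_test, mu_test + q * sigma_test)
```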
arXiv Detail & Related papers (2024-12-12T15:37:02Z)
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
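For reference, a minimal classical AMP iteration for compressed sensing with a soft-threshold (sparsity) denoiser; an unrolled network replaces the fixed denoiser with learned layers (this sketch is standard AMP, not the paper's trained network):

```python
import numpy as np

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def amp(A, y, lam=0.1, iters=30):
    """AMP for y = A x with a soft-threshold (sparsity-promoting) denoiser."""
    m, n = A.shape
    x, z = np.zeros(n), y.copy()
    for _ in range(iters):
        r = A.T @ z + x                       # pseudo-data fed to the denoiser
        x_new = soft_threshold(r, lam)
        onsager = (z / m) * np.sum(np.abs(x_new) > 0)  # Onsager correction term
        z = y - A @ x_new + onsager
        x = x_new
    return x
```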
arXiv Detail & Related papers (2024-09-19T17:56:16Z)
- Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce LUNO, a novel framework for approximate Bayesian uncertainty quantification in trained neural operators.
Our approach leverages model linearization to push (Gaussian) weight-space uncertainty forward to the neural operator's predictions.
We show that this can be interpreted as a probabilistic version of the concept of currying from functional programming, yielding a function-valued (Gaussian) random process belief.
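The linearization step itself is standard: expand the network to first order in its weights, so a Gaussian weight belief N(theta, Sigma) pushes forward to a Gaussian over outputs with covariance J Sigma J^T. A minimal sketch of that pushforward (not LUNO's actual API; `f` is assumed to map a flat weight vector and an input batch to outputs):

```python
import torch
from torch.autograd.functional import jacobian

def push_forward(f, theta, Sigma, x):
    """Linearize f(w, x) around trained weights theta and push the Gaussian
    weight belief N(theta, Sigma) to the output:
    f(w, x) ~ N(f(theta, x), J Sigma J^T), with J = df/dw at theta."""
    J = jacobian(lambda w: f(w, x), theta)    # shape (out_dim, n_weights)
    mean = f(theta, x)
    cov = J @ Sigma @ J.T
    return mean, cov

# Toy check with a linear model f(w, x) = x @ w
x = torch.randn(3, 2)
theta = torch.randn(2)
Sigma = 0.1 * torch.eye(2)
mean, cov = push_forward(lambda w, xx: xx @ w, theta, Sigma, x)
```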
arXiv Detail & Related papers (2024-06-07T16:43:54Z)
- Learning Expressive Priors for Generalization and Uncertainty Estimation in Neural Networks [77.89179552509887]
We propose a novel prior learning method for advancing generalization and uncertainty estimation in deep neural networks.
The key idea is to exploit scalable and structured posteriors of neural networks as informative priors with generalization guarantees.
We exhaustively show the effectiveness of this method for uncertainty estimation and generalization.
arXiv Detail & Related papers (2023-07-15T09:24:33Z)
- The Cascaded Forward Algorithm for Neural Network Training [61.06444586991505]
We propose a new learning framework for neural networks, the Cascaded Forward (CaFo) algorithm, which, like the Forward-Forward (FF) algorithm, does not rely on backpropagation (BP).
Unlike FF, our framework directly outputs label distributions at each cascaded block, which does not require generation of additional negative samples.
In our framework, each block can be trained independently, so it can be easily deployed on parallel acceleration systems.
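The independence follows from each block having its own local prediction head and loss, with no gradients crossing block boundaries; a minimal sketch of one such locally trained step, assuming a classification setting (the architecture and names are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

d, n_classes = 32, 10
blocks = nn.ModuleList([nn.Sequential(nn.Linear(d, d), nn.ReLU()) for _ in range(3)])
heads = nn.ModuleList([nn.Linear(d, n_classes) for _ in range(3)])
opts = [torch.optim.Adam(list(b.parameters()) + list(h.parameters()), lr=1e-3)
        for b, h in zip(blocks, heads)]
criterion = nn.CrossEntropyLoss()

x = torch.randn(16, d)
y = torch.randint(0, n_classes, (16,))

h = x
for block, head, opt in zip(blocks, heads, opts):
    h = block(h.detach())          # detach: no gradients flow to earlier blocks
    loss = criterion(head(h), y)   # each block predicts its own label distribution
    opt.zero_grad()
    loss.backward()
    opt.step()
```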
arXiv Detail & Related papers (2023-03-17T02:01:11Z)
- Variational Bayes Deep Operator Network: A data-driven Bayesian solver for parametric differential equations [0.0]
We propose Variational Bayes DeepONet (VB-DeepONet) for operator learning.
VB-DeepONet uses variational inference to account for high-dimensional posterior distributions.
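Variational inference here means replacing point-estimate weights with a mean-field Gaussian trained via the reparameterization trick, with a KL penalty to the prior added to the loss. A minimal sketch of one such variational layer (generic, not VB-DeepONet's actual code):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalLinear(nn.Module):
    """Linear layer with mean-field Gaussian weights, sampled via the
    reparameterization trick; add its kl() to the training loss."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mu = nn.Parameter(0.01 * torch.randn(d_out, d_in))
        self.rho = nn.Parameter(torch.full((d_out, d_in), -3.0))

    def forward(self, x):
        sigma = F.softplus(self.rho)
        w = self.mu + sigma * torch.randn_like(sigma)   # w ~ N(mu, sigma^2)
        return x @ w.T

    def kl(self):
        # KL(N(mu, sigma^2) || N(0, 1)), summed over all weights
        sigma = F.softplus(self.rho)
        return (0.5 * (sigma**2 + self.mu**2 - 1) - torch.log(sigma)).sum()
```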
arXiv Detail & Related papers (2022-06-12T04:20:11Z)
- Non-Clairvoyant Scheduling with Predictions Revisited [77.86290991564829]
In non-clairvoyant scheduling, the task is to find an online strategy for scheduling jobs with a priori unknown processing requirements.
We revisit this well-studied problem in a recently popular learning-augmented setting that integrates (untrusted) predictions in algorithm design.
We show that these predictions have desired properties, admit a natural error measure as well as algorithms with strong performance guarantees.
arXiv Detail & Related papers (2022-02-21T13:18:11Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
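The Nadaraya-Watson estimate is simply a kernel-weighted average of training labels, and the entropy of the resulting distribution serves as an uncertainty score. A minimal sketch with an RBF kernel (bandwidth and names are illustrative, not the paper's implementation):

```python
import numpy as np

def nw_label_distribution(x, X_train, y_train, n_classes, bandwidth=1.0):
    """Nadaraya-Watson estimate of p(y | x) with an RBF kernel; the entropy
    of this distribution is one principled uncertainty measure."""
    d2 = np.sum((X_train - x) ** 2, axis=1)
    w = np.exp(-d2 / (2 * bandwidth ** 2))            # kernel weights
    p = np.array([w[y_train == c].sum() for c in range(n_classes)])
    p = p / max(p.sum(), 1e-12)
    entropy = -np.sum(p * np.log(p + 1e-12))          # predictive uncertainty
    return p, entropy
```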
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Predicting Deep Neural Network Generalization with Perturbation Response Curves [58.8755389068888]
We propose a new framework for evaluating the generalization capabilities of trained networks.
Specifically, we introduce two new measures for accurately predicting generalization gaps.
We attain better predictive scores than the current state-of-the-art measures on a majority of tasks in the Predicting Generalization in Deep Learning (PGDL) NeurIPS 2020 competition.
arXiv Detail & Related papers (2021-06-09T01:37:36Z)
- Bayesian Perceptron: Towards fully Bayesian Neural Networks [5.5510642465908715]
Training and predictions of a perceptron are performed within the Bayesian inference framework in closed form.
The weights and the predictions of the perceptron are considered Gaussian random variables.
This approach requires no computationally expensive gradient calculations and further allows sequential learning.
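For the exactly linear-Gaussian part of such an update, the weight posterior is available in closed form via Kalman-style recursions; a minimal sketch of that sequential step (the paper additionally moment-matches through the nonlinear activation, which this sketch omits):

```python
import numpy as np

def sequential_update(mu, P, x, y, noise_var=0.1):
    """Closed-form sequential Bayesian update for a linear neuron
    y = w . x + noise, with Gaussian weight belief w ~ N(mu, P)."""
    s = x @ P @ x + noise_var        # predictive variance of the observation y
    k = P @ x / s                    # Kalman-style gain
    mu = mu + k * (y - x @ mu)       # posterior mean of the weights
    P = P - np.outer(k, x @ P)       # posterior covariance of the weights
    return mu, P
```

Each call costs one matrix-vector product per observation and no gradient computation, which is what enables the sequential, gradient-free learning described above.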
arXiv Detail & Related papers (2020-09-03T15:08:49Z)
- Estimation with Uncertainty via Conditional Generative Adversarial Networks [3.829070379776576]
We propose a predictive probabilistic neural network model, which corresponds to a different manner of using the generator in a conditional Generative Adversarial Network (cGAN).
By reversing the input and output of an ordinary cGAN, the model can be used as a predictive model.
In addition, to measure the uncertainty of predictions, we introduce the entropy and relative entropy for regression and classification problems.
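Reversing the cGAN means conditioning the generator on the input and sampling its noise vector to obtain a predictive distribution; a minimal sketch of the entropy computation for classification, assuming a `generator(x, z)` callable that returns class probabilities (hypothetical, not the paper's code):

```python
import numpy as np

def predictive_entropy(generator, x, n_samples=100, latent_dim=64):
    """Monte-Carlo predictive distribution from a reversed cGAN: condition on
    the input x, draw many latent noise vectors, and average the outputs."""
    zs = np.random.randn(n_samples, latent_dim)        # latent noise draws
    probs = np.mean([generator(x, z) for z in zs], axis=0)
    return -np.sum(probs * np.log(probs + 1e-12))      # entropy of the average
```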
arXiv Detail & Related papers (2020-07-01T08:54:17Z)