A Kernel Framework to Quantify a Model's Local Predictive Uncertainty under Data Distributional Shifts
- URL: http://arxiv.org/abs/2103.01374v1
- Date: Tue, 2 Mar 2021 00:31:53 GMT
- Title: A Kernel Framework to Quantify a Model's Local Predictive Uncertainty under Data Distributional Shifts
- Authors: Rishabh Singh and Jose C. Principe
- Abstract summary: Internal layer outputs of a trained neural network contain all of the information related to both its mapping function and its input data distribution.
We propose a framework for predictive uncertainty quantification of a trained neural network that explicitly estimates the PDF of its raw prediction space.
The kernel framework is observed to provide model uncertainty estimates with much greater precision, as evidenced by its ability to detect model prediction errors.
- Score: 21.591460685054546
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traditional Bayesian approaches for model uncertainty quantification rely on
notoriously difficult processes of marginalization over each network parameter
to estimate its probability density function (PDF). Our hypothesis is that
internal layer outputs of a trained neural network contain all of the
information related to both its mapping function (quantified by its weights)
and its input data distribution. We therefore propose a framework for
predictive uncertainty quantification of a trained neural network that
explicitly estimates the PDF of its raw prediction space (before activation),
p(y'|x,w), which we refer to as the model PDF, in a Gaussian reproducing kernel
Hilbert space (RKHS). The Gaussian RKHS provides a localized density estimate
of p(y'|x,w), which further enables us to utilize gradient-based formulations
of quantum physics to decompose the model PDF in terms of multiple local
uncertainty moments that provide much greater resolution of the PDF than the
central moments characterized by Bayesian methods. This provides the framework
with a better ability to detect distributional shifts in test data away from
the training data PDF learned by the model. We evaluate the framework against
existing uncertainty quantification methods on benchmark datasets that have
been corrupted using common perturbation techniques. The kernel framework is
observed to provide model uncertainty estimates with much greater precision,
as evidenced by its ability to detect model prediction errors.
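To make the construction concrete, below is a minimal sketch of the core quantity, assuming a one-dimensional pre-activation output space and an isotropic Gaussian kernel. It computes only the lowest-order Schrödinger-type potential of the kernel density estimate; the paper further decomposes this into multiple local uncertainty moments. The function names and the bandwidth are illustrative, not the authors' exact implementation.

```python
import numpy as np

def gaussian_kernel(y, centers, sigma):
    """Gaussian kernel between a query output y and stored training outputs."""
    return np.exp(-((y - centers) ** 2) / (2.0 * sigma ** 2))

def kernel_uncertainty(y, train_outputs, sigma=0.5):
    """Sketch: local uncertainty of a model output y from its kernel density.

    psi(y), the kernel mean embedding of the stored pre-activation outputs,
    is treated as a wave function; the Schrodinger-type potential
    V(y) = (sigma^2 / 2) * psi''(y) / psi(y) grows in regions unsupported
    by the training density, flagging a distributional shift.
    """
    k = gaussian_kernel(y, train_outputs, sigma)
    psi = k.sum() / np.sqrt(len(train_outputs))
    # Analytic second derivative of the Gaussian mixture with respect to y.
    d2k = (((y - train_outputs) ** 2 - sigma ** 2) / sigma ** 4) * k
    lap_psi = d2k.sum() / np.sqrt(len(train_outputs))
    return 0.5 * sigma ** 2 * lap_psi / (psi + 1e-12)

rng = np.random.default_rng(0)
train_outputs = rng.normal(0.0, 1.0, 500)   # stand-in for stored model outputs
print(kernel_uncertainty(0.1, train_outputs))  # in-distribution: small magnitude
print(kernel_uncertainty(6.0, train_outputs))  # far from training data: large
```

Outputs falling in regions that the training density does not support receive large potential values, which is the mechanism behind the distributional-shift detection claimed above.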
Related papers
- Inflationary Flows: Calibrated Bayesian Inference with Diffusion-Based Models [0.0]
We show how diffusion-based models can be repurposed for performing principled, identifiable Bayesian inference.
We show how such maps can be learned via standard DBM training using a novel noise schedule.
The result is a class of highly expressive generative models, uniquely defined on a low-dimensional latent space.
arXiv Detail & Related papers (2024-07-11T19:58:19Z)
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We present a principled way to measure the uncertainty of predictions for a classifier based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution (see the sketch after this list).
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Quantifying Model Predictive Uncertainty with Perturbation Theory [21.591460685054546]
We propose a framework for predictive uncertainty quantification of a neural network.
We use perturbation theory from quantum physics to formulate a moment decomposition problem.
Our approach provides fast model predictive uncertainty estimates with much greater precision and calibration.
arXiv Detail & Related papers (2021-09-22T17:55:09Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Uncertainty Estimation Using a Single Deep Deterministic Neural Network [66.26231423824089]
We propose a method for training a deterministic deep model that can find and reject out-of-distribution data points at test time with a single forward pass.
We scale training with a novel loss function and centroid-updating scheme, and match the accuracy of softmax models.
arXiv Detail & Related papers (2020-03-04T12:27:36Z)
- Towards a Kernel based Uncertainty Decomposition Framework for Data and Models [20.348825818435767]
This paper introduces a new framework for quantifying predictive uncertainty for both data and models.
We apply this framework as a surrogate tool for predictive uncertainty quantification of point-prediction neural network models.
arXiv Detail & Related papers (2020-01-30T18:35:36Z)
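As a concrete illustration of the Nadaraya-Watson estimate referenced in the NUQ entry above, here is a minimal sketch. The Gaussian kernel, the bandwidth, the entropy-based uncertainty score, and all names are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def nadaraya_watson_probs(x, train_x, train_y, n_classes, h=1.0):
    """Sketch: kernel-weighted estimate of the conditional label distribution.

    p(y = c | x) is approximated by the kernel-weighted fraction of training
    points of class c near the query input x.
    """
    # Gaussian kernel weights between the query and every training input.
    w = np.exp(-np.sum((train_x - x) ** 2, axis=1) / (2.0 * h ** 2))
    counts = np.array([w[train_y == c].sum() for c in range(n_classes)])
    return counts / max(counts.sum(), 1e-12)

def predictive_entropy(probs):
    """A flat (high-entropy) conditional estimate signals high uncertainty."""
    return -np.sum(probs * np.log(probs + 1e-12))
```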