Improving Uncertainty Quantification of Deep Classifiers via
Neighborhood Conformal Prediction: Novel Algorithm and Theoretical Analysis
- URL: http://arxiv.org/abs/2303.10694v1
- Date: Sun, 19 Mar 2023 15:56:50 GMT
- Title: Improving Uncertainty Quantification of Deep Classifiers via
Neighborhood Conformal Prediction: Novel Algorithm and Theoretical Analysis
- Authors: Subhankar Ghosh, Taha Belkhouja, Yan Yan, Janardhan Rao Doppa
- Abstract summary: Conformal prediction (CP) is a principled framework for uncertainty quantification of deep models.
This paper proposes a novel algorithm referred to as Neighborhood Conformal Prediction (NCP) to improve the efficiency of uncertainty quantification.
We show that NCP leads to significant reduction in prediction set size over prior CP methods.
- Score: 30.0231328500976
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Safe deployment of deep neural networks in high-stake real-world applications
requires theoretically sound uncertainty quantification. Conformal prediction
(CP) is a principled framework for uncertainty quantification of deep models in
the form of prediction set for classification tasks with a user-specified
coverage (i.e., true class label is contained with high probability). This
paper proposes a novel algorithm referred to as Neighborhood Conformal
Prediction (NCP) to improve the efficiency of uncertainty quantification from
CP for deep classifiers (i.e., reduce prediction set size). The key idea behind
NCP is to use the learned representation of the neural network to identify k
nearest-neighbors calibration examples for a given testing input and assign
them importance weights proportional to their distance to create adaptive
prediction sets. We theoretically show that if the learned data representation
of the neural network satisfies some mild conditions, NCP will produce smaller
prediction sets than traditional CP algorithms. Our comprehensive experiments
on CIFAR-10, CIFAR-100, and ImageNet datasets using diverse deep neural
networks strongly demonstrate that NCP leads to significant reduction in
prediction set size over prior CP methods.
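The key idea above can be sketched as a weighted variant of split conformal prediction. The sketch below is illustrative only: the inverse-distance weighting, the 1-minus-softmax nonconformity score, and the function name are assumptions, not the paper's exact algorithm.

```python
import numpy as np

def ncp_prediction_set(test_feat, test_probs, calib_feats, calib_scores,
                       alpha=0.1, k=10):
    """Sketch of a neighborhood-weighted conformal prediction set.

    calib_scores holds nonconformity scores of calibration examples,
    e.g. 1 - softmax probability of the true class.
    """
    # Find the k nearest calibration examples in the learned feature space.
    dists = np.linalg.norm(calib_feats - test_feat, axis=1)
    nn = np.argsort(dists)[:k]
    # Inverse-distance importance weights (an illustrative choice).
    w = 1.0 / (dists[nn] + 1e-8)
    w = w / w.sum()
    # Weighted (1 - alpha) quantile of the neighbors' scores.
    order = np.argsort(calib_scores[nn])
    cum = np.cumsum(w[order])
    idx = min(np.searchsorted(cum, 1 - alpha), len(nn) - 1)
    tau = calib_scores[nn][order][idx]
    # Include every label whose nonconformity score is within the threshold.
    return [y for y, p in enumerate(test_probs) if 1 - p <= tau]
```

Because the quantile threshold tau adapts to the local neighborhood of each test input, easy regions of feature space yield smaller prediction sets than a single global threshold would.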
Related papers
- On Temperature Scaling and Conformal Prediction of Deep Classifiers [9.975341265604577]
Two popular approaches for that aim are: 1) Temperature Scaling (TS), which modifies the classifier's softmax values so that the maximal value better estimates the correctness probability; and 2) Conformal Prediction (CP), which produces a prediction set of candidate labels that contains the true label with a user-specified probability.
In practice, both types of indication are desirable, yet the interplay between them has so far not been investigated.
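Temperature scaling itself is a one-parameter transform of the logits. A minimal sketch (the function name is hypothetical; T is typically fit on held-out data by minimizing negative log-likelihood):

```python
import numpy as np

def temperature_scale(logits, T):
    """Softmax of logits divided by temperature T.

    T > 1 softens the distribution (a less confident maximum);
    T < 1 sharpens it. T = 1 recovers the plain softmax.
    """
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)
```

Since dividing logits by T > 1 shrinks their gaps, the maximal softmax value decreases, which is how TS corrects overconfident classifiers.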
arXiv Detail & Related papers (2024-02-08T16:45:12Z) - Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Conventional wisdom holds that neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - Probabilistically robust conformal prediction [9.401004747930974]
Conformal prediction (CP) is a framework to quantify uncertainty of machine learning classifiers including deep neural networks.
Almost all existing work on CP assumes clean testing data, and little is known about the robustness of CP algorithms.
This paper studies the problem of probabilistically robust conformal prediction (PRCP) which ensures robustness to most perturbations.
arXiv Detail & Related papers (2023-07-31T01:32:06Z) - Density Regression and Uncertainty Quantification with Bayesian Deep
Noise Neural Networks [4.376565880192482]
Deep neural network (DNN) models have achieved state-of-the-art predictive accuracy in a wide range of supervised learning applications.
However, accurately quantifying the uncertainty in DNN predictions remains a challenging task.
We propose the Bayesian Deep Noise Neural Network (B-DeepNoise), which generalizes standard Bayesian DNNs by extending the random noise variable to all hidden layers.
We evaluate B-DeepNoise against existing methods on benchmark regression datasets, demonstrating its superior performance in terms of prediction accuracy, uncertainty quantification accuracy, and uncertainty quantification efficiency.
arXiv Detail & Related papers (2022-06-12T02:47:29Z) - Robust Learning of Parsimonious Deep Neural Networks [0.0]
We propose a simultaneous learning and pruning algorithm capable of identifying and eliminating irrelevant structures in a neural network.
We derive a novel hyper-prior distribution over the prior parameters that is crucial for their optimal selection.
We evaluate the proposed algorithm on the MNIST data set and commonly used fully connected and convolutional LeNet architectures.
arXiv Detail & Related papers (2022-05-10T03:38:55Z) - Scalable computation of prediction intervals for neural networks via
matrix sketching [79.44177623781043]
Existing algorithms for uncertainty estimation require modifying the model architecture and training procedure.
This work proposes a new algorithm that can be applied to a given trained neural network and produces approximate prediction intervals.
arXiv Detail & Related papers (2022-05-06T13:18:31Z) - A Simple Approach to Improve Single-Model Deep Uncertainty via
Distance-Awareness [33.09831377640498]
We study approaches to improve the uncertainty properties of a single network, based on a single, deterministic representation.
We propose Spectral-normalized Neural Gaussian Process (SNGP), a simple method that improves the distance-awareness ability of modern DNNs.
On a suite of vision and language understanding benchmarks, SNGP outperforms other single-model approaches in prediction, calibration and out-of-domain detection.
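The "spectral-normalized" part of SNGP rescales each weight matrix by its largest singular value so the layer is distance-preserving. A NumPy sketch of that normalization via power iteration (an illustrative implementation, not the paper's code, which applies this during training):

```python
import numpy as np

def spectral_normalize(W, n_iters=50):
    """Estimate the largest singular value of W by power iteration,
    then rescale W so its spectral norm is at most 1."""
    u = np.random.default_rng(0).normal(size=W.shape[0])
    for _ in range(n_iters):
        v = W.T @ u
        v = v / np.linalg.norm(v)
        u = W @ v
        u = u / np.linalg.norm(u)
    sigma = u @ W @ v  # converged estimate of the top singular value
    return W / max(sigma, 1.0)  # leave already-contractive weights unchanged
```

Bounding every layer's spectral norm bounds the network's Lipschitz constant, so distances in input space are approximately preserved in the representation, which is the "distance-awareness" the summary refers to.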
arXiv Detail & Related papers (2022-05-01T05:46:13Z) - NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural
Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
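The Nadaraya-Watson estimate is a kernel-weighted average of training labels. A minimal sketch with a Gaussian kernel (the kernel choice, bandwidth, and function name are illustrative assumptions):

```python
import numpy as np

def nadaraya_watson(x, X_train, y_onehot, bandwidth=1.0):
    """Nadaraya-Watson estimate of the conditional label distribution
    p(y | x): a kernel-weighted average of one-hot training labels."""
    d2 = ((X_train - x) ** 2).sum(axis=1)          # squared distances to x
    w = np.exp(-d2 / (2 * bandwidth ** 2))          # Gaussian kernel weights
    w = w / w.sum()                                 # normalize to sum to 1
    return w @ y_onehot                             # probability vector over labels
```

When the total kernel mass around x is small (x is far from all training points), the estimate is poorly supported, which is what makes it a natural basis for uncertainty quantification.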
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z) - Predicting Deep Neural Network Generalization with Perturbation Response
Curves [58.8755389068888]
We propose a new framework for evaluating the generalization capabilities of trained networks.
Specifically, we introduce two new measures for accurately predicting generalization gaps.
We attain better predictive scores than the current state-of-the-art measures on a majority of tasks in the Predicting Generalization in Deep Learning (PGDL) NeurIPS 2020 competition.
arXiv Detail & Related papers (2021-06-09T01:37:36Z) - Amortized Conditional Normalized Maximum Likelihood: Reliable Out of
Distribution Uncertainty Estimation [99.92568326314667]
We propose the amortized conditional normalized maximum likelihood (ACNML) method as a scalable general-purpose approach for uncertainty estimation.
Our algorithm builds on the conditional normalized maximum likelihood (CNML) coding scheme, which has minimax optimal properties according to the minimum description length principle.
We demonstrate that ACNML compares favorably to a number of prior techniques for uncertainty estimation in terms of calibration on out-of-distribution inputs.
arXiv Detail & Related papers (2020-11-05T08:04:34Z) - MSE-Optimal Neural Network Initialization via Layer Fusion [68.72356718879428]
Deep neural networks achieve state-of-the-art performance for a range of classification and inference tasks.
However, the use of gradient descent combined with the nonconvexity of the underlying optimization problem renders learning susceptible to initialization.
We propose fusing neighboring layers of deeper networks that are initialized with random variables.
arXiv Detail & Related papers (2020-01-28T18:25:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.