Extracting Usable Predictions from Quantized Networks through
Uncertainty Quantification for OOD Detection
- URL: http://arxiv.org/abs/2403.01076v1
- Date: Sat, 2 Mar 2024 03:03:29 GMT
- Title: Extracting Usable Predictions from Quantized Networks through
Uncertainty Quantification for OOD Detection
- Authors: Rishi Singhal and Srinath Srinivasan
- Abstract summary: OOD detection has become more pertinent with advances in network design and increased task complexity.
We introduce an Uncertainty Quantification (UQ) technique to quantify the uncertainty in the predictions from a pre-trained vision model.
We observe that our technique saves up to 80% of ignored samples from being misclassified.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: OOD detection has become more pertinent with advances in network design and
increased task complexity. Identifying which parts of the data a given network
is misclassifying has become as valuable as the network's overall performance.
We can compress the model with quantization, but this incurs a minor performance
loss. That loss makes it all the more necessary to derive a confidence estimate
for the network's predictions. In line with this thinking,
we introduce an Uncertainty Quantification (UQ) technique to quantify the
uncertainty in the predictions from a pre-trained vision model. We subsequently
leverage this information to extract valuable predictions while ignoring the
non-confident predictions. We observe that our technique saves up to 80% of
ignored samples from being misclassified. The code is available here.
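As a rough illustration of the pipeline the abstract describes (quantize a pre-trained vision model, score each prediction's uncertainty, and act only on the confident ones), here is a minimal sketch in PyTorch. The abstract does not specify the quantization scheme or the UQ technique, so the dynamic INT8 quantization, the softmax-entropy score, the ResNet-18 backbone, and the 0.5 threshold below are all illustrative assumptions, not the paper's method.

```python
import math
import torch
import torch.nn.functional as F
from torchvision.models import resnet18, ResNet18_Weights

# Pre-trained vision model (ResNet-18 is a stand-in for the paper's models),
# compressed with dynamic INT8 quantization of its Linear layers.
model = resnet18(weights=ResNet18_Weights.DEFAULT).eval()
qmodel = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

@torch.no_grad()
def predict_with_uncertainty(x, threshold=0.5):
    """Return predictions plus a mask of the ones confident enough to keep.

    Uncertainty here is the predictive entropy of the softmax output,
    normalised to [0, 1] by dividing by log(num_classes) -- an assumed
    UQ score, not necessarily the paper's.
    """
    probs = F.softmax(qmodel(x), dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    entropy /= math.log(probs.shape[1])   # normalise to [0, 1]
    preds = probs.argmax(dim=1)
    keep = entropy < threshold            # ignore non-confident predictions
    return preds, keep

# Usage: act only on the confident subset; ignore the rest.
x = torch.randn(8, 3, 224, 224)           # stand-in for real images
preds, keep = predict_with_uncertainty(x)
print(preds[keep], int(keep.sum()), "of", len(keep), "kept")
```

Thresholding like this trades coverage for reliability: the model answers fewer inputs, but the rejected set concentrates the would-be errors, which is the trade-off behind the abstract's figure of up to 80% of ignored samples being saved from misclassification.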
Related papers
- Estimating Uncertainty with Implicit Quantile Network [0.0]
Uncertainty quantification is an important part of many performance critical applications.
This paper provides a simple alternative to existing approaches such as ensemble learning and Bayesian neural networks.
arXiv Detail & Related papers (2024-08-26T13:33:14Z)
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions tend to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z)
- ZigZag: Universal Sampling-free Uncertainty Estimation Through Two-Step Inference [54.17205151960878]
We introduce a sampling-free approach that is generic and easy to deploy.
We produce reliable uncertainty estimates on par with state-of-the-art methods at a significantly lower computational cost.
arXiv Detail & Related papers (2022-11-21T13:23:09Z)
- CNN-based Prediction of Network Robustness With Missing Edges [0.9239657838690227]
We investigate the performance of CNN-based approaches for connectivity and controllability prediction when partial network information is missing.
A threshold is identified: if more than 7.29% of the network information is lost, the performance of CNN-based prediction degrades significantly.
arXiv Detail & Related papers (2022-08-25T03:36:20Z)
- Training Uncertainty-Aware Classifiers with Conformalized Deep Learning [7.837881800517111]
Deep neural networks are powerful tools to detect hidden patterns in data and leverage them to make predictions, but they are not designed to understand uncertainty.
We develop a novel training algorithm that can lead to more dependable uncertainty estimates, without sacrificing predictive power.
arXiv Detail & Related papers (2022-05-12T05:08:10Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of a classifier's predictions, based on the Nadaraya-Watson nonparametric estimate of the conditional label distribution.
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Robust uncertainty estimates with out-of-distribution pseudo-inputs training [0.0]
We propose to explicitly train the uncertainty predictor in regions where no data is given, so as to make it reliable there.
As one cannot train without data, we provide mechanisms for generating pseudo-inputs in informative low-density regions of the input space.
With a holistic evaluation, we demonstrate that this yields robust and interpretable predictions of uncertainty while retaining state-of-the-art performance on diverse tasks.
arXiv Detail & Related papers (2022-01-15T17:15:07Z)
- Learning to Predict Trustworthiness with Steep Slope Loss [69.40817968905495]
We study the problem of predicting trustworthiness on real-world large-scale datasets.
We observe that trustworthiness predictors trained with prior-art loss functions are prone to treating both correct and incorrect predictions as trustworthy.
We propose a novel steep slope loss that separates the features of correct predictions from those of incorrect predictions via two slide-like curves that oppose each other.
arXiv Detail & Related papers (2021-09-30T19:19:09Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs; a minimal sketch of this latent-density idea follows the list.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
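To make the last entry's idea concrete, here is a minimal sketch of latent-density OOD scoring, assuming a single Gaussian fitted to penultimate-layer features and a Mahalanobis-distance score. This is a common instantiation of the idea, not necessarily that paper's exact method, and the feature tensors below are hypothetical stand-ins.

```python
import torch

def fit_gaussian(feats):
    """Fit a Gaussian to in-distribution features of shape (N, D)."""
    mu = feats.mean(dim=0)
    centered = feats - mu
    cov = centered.T @ centered / (feats.shape[0] - 1)
    cov += 1e-6 * torch.eye(feats.shape[1])   # regularise for invertibility
    return mu, torch.linalg.inv(cov)

def ood_score(feats, mu, cov_inv):
    """Squared Mahalanobis distance per sample; larger means more OOD."""
    d = feats - mu
    return torch.einsum('nd,de,ne->n', d, cov_inv, d)

# Usage: fit on in-distribution features, then flag test inputs whose
# score exceeds a threshold chosen on held-out in-distribution data.
train_feats, test_feats = torch.randn(1000, 64), torch.randn(16, 64)
mu, cov_inv = fit_gaussian(train_feats)
print(ood_score(test_feats, mu, cov_inv))
```

A more common variant fits one Gaussian per class with a shared covariance and scores an input by its minimum distance over classes.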