Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent
Representations
- URL: http://arxiv.org/abs/2310.16587v1
- Date: Wed, 25 Oct 2023 12:22:18 GMT
- Title: Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent
Representations
- Authors: Tsai Hor Chan, Kin Wai Lau, Jiajun Shen, Guosheng Yin, Lequan Yu
- Abstract summary: Uncertainty estimation aims to evaluate the confidence of a trained deep neural network.
Existing uncertainty estimation approaches rely on low-dimensional distributional assumptions.
We propose a new framework using data-adaptive high-dimensional hypothesis testing for uncertainty estimation.
- Score: 28.875819909902244
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty estimation aims to evaluate the confidence of a trained deep
neural network. However, existing uncertainty estimation approaches rely on
low-dimensional distributional assumptions and thus suffer from the high
dimensionality of latent features. Existing approaches tend to focus on
uncertainty on discrete classification probabilities, which leads to poor
generalizability to uncertainty estimation for other tasks. Moreover, most of
the literature requires seeing the out-of-distribution (OOD) data in the
training for better estimation of uncertainty, which limits the uncertainty
estimation performance in practice because the OOD data are typically unseen.
To overcome these limitations, we propose a new framework using data-adaptive
high-dimensional hypothesis testing for uncertainty estimation, which leverages
the statistical properties of the feature representations. Our method directly
operates on latent representations and thus does not require retraining the
feature encoder under a modified objective. The test statistic relaxes the
feature distribution assumptions to high dimensionality, and it is more
discriminative to uncertainties in the latent representations. We demonstrate
that encoding features with Bayesian neural networks can enhance testing
performance and lead to more accurate uncertainty estimation. We further
introduce a family-wise testing procedure to determine the optimal threshold of
OOD detection, which minimizes the false discovery rate (FDR). Extensive
experiments validate the satisfactory performance of our framework on
uncertainty estimation and task-specific prediction over a variety of
competitors. The experiments on the OOD detection task also show satisfactory
performance of our method when the OOD data are unseen during training. Code
is available at https://github.com/HKU-MedAI/bnn_uncertainty.
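The pipeline the abstract describes can be sketched end to end: encode inputs with a Bayesian network, test each latent representation against the in-distribution feature statistics, and pick the OOD threshold by controlling the false discovery rate. The following is a minimal illustration only, not the authors' implementation: the random-projection "Bayesian" encoder, the sum-of-squares one-sample statistic, and the Benjamini-Hochberg step are simplified stand-ins for the paper's data-adaptive high-dimensional test and family-wise procedure.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(0)

def bnn_features(x, w, n_samples=8):
    """Stand-in for a Bayesian encoder: average several forward passes with
    perturbed weights, mimicking Monte Carlo sampling of a weight posterior."""
    draws = [x @ (w + 0.05 * rng.normal(size=w.shape)) for _ in range(n_samples)]
    return np.mean(draws, axis=0)

def hd_test_pvalue(z, ref_mean, ref_std):
    """Toy high-dimensional one-sample statistic: the standardized squared
    deviation summed over latent dimensions, referred to an asymptotic
    N(0, 1) null (a simplification of the paper's data-adaptive test)."""
    d = z.shape[-1]
    t = float(np.sum(((z - ref_mean) / ref_std) ** 2 - 1.0)) / sqrt(2.0 * d)
    return 0.5 * (1.0 - erf(t / sqrt(2.0)))  # one-sided p-value

def bh_reject(pvals, alpha=0.05):
    """Benjamini-Hochberg step-up rule: reject the k smallest p-values, where
    k is the largest index with p_(k) <= alpha * k / m; controls the FDR."""
    m = len(pvals)
    order = np.argsort(pvals)
    below = np.nonzero(np.sort(pvals) <= alpha * np.arange(1, m + 1) / m)[0]
    reject = np.zeros(m, dtype=bool)
    if below.size:
        reject[order[: below[-1] + 1]] = True
    return reject

# Reference (in-distribution) latent statistics from "training" data.
d_in, d_lat = 32, 64
w = rng.normal(size=(d_in, d_lat)) / np.sqrt(d_in)  # fixed encoder weights
z_train = bnn_features(rng.normal(size=(500, d_in)), w)
mu, sd = z_train.mean(axis=0), z_train.std(axis=0) + 1e-8

# Mixed test batch: 20 in-distribution inputs, then 20 mean-shifted (OOD) ones.
x_test = np.vstack([rng.normal(size=(20, d_in)),
                    rng.normal(loc=3.0, size=(20, d_in))])
pvals = np.array([hd_test_pvalue(z, mu, sd) for z in bnn_features(x_test, w)])
is_ood = bh_reject(pvals, alpha=0.05)
print(is_ood[:20].sum(), is_ood[20:].sum())
```

Because the threshold comes from the family-wise BH step rather than a fixed cutoff, the flagged set adapts to the batch of p-values, which is the role the FDR-minimizing procedure plays in the framework above.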
Related papers
- Gradient-based Uncertainty Attribution for Explainable Bayesian Deep
Learning [38.34033824352067]
Predictions made by deep learning models are prone to data perturbations, adversarial attacks, and out-of-distribution inputs.
We propose to develop explainable and actionable Bayesian deep learning methods to perform accurate uncertainty quantification.
arXiv Detail & Related papers (2023-04-10T19:14:15Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Uncertainty-Aware Reliable Text Classification [21.517852608625127]
Deep neural networks have achieved significant success in predictive accuracy for classification tasks.
However, they tend to make over-confident predictions in real-world settings, where domain shift and out-of-distribution examples exist.
We propose an inexpensive framework that adopts both auxiliary outliers and pseudo off-manifold samples to train the model with prior knowledge of a certain class.
arXiv Detail & Related papers (2021-07-15T04:39:55Z)
- On the Practicality of Deterministic Epistemic Uncertainty [106.06571981780591]
Deterministic uncertainty methods (DUMs) achieve strong performance on detecting out-of-distribution data.
It remains unclear whether DUMs are well calibrated and can seamlessly scale to real-world applications.
arXiv Detail & Related papers (2021-07-01T17:59:07Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is part of out-of-sample prediction error due to the lack of knowledge of the learner.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- Approaching Neural Network Uncertainty Realism [53.308409014122816]
Quantifying or at least upper-bounding uncertainties is vital for safety-critical systems such as autonomous vehicles.
We evaluate uncertainty realism -- a strict quality criterion -- with a Mahalanobis distance-based statistical test.
We adapt it to the automotive domain and show that it significantly improves uncertainty realism compared to a plain encoder-decoder model.
arXiv Detail & Related papers (2021-01-08T11:56:12Z)
- Know Your Limits: Uncertainty Estimation with ReLU Classifiers Fails at Reliable OOD Detection [0.0]
We prove that such techniques are not able to reliably identify OOD samples in a classification setting.
This paper gives a theoretical explanation for this failure and illustrates it on synthetic data.
arXiv Detail & Related papers (2020-12-09T21:35:55Z)
- The Hidden Uncertainty in a Neural Network's Activations [105.4223982696279]
The distribution of a neural network's latent representations has been successfully used to detect out-of-distribution (OOD) data.
This work investigates whether this distribution correlates with a model's epistemic uncertainty, thus indicating its ability to generalise to novel inputs.
arXiv Detail & Related papers (2020-12-05T17:30:35Z)
- Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks [22.34227625637843]
We investigate how the parametrization of the probabilities in discriminative classifiers affects the uncertainty estimates.
We show that one-vs-all formulations can improve calibration on image classification tasks.
arXiv Detail & Related papers (2020-07-10T01:55:02Z)
- Uncertainty-Based Out-of-Distribution Classification in Deep Reinforcement Learning [17.10036674236381]
Wrong predictions for out-of-distribution data can cause safety critical situations in machine learning systems.
We propose a framework for uncertainty-based OOD classification: UBOOD.
We show that UBOOD produces reliable classification results when combined with ensemble-based estimators.
arXiv Detail & Related papers (2019-12-31T09:52:49Z)
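Several of the entries above (UBOOD, the ensemble-based conditional latent variable model) score inputs by the disagreement of an ensemble. A minimal sketch of that idea, with a toy ensemble of bootstrap-fitted linear regressors standing in for trained neural networks (the data, model class, and hyperparameters here are illustrative assumptions, not from any of the papers):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ensemble: five linear models fit on bootstrap resamples of the same data.
x = rng.uniform(-1.0, 1.0, size=(200, 1))
y = 2.0 * x[:, 0] + 0.1 * rng.normal(size=200)
models = []
for _ in range(5):
    idx = rng.integers(0, len(x), size=len(x))
    a = np.hstack([x[idx], np.ones((len(x), 1))])  # design matrix with bias
    coef, *_ = np.linalg.lstsq(a, y[idx], rcond=None)
    models.append(coef)

def ensemble_uncertainty(x_new):
    """Epistemic-uncertainty proxy: variance of the member predictions.
    It grows outside the training range, where members extrapolate
    differently, so it can serve as an OOD score."""
    a = np.hstack([x_new, np.ones((len(x_new), 1))])
    preds = np.stack([a @ c for c in models])
    return preds.var(axis=0)

u_in = ensemble_uncertainty(np.array([[0.0]]))[0]    # inside [-1, 1]
u_out = ensemble_uncertainty(np.array([[10.0]]))[0]  # far outside
print(u_in < u_out)
```

Thresholding this variance yields a binary in-distribution/OOD classification, which is essentially the UBOOD recipe when the members are deep networks.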
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.