How certain are your uncertainties?
- URL: http://arxiv.org/abs/2203.00238v1
- Date: Tue, 1 Mar 2022 05:25:02 GMT
- Title: How certain are your uncertainties?
- Authors: Luke Whitbread and Mark Jenkinson
- Abstract summary: Measures of uncertainty in the output of a deep learning method are useful in several ways.
This work investigates the stability of these uncertainty measurements, in terms of both magnitude and spatial pattern.
- Score: 0.3655021726150368
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Having a measure of uncertainty in the output of a deep learning method is
useful in several ways, such as in assisting with interpretation of the
outputs, helping build confidence with end users, and for improving the
training and performance of the networks. Therefore, several different methods
have been proposed to capture various types of uncertainty, including epistemic
(relating to the model used) and aleatoric (relating to the data) sources, with
the most commonly used methods for estimating these being test-time dropout for
epistemic uncertainty and test-time augmentation for aleatoric uncertainty.
However, these methods are parameterised (e.g. amount of dropout or type and
level of augmentation) and so there is a whole range of possible uncertainties
that could be calculated, even with a fixed network and dataset. This work
investigates the stability of these uncertainty measurements, in terms of both
magnitude and spatial pattern. In experiments using the well-characterised
BraTS challenge, we demonstrate substantial variability in the magnitude and
spatial pattern of these uncertainties, and discuss the implications for
interpretability, repeatability and confidence in results.
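The abstract names the two estimators but not a concrete recipe. The following is a minimal, hypothetical PyTorch sketch (the model, sample count, dropout placement and noise level `sigma` are assumptions, not the paper's configuration) illustrating where the parameterisation enters each uncertainty measure.

```python
# Minimal sketch of the two estimators discussed in the abstract,
# assuming a PyTorch segmentation network that returns class logits.
# The sample count, dropout layers and Gaussian-noise level `sigma`
# are illustrative choices, not the paper's exact configuration.
import torch


def enable_dropout(model: torch.nn.Module) -> None:
    """Switch only the dropout layers to train mode so dropout stays
    active at test time while batch-norm statistics stay fixed."""
    for m in model.modules():
        if m.__class__.__name__.startswith("Dropout"):
            m.train()


def test_time_dropout_uncertainty(model, x, n_samples=20):
    """Epistemic estimate: per-voxel variance of the softmax output
    over repeated stochastic forward passes with dropout enabled."""
    model.eval()
    enable_dropout(model)
    with torch.no_grad():
        probs = torch.stack(
            [model(x).softmax(dim=1) for _ in range(n_samples)]
        )
    return probs.var(dim=0)  # shape: (batch, classes, ...spatial)


def test_time_augmentation_uncertainty(model, x, n_samples=20, sigma=0.05):
    """Aleatoric estimate: per-voxel variance of the softmax output
    over randomly perturbed inputs (here, additive Gaussian noise)."""
    model.eval()
    with torch.no_grad():
        probs = torch.stack(
            [model(x + sigma * torch.randn_like(x)).softmax(dim=1)
             for _ in range(n_samples)]
        )
    return probs.var(dim=0)
```

Both maps depend directly on the chosen parameters (the number of samples, the dropout configuration, the noise level), so equally defensible settings can yield different uncertainty magnitudes and spatial patterns for the same network and data, which is the instability the paper examines.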
Related papers
- Temporal Distribution Shift in Real-World Pharmaceutical Data: Implications for Uncertainty Quantification in QSAR Models [1.9354018523009415]
Several computational tools exist that estimate the predictive uncertainty in machine learning models.
Deviations from the i.i.d. setting have been shown to impair the performance of these uncertainty quantification methods.
We use a real-world pharmaceutical dataset to address the pressing need for a comprehensive, large-scale evaluation of uncertainty estimation methods.
arXiv Detail & Related papers (2025-02-06T11:26:04Z)
- Uncertainty Quantification in Stereo Matching [61.73532883992135]
We propose a new framework for stereo matching and its uncertainty quantification.
We adopt Bayes risk as a measure of uncertainty and estimate data and model uncertainty separately.
We apply our uncertainty method to improve prediction accuracy by selecting data points with small uncertainties.
arXiv Detail & Related papers (2024-12-24T23:28:20Z)
- From Risk to Uncertainty: Generating Predictive Uncertainty Measures via Bayesian Estimation [5.355925496689674]
We build a framework that allows one to generate different predictive uncertainty measures.
We validate our method on image datasets by evaluating its performance in detecting out-of-distribution and misclassified instances.
arXiv Detail & Related papers (2024-02-16T14:40:22Z)
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- A Deeper Look into Aleatoric and Epistemic Uncertainty Disentanglement [7.6146285961466]
In this paper, we generalize methods to produce disentangled uncertainties to work with different uncertainty quantification methods.
We show that there is an interaction between learning aleatoric and epistemic uncertainty, which is unexpected and violates assumptions on aleatoric uncertainty.
We expect that our formulation and results help practitioners and researchers choose uncertainty methods and expand the use of disentangled uncertainties.
arXiv Detail & Related papers (2022-04-20T08:41:37Z)
- Decomposing Representations for Deterministic Uncertainty Estimation [34.11413246048065]
We show that current feature density based uncertainty estimators cannot perform well consistently across different OoD detection settings.
We propose to decompose the learned representations and integrate the uncertainties estimated on them separately.
arXiv Detail & Related papers (2021-12-01T22:12:01Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.