Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for
Specialized Tasks
- URL: http://arxiv.org/abs/2402.19460v1
- Date: Thu, 29 Feb 2024 18:52:56 GMT
- Title: Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for
Specialized Tasks
- Authors: Bálint Mucsányi, Michael Kirchhof, and Seong Joon Oh
- Abstract summary: This paper conducts a comprehensive evaluation of numerous uncertainty estimators across diverse tasks on ImageNet.
We find that, despite promising theoretical endeavors, disentanglement is not yet achieved in practice.
We reveal which uncertainty estimators excel at which specific tasks, providing insights for practitioners.
- Score: 19.945932368701722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty quantification, once a singular task, has evolved into a spectrum
of tasks, including abstained prediction, out-of-distribution detection, and
aleatoric uncertainty quantification. The latest goal is disentanglement: the
construction of multiple estimators that are each tailored to one and only one
task. Hence, there is a plethora of recent advances with differing intentions
that often deviate entirely from practical behavior. This paper conducts a
comprehensive evaluation of numerous uncertainty estimators across diverse
tasks on ImageNet. We find that, despite promising theoretical endeavors,
disentanglement is not yet achieved in practice. Additionally, we reveal which
uncertainty estimators excel at which specific tasks, providing insights for
practitioners and guiding future research toward task-centric and disentangled
uncertainty estimation methods. Our code is available at
https://github.com/bmucsanyi/bud.
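The abstained-prediction task named in the abstract can be made concrete with a small sketch: score each test point with an uncertainty estimator (here plain softmax entropy, an assumed choice rather than any particular estimator from the benchmark) and report accuracy on the retained, most-confident fraction.

```python
import numpy as np

def softmax(logits):
    z = logits - logits.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def accuracy_at_coverage(logits, labels, coverage):
    """Accuracy on the `coverage` fraction of points with lowest entropy."""
    probs = softmax(logits)
    entropy = -(probs * np.log(probs + 1e-12)).sum(axis=1)
    n_keep = max(1, round(coverage * len(labels)))
    keep = np.argsort(entropy)[:n_keep]  # most confident points first
    preds = probs.argmax(axis=1)
    return (preds[keep] == labels[keep]).mean()

# Toy data: two confident rows and one near-uniform (uncertain) row.
logits = np.array([[9.0, 0.0, 0.0],
                   [0.0, 8.0, 0.0],
                   [0.3, 0.2, 0.1]])
labels = np.array([0, 1, 2])
print(accuracy_at_coverage(logits, labels, 1.0))   # all points kept
print(accuracy_at_coverage(logits, labels, 2/3))  # abstain on the uncertain row
```

At full coverage the uncertain row is misclassified; abstaining on it recovers perfect accuracy on the rest, which is the trade-off abstained-prediction metrics trace out.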
Related papers
- Uncertainty Quantification in Stereo Matching [61.73532883992135]
We propose a new framework for stereo matching and its uncertainty quantification.
We adopt Bayes risk as a measure of uncertainty and estimate data and model uncertainty separately.
We apply our uncertainty method to improve prediction accuracy by selecting data points with small uncertainties.
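Estimating data and model uncertainty separately is often approximated in practice with an ensemble and the law of total variance; the sketch below illustrates that generic decomposition, not the paper's Bayes-risk estimator.

```python
import numpy as np

def decompose_uncertainty(member_means, member_vars):
    """Law of total variance over an ensemble of probabilistic regressors.

    member_means, member_vars: shape (n_members, n_points), holding each
    member's predictive mean and predictive variance per data point.
    """
    aleatoric = member_vars.mean(axis=0)   # expected data noise
    epistemic = member_means.var(axis=0)   # disagreement between members
    total = aleatoric + epistemic
    return total, aleatoric, epistemic

# Members disagree on the first point but agree on the second.
means = np.array([[1.0, 2.0], [1.2, 2.0], [0.8, 2.0]])
vars_ = np.array([[0.5, 0.1], [0.5, 0.1], [0.5, 0.1]])
total, alea, epi = decompose_uncertainty(means, vars_)
print(epi)  # epistemic ~0 on the second point, where members agree
```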
arXiv Detail & Related papers (2024-12-24T23:28:20Z)
- FairlyUncertain: A Comprehensive Benchmark of Uncertainty in Algorithmic Fairness [4.14360329494344]
We introduce FairlyUncertain, an axiomatic benchmark for evaluating uncertainty estimates in fairness.
Our benchmark posits that fair predictive uncertainty estimates should be consistent across learning pipelines and calibrated to observed randomness.
arXiv Detail & Related papers (2024-10-02T20:15:29Z)
- Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent Representations [28.875819909902244]
Uncertainty estimation aims to evaluate the confidence of a trained deep neural network.
Existing uncertainty estimation approaches rely on low-dimensional distributional assumptions.
We propose a new framework using data-adaptive high-dimensional hypothesis testing for uncertainty estimation.
arXiv Detail & Related papers (2023-10-25T12:22:18Z)
- Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
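The quantity targeted here, the variance over values induced by a distribution over MDPs, can be approximated by brute force: sample transition models from a posterior, policy-evaluate each sample, and take the empirical variance. The sketch below does this for a toy three-state chain with an assumed Dirichlet posterior; an uncertainty Bellman equation aims to characterize this quantity without such sampling.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, gamma = 3, 0.9
rewards = np.array([0.0, 0.5, 1.0])
# Assumed Dirichlet posterior over each state's transition distribution
# (pseudo-counts chosen for illustration only).
counts = np.array([[8, 1, 1], [1, 8, 1], [1, 1, 8]], dtype=float)

def policy_evaluate(P):
    """Solve V = (I - gamma P)^-1 r for a fixed policy folded into P."""
    return np.linalg.solve(np.eye(n_states) - gamma * P, rewards)

# Monte-Carlo estimate of the posterior variance over values.
values = np.array([
    policy_evaluate(np.vstack([rng.dirichlet(c) for c in counts]))
    for _ in range(2000)
])
posterior_var = values.var(axis=0)
print(posterior_var)
```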
arXiv Detail & Related papers (2023-02-24T09:18:27Z)
- A Deeper Look into Aleatoric and Epistemic Uncertainty Disentanglement [7.6146285961466]
In this paper, we generalize methods to produce disentangled uncertainties to work with different uncertainty quantification methods.
We show that there is an unexpected interaction between learning aleatoric and epistemic uncertainty, which violates common assumptions about aleatoric uncertainty.
We expect that our formulation and results help practitioners and researchers choose uncertainty methods and expand the use of disentangled uncertainties.
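A common baseline for such disentanglement in classification is the information-theoretic split of an ensemble's predictive entropy into expected member entropy (aleatoric) and mutual information (epistemic). A minimal sketch of that standard decomposition, one of the formulations such generalizations build on:

```python
import numpy as np

def entropy(p, axis=-1):
    return -(p * np.log(p + 1e-12)).sum(axis=axis)

def disentangle(member_probs):
    """member_probs: (n_members, n_points, n_classes) ensemble predictions."""
    mean_probs = member_probs.mean(axis=0)
    total = entropy(mean_probs)                     # predictive entropy
    aleatoric = entropy(member_probs).mean(axis=0)  # expected member entropy
    epistemic = total - aleatoric                   # mutual information
    return total, aleatoric, epistemic

# Two members that disagree confidently: aleatoric ~0, epistemic ~log 2.
probs = np.array([[[1.0, 0.0]], [[0.0, 1.0]]])
total, alea, epi = disentangle(probs)
print(alea, epi)
```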
arXiv Detail & Related papers (2022-04-20T08:41:37Z)
- Bayesian autoencoders with uncertainty quantification: Towards trustworthy anomaly detection [78.24964622317634]
In this work, the formulation of Bayesian autoencoders (BAEs) is adopted to quantify the total anomaly uncertainty.
To evaluate the quality of uncertainty, we consider the task of classifying anomalies with the additional option of rejecting predictions of high uncertainty.
Our experiments demonstrate the effectiveness of the BAE and total anomaly uncertainty on a set of benchmark datasets and two real datasets for manufacturing.
arXiv Detail & Related papers (2022-02-25T12:20:04Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Logit-based Uncertainty Measure in Classification [18.224344440110862]
We introduce a new, reliable, and agnostic uncertainty measure for classification tasks called logit uncertainty.
We show that this new uncertainty measure yields a superior performance compared to existing uncertainty measures on different tasks.
arXiv Detail & Related papers (2021-07-06T19:07:16Z)
- Exploring Uncertainty in Deep Learning for Construction of Prediction Intervals [27.569681578957645]
We explore the uncertainty in deep learning to construct prediction intervals.
We design a special loss function, which enables us to learn uncertainty without uncertainty labels.
Our method correlates the construction of prediction intervals with the uncertainty estimation.
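One standard way to learn uncertainty without uncertainty labels, not necessarily the loss designed in this paper, is a heteroscedastic Gaussian negative log-likelihood: the network predicts a mean and a log-variance, and the variance head is supervised only by the regression targets. A sketch of the loss and the interval it implies:

```python
import numpy as np

def gaussian_nll(y, mean, log_var):
    """Heteroscedastic NLL: log_var is learned with no variance labels."""
    return 0.5 * (log_var + (y - mean) ** 2 / np.exp(log_var)).mean()

def prediction_interval(mean, log_var, z=1.96):
    """Central ~95% interval implied by the learned Gaussian."""
    std = np.exp(0.5 * log_var)
    return mean - z * std, mean + z * std

# Toy predictions: small residuals paired with small learned variances.
y = np.array([1.0, 2.0, 3.0])
mean = np.array([1.1, 1.9, 3.2])
log_var = np.log(np.array([0.04, 0.04, 0.25]))
lo, hi = prediction_interval(mean, log_var)
print(gaussian_nll(y, mean, log_var))
print(lo, hi)
```

Minimizing this loss penalizes both over- and under-estimated variances, which is what ties the interval widths to the estimated uncertainty.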
arXiv Detail & Related papers (2021-04-27T02:58:20Z)
- Temporal Difference Uncertainties as a Signal for Exploration [76.6341354269013]
An effective approach to exploration in reinforcement learning is to rely on an agent's uncertainty over the optimal policy.
In this paper, we highlight that value estimates are easily biased and temporally inconsistent.
We propose a novel method for estimating uncertainty over the value function that relies on inducing a distribution over temporal difference errors.
arXiv Detail & Related papers (2020-10-05T18:11:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.