Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for
Specialized Tasks
- URL: http://arxiv.org/abs/2402.19460v1
- Date: Thu, 29 Feb 2024 18:52:56 GMT
- Title: Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for
Specialized Tasks
- Authors: Bálint Mucsányi, Michael Kirchhof, and Seong Joon Oh
- Abstract summary: This paper conducts a comprehensive evaluation of numerous uncertainty estimators across diverse tasks on ImageNet.
We find that, despite promising theoretical endeavors, disentanglement is not yet achieved in practice.
We reveal which uncertainty estimators excel at which specific tasks, providing insights for practitioners.
- Score: 19.945932368701722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Uncertainty quantification, once a singular task, has evolved into a spectrum
of tasks, including abstained prediction, out-of-distribution detection, and
aleatoric uncertainty quantification. The latest goal is disentanglement: the
construction of multiple estimators that are each tailored to one and only one
task. Hence, there is a plethora of recent advances with different intentions,
which often deviate entirely from their practical behavior. This paper conducts a
comprehensive evaluation of numerous uncertainty estimators across diverse
tasks on ImageNet. We find that, despite promising theoretical endeavors,
disentanglement is not yet achieved in practice. Additionally, we reveal which
uncertainty estimators excel at which specific tasks, providing insights for
practitioners and guiding future research toward task-centric and disentangled
uncertainty estimation methods. Our code is available at
https://github.com/bmucsanyi/bud.
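As a reference point for readers, below is a minimal, hedged sketch (not the paper's benchmark code; see the linked repository for that) of the information-theoretic decomposition that many of the evaluated estimators build on: an ensemble's predictive entropy split into an aleatoric and an epistemic component, each of which may then be queried for a different task such as abstention or out-of-distribution detection.

```python
import numpy as np

def decompose_uncertainty(member_probs: np.ndarray, eps: float = 1e-12):
    """Split an ensemble's predictive uncertainty into components.

    member_probs: array of shape (n_members, n_classes) holding each
    ensemble member's softmax output for a single input.
    Returns (total, aleatoric, epistemic) in nats.
    """
    mean_probs = member_probs.mean(axis=0)
    # Total uncertainty: entropy of the averaged prediction.
    total = -np.sum(mean_probs * np.log(mean_probs + eps))
    # Aleatoric part: average entropy of the individual members.
    aleatoric = -np.mean(np.sum(member_probs * np.log(member_probs + eps), axis=1))
    # Epistemic part: mutual information (total minus aleatoric).
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Toy usage: three ensemble members, four classes.
probs = np.array([[0.7, 0.1, 0.1, 0.1],
                  [0.6, 0.2, 0.1, 0.1],
                  [0.2, 0.2, 0.3, 0.3]])
total, aleatoric, epistemic = decompose_uncertainty(probs)
# A practitioner might abstain when `total` is high and flag possible OOD
# inputs when `epistemic` dominates -- the disentanglement the paper tests.
print(total, aleatoric, epistemic)
```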
Related papers
- CLUE: Concept-Level Uncertainty Estimation for Large Language Models [49.92690111618016]
We propose a novel framework for Concept-Level Uncertainty Estimation for Large Language Models (LLMs).
We leverage LLMs to convert output sequences into concept-level representations, breaking down sequences into individual concepts and measuring the uncertainty of each concept separately.
We conduct experiments to demonstrate that CLUE can provide more interpretable uncertainty estimation results compared with sentence-level uncertainty.
arXiv Detail & Related papers (2024-09-04T18:27:12Z)
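A toy illustration of the concept-level idea summarized above; the concept-extraction step is only a stand-in here (CLUE itself delegates it to an LLM), and all names in the example are hypothetical.

```python
from collections import Counter

def concept_level_uncertainty(sampled_concept_sets):
    """Toy per-concept uncertainty from several sampled model outputs.

    sampled_concept_sets: list of sets, one per sampled response, each
    containing the concepts extracted from that response.
    Returns a dict mapping concept -> 1 - support frequency.
    """
    n = len(sampled_concept_sets)
    counts = Counter(c for concepts in sampled_concept_sets for c in concepts)
    return {concept: 1.0 - count / n for concept, count in counts.items()}

# Example: three sampled answers to the same question.
samples = [{"Paris is in France", "population ~2 million"},
           {"Paris is in France"},
           {"Paris is in France", "population ~10 million"}]
print(concept_level_uncertainty(samples))
# The concept present in every sample gets low uncertainty, while the
# conflicting population claims get high uncertainty.
```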
- Calibrated Uncertainty Quantification for Operator Learning via Conformal Prediction [95.75771195913046]
We propose a risk-controlling quantile neural operator, a distribution-free, finite-sample functional calibration conformal prediction method.
We provide a theoretical calibration guarantee on the coverage rate, defined as the expected percentage of points on the function domain.
Empirical results on a 2D Darcy flow and a 3D car surface pressure prediction task validate our theoretical results.
arXiv Detail & Related papers (2024-02-02T23:43:28Z)
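The entry above concerns conformal calibration for operator learning; as a hedged reference point, here is standard scalar split conformal prediction rather than the paper's functional, risk-controlling variant.

```python
import numpy as np

def split_conformal_interval(cal_preds, cal_targets, test_pred, alpha=0.1):
    """Standard split conformal interval for a scalar regression model.

    cal_preds, cal_targets: model predictions and ground truth on a held-out
    calibration set; test_pred: the prediction for a new input.
    Returns an interval with finite-sample coverage >= 1 - alpha under
    exchangeability.
    """
    scores = np.sort(np.abs(np.asarray(cal_targets) - np.asarray(cal_preds)))
    n = len(scores)
    # Finite-sample-corrected rank for the (1 - alpha) quantile.
    k = min(n, int(np.ceil((n + 1) * (1 - alpha))))
    q = scores[k - 1]
    return test_pred - q, test_pred + q

# Toy usage with a noisy identity "model".
rng = np.random.default_rng(0)
y_cal = rng.normal(size=500)
preds_cal = y_cal + rng.normal(scale=0.3, size=500)
print(split_conformal_interval(preds_cal, y_cal, test_pred=0.0, alpha=0.1))
```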
- Multidimensional Belief Quantification for Label-Efficient Meta-Learning [7.257751371276488]
We propose a novel uncertainty-aware task selection model for label-efficient meta-learning.
The proposed model formulates a multidimensional belief measure, which can quantify the known uncertainty and lower bound the unknown uncertainty of any given task.
Experiments conducted over multiple real-world few-shot image classification tasks demonstrate the effectiveness of the proposed model.
arXiv Detail & Related papers (2022-03-23T23:37:16Z)
- See Yourself in Others: Attending Multiple Tasks for Own Failure Detection [28.787334666116518]
We propose an attention-based failure detection approach by exploiting the correlations among multiple tasks.
The proposed framework infers task failures by evaluating the individual predictions across multiple visual perception tasks for different regions in an image.
Our proposed framework produces more accurate estimates of the prediction error for the different tasks' predictions.
arXiv Detail & Related papers (2021-10-06T07:42:57Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
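For the disagreement-based uncertainty measure summarized above, here is a generic sketch assuming several stochastic forward passes are available; the paper's specific dissimilarity function is not reproduced here.

```python
import numpy as np

def disagreement_map(prob_maps: np.ndarray) -> np.ndarray:
    """Per-pixel uncertainty from disagreement between several predictions.

    prob_maps: shape (n_preds, H, W, n_classes), e.g. softmax maps from
    MC-dropout passes or an ensemble. The dissimilarity used here is the
    mean pairwise L1 distance between class-probability vectors; the
    paper's own dissimilarity function may differ.
    """
    n = prob_maps.shape[0]
    dissim = np.zeros(prob_maps.shape[1:3])
    for i in range(n):
        for j in range(i + 1, n):
            dissim += np.abs(prob_maps[i] - prob_maps[j]).sum(axis=-1)
    return dissim / (n * (n - 1) / 2)

# Toy usage: 4 stochastic passes over a 2x2 "image" with 3 classes.
rng = np.random.default_rng(0)
logits = rng.normal(size=(4, 2, 2, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=-1, keepdims=True)
print(disagreement_map(probs))  # high values = pixels the passes disagree on
```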
- Exploring Uncertainty in Deep Learning for Construction of Prediction Intervals [27.569681578957645]
We explore the uncertainty in deep learning to construct prediction intervals.
We design a special loss function, which enables us to learn uncertainty without uncertainty labels.
Our method correlates the construction of prediction intervals with the uncertainty estimation.
arXiv Detail & Related papers (2021-04-27T02:58:20Z)
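The prediction-interval entry above does not spell out its loss; as a hedged stand-in, a standard pinball (quantile) loss also learns interval bounds without explicit uncertainty labels.

```python
import numpy as np

def pinball_loss(y_true, y_pred, tau):
    """Quantile (pinball) loss. Training one output head with tau=0.05 and
    another with tau=0.95 yields a 90% prediction interval from the usual
    regression targets alone, i.e. without any uncertainty labels."""
    diff = y_true - y_pred
    return np.mean(np.maximum(tau * diff, (tau - 1) * diff))

# Toy check: for symmetric residuals the 0.05 and 0.95 losses roughly match.
rng = np.random.default_rng(0)
y = rng.normal(size=1000)
print(pinball_loss(y, np.zeros_like(y), tau=0.05),
      pinball_loss(y, np.zeros_like(y), tau=0.95))
```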
- Temporal Difference Uncertainties as a Signal for Exploration [76.6341354269013]
An effective approach to exploration in reinforcement learning is to rely on an agent's uncertainty over the optimal policy.
In this paper, we highlight that value estimates are easily biased and temporally inconsistent.
We propose a novel method for estimating uncertainty over the value function that relies on inducing a distribution over temporal difference errors.
arXiv Detail & Related papers (2020-10-05T18:11:22Z)
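A simplified tabular sketch of the idea summarized above: an ensemble of value estimates induces a distribution over temporal-difference errors whose spread can serve as an exploration signal (this is an assumption-laden illustration, not the paper's exact estimator).

```python
import numpy as np

def td_error_spread(values: np.ndarray, s, r, s_next, gamma=0.99):
    """Spread of temporal-difference errors across an ensemble of value
    functions, usable as an exploration bonus.

    values: shape (n_members, n_states) table of value estimates.
    Returns (mean TD error, standard deviation across members).
    """
    td_errors = r + gamma * values[:, s_next] - values[:, s]
    return td_errors.mean(), td_errors.std()

# Toy usage: 5 ensemble members over 3 states; transition 0 -> 2, reward 1.
rng = np.random.default_rng(0)
V = rng.normal(size=(5, 3))
mean_td, std_td = td_error_spread(V, s=0, r=1.0, s_next=2)
print(mean_td, std_td)  # a large std suggests the agent is uncertain here
```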
- Robust Learning Through Cross-Task Consistency [92.42534246652062]
We propose a broadly applicable and fully computational method for augmenting learning with Cross-Task Consistency.
We observe that learning with cross-task consistency leads to more accurate predictions and better generalization to out-of-distribution inputs.
arXiv Detail & Related papers (2020-06-07T09:24:33Z)
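A schematic sketch of a cross-task consistency penalty in the spirit of the entry above; the mapping, task names, and array shapes are placeholders rather than the paper's implementation.

```python
import numpy as np

def cross_task_consistency_loss(pred_y1, pred_y2, y1_to_y2):
    """Schematic cross-task consistency penalty.

    pred_y1, pred_y2: a network's predictions for two tasks (e.g. depth and
    surface normals) on the same image; y1_to_y2: a (frozen) mapping from
    task-1 space to task-2 space. The penalty asks the direct task-2
    prediction to agree with the one obtained by going through task 1.
    """
    return float(np.mean(np.abs(y1_to_y2(pred_y1) - pred_y2)))

# Toy usage with placeholder arrays and an identity cross-task mapping.
depth = np.ones((4, 4))
normals = np.ones((4, 4)) * 0.9
print(cross_task_consistency_loss(depth, normals, y1_to_y2=lambda d: d))
```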
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.