Uncertainty Quantification with Proper Scoring Rules: Adjusting Measures to Prediction Tasks
- URL: http://arxiv.org/abs/2505.22538v1
- Date: Wed, 28 May 2025 16:22:53 GMT
- Title: Uncertainty Quantification with Proper Scoring Rules: Adjusting Measures to Prediction Tasks
- Authors: Paul Hofman, Yusuf Sale, Eyke Hüllermeier
- Abstract summary: We propose measures of uncertainty based on a known decomposition of (strictly) proper scoring rules, a specific type of loss function, into a divergence and an entropy component. This leads to a flexible framework for uncertainty quantification that can be instantiated with different losses (scoring rules). We show that this flexibility is indeed advantageous. In particular, we analyze the task of selective prediction and show that the scoring rule should ideally match the task loss.
- Score: 19.221081896134567
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We address the problem of uncertainty quantification and propose measures of total, aleatoric, and epistemic uncertainty based on a known decomposition of (strictly) proper scoring rules, a specific type of loss function, into a divergence and an entropy component. This leads to a flexible framework for uncertainty quantification that can be instantiated with different losses (scoring rules), which makes it possible to tailor uncertainty quantification to the use case at hand. We show that this flexibility is indeed advantageous. In particular, we analyze the task of selective prediction and show that the scoring rule should ideally match the task loss. In addition, we perform experiments on two other common tasks. For out-of-distribution detection, our results confirm that a widely used measure of epistemic uncertainty, mutual information, performs best. Moreover, in the setting of active learning, our measure of epistemic uncertainty based on the zero-one-loss consistently outperforms other uncertainty measures.
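The decomposition the abstract refers to can be made concrete with a small sketch: for a proper scoring rule, the generalized entropy of the averaged prediction serves as total uncertainty, the expected entropy of the individual ensemble members as aleatoric uncertainty, and the expected divergence between the members and the average as epistemic uncertainty. Instantiated with the log loss this recovers Shannon entropy and mutual information; one zero-one-loss instantiation is shown as well. The code below is an illustrative reading of the framework, not the authors' reference implementation, and all function names are made up here.
```python
import numpy as np

def log_loss_entropy(p):
    """Generalized entropy of the log loss: Shannon entropy H(p)."""
    p = np.clip(p, 1e-12, 1.0)
    return -np.sum(p * np.log(p))

def log_loss_divergence(p, q):
    """Divergence of the log loss: KL(q || p)."""
    p, q = np.clip(p, 1e-12, 1.0), np.clip(q, 1e-12, 1.0)
    return np.sum(q * (np.log(q) - np.log(p)))

def zero_one_entropy(p):
    """Generalized entropy of the zero-one loss: 1 - max_k p_k."""
    return 1.0 - np.max(p)

def zero_one_divergence(p, q):
    """Divergence of the zero-one loss: max_k q_k - q_{argmax p}."""
    return np.max(q) - q[np.argmax(p)]

def uncertainty_decomposition(ensemble_probs, entropy_fn, divergence_fn):
    """Total / aleatoric / epistemic uncertainty from an ensemble of
    class-probability vectors (shape: [n_members, n_classes])."""
    p_bar = ensemble_probs.mean(axis=0)                 # mean prediction
    total = entropy_fn(p_bar)                           # generalized entropy of the mean
    aleatoric = np.mean([entropy_fn(q) for q in ensemble_probs])
    epistemic = np.mean([divergence_fn(p_bar, q) for q in ensemble_probs])
    return total, aleatoric, epistemic

# Example: an ensemble of three predictive distributions over three classes.
ensemble = np.array([[0.7, 0.2, 0.1],
                     [0.5, 0.3, 0.2],
                     [0.6, 0.1, 0.3]])

tu, au, eu = uncertainty_decomposition(ensemble, log_loss_entropy, log_loss_divergence)
print(f"log loss:  total={tu:.3f}  aleatoric={au:.3f}  epistemic (mutual information)={eu:.3f}")

tu, au, eu = uncertainty_decomposition(ensemble, zero_one_entropy, zero_one_divergence)
print(f"zero-one:  total={tu:.3f}  aleatoric={au:.3f}  epistemic={eu:.3f}")
```
For selective prediction, the abstract's recommendation that the scoring rule should match the task loss would amount, in this sketch, to ranking and abstaining by the measure computed with the same loss the task is evaluated under (e.g., the zero-one instantiation when performance is measured by accuracy).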
Related papers
- SConU: Selective Conformal Uncertainty in Large Language Models [59.25881667640868]
We propose a novel approach termed Selective Conformal Uncertainty (SConU). We develop two conformal p-values that are instrumental in determining whether a given sample deviates from the uncertainty distribution of the calibration set at a specific manageable risk level. Our approach not only facilitates rigorous management of miscoverage rates across both single-domain and interdisciplinary contexts, but also enhances the efficiency of predictions.
arXiv Detail & Related papers (2025-04-19T03:01:45Z)
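The conformal p-value at the heart of the SConU entry above can be sketched in a few lines. This is a generic split-conformal construction under the assumption that a nonconformity score (for instance, a predictive uncertainty value) has been computed for every calibration sample; the paper's specific scores and its two p-value variants are not reproduced here, and all names are illustrative.
```python
import numpy as np

def conformal_p_value(calibration_scores, test_score):
    """Split-conformal p-value: the (smoothed) proportion of calibration
    nonconformity scores at least as large as the test score."""
    n = len(calibration_scores)
    return (1 + np.sum(np.asarray(calibration_scores) >= test_score)) / (n + 1)

# Illustrative usage: treat predictive uncertainty as the nonconformity score.
rng = np.random.default_rng(0)
calibration_scores = rng.uniform(0.0, 1.0, size=500)   # placeholder calibration scores
test_score = 0.97                                       # unusually high uncertainty

alpha = 0.05                                            # manageable risk level
p = conformal_p_value(calibration_scores, test_score)
if p < alpha:
    print(f"p={p:.3f}: sample deviates from the calibration uncertainty distribution")
else:
    print(f"p={p:.3f}: no evidence of deviation at level {alpha}")
```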
- Probabilistic Modeling of Disparity Uncertainty for Robust and Efficient Stereo Matching [61.73532883992135]
We propose a new uncertainty-aware stereo matching framework. We adopt Bayes risk as the measurement of uncertainty and use it to separately estimate data and model uncertainty.
arXiv Detail & Related papers (2024-12-24T23:28:20Z)
- On Information-Theoretic Measures of Predictive Uncertainty [5.8034373350518775]
Despite its significance, a consensus on the correct measurement of predictive uncertainty remains elusive.
Our proposed framework categorizes predictive uncertainty measures according to two factors: (I) the predicting model, and (II) the approximation of the true predictive distribution.
We empirically evaluate these measures in typical uncertainty estimation settings, such as misclassification detection, selective prediction, and out-of-distribution detection.
arXiv Detail & Related papers (2024-10-14T17:52:18Z)
- Label-wise Aleatoric and Epistemic Uncertainty Quantification [15.642370299038488]
We present a novel approach to uncertainty quantification in classification tasks based on label-wise decomposition of uncertainty measures.
We show that our proposed measures adhere to a number of desirable properties.
arXiv Detail & Related papers (2024-06-04T14:33:23Z)
- Quantifying Aleatoric and Epistemic Uncertainty with Proper Scoring Rules [19.221081896134567]
Uncertainty representation and quantification are paramount in machine learning.
We propose measures for the quantification of aleatoric and (epistemic) uncertainty based on proper scoring rules.
arXiv Detail & Related papers (2024-04-18T14:20:19Z)
- Benchmarking Uncertainty Disentanglement: Specialized Uncertainties for Specialized Tasks [17.00971204252757]
We reimplement and evaluate a comprehensive range of uncertainty estimators on ImageNet. We find that, despite recent theoretical endeavors, no existing approach provides pairs of disentangled uncertainty estimators in practice. Our results provide both practical advice for which uncertainty estimators to use for which specific task, and reveal opportunities for future research toward task-centric and disentangled uncertainties.
arXiv Detail & Related papers (2024-02-29T18:52:56Z)
- Understanding Uncertainty Sampling [7.32527270949303]
Uncertainty sampling is a prevalent active learning algorithm that sequentially queries annotations for data samples.
We propose a notion of equivalent loss which depends on the used uncertainty measure and the original loss function.
We provide the first generalization bound for uncertainty sampling algorithms under both stream-based and pool-based settings.
arXiv Detail & Related papers (2023-07-06T01:57:37Z)
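Pool-based uncertainty sampling, as discussed in the Understanding Uncertainty Sampling entry above, reduces to a short loop: fit a model, score the unlabeled pool with an uncertainty measure, query the most uncertain sample, and repeat. The sketch below uses an illustrative margin-based measure and scikit-learn's LogisticRegression as the learner; it is not the paper's equivalent-loss construction, and all names are hypothetical.
```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def margin_uncertainty(probs):
    """Smaller margin between the top two class probabilities = more uncertain."""
    sorted_probs = np.sort(probs, axis=1)
    return 1.0 - (sorted_probs[:, -1] - sorted_probs[:, -2])

def uncertainty_sampling(X_labeled, y_labeled, X_pool, y_pool_oracle, budget=20):
    """Pool-based uncertainty sampling: repeatedly label the most uncertain pool point."""
    X_lab, y_lab = X_labeled.copy(), y_labeled.copy()
    pool_idx = list(range(len(X_pool)))
    for _ in range(budget):
        model = LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
        probs = model.predict_proba(X_pool[pool_idx])
        query = pool_idx[int(np.argmax(margin_uncertainty(probs)))]
        # The "oracle" provides the annotation for the queried sample.
        X_lab = np.vstack([X_lab, X_pool[query]])
        y_lab = np.append(y_lab, y_pool_oracle[query])
        pool_idx.remove(query)
    return LogisticRegression(max_iter=1000).fit(X_lab, y_lab)
```
Swapping `margin_uncertainty` for an epistemic measure such as the zero-one-loss instantiation sketched earlier is what the main abstract's active-learning experiments compare.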
- Model-Based Uncertainty in Value Functions [89.31922008981735]
We focus on characterizing the variance over values induced by a distribution over MDPs.
Previous work upper bounds the posterior variance over values by solving a so-called uncertainty Bellman equation.
We propose a new uncertainty Bellman equation whose solution converges to the true posterior variance over values.
arXiv Detail & Related papers (2023-02-24T09:18:27Z)
- Bayesian autoencoders with uncertainty quantification: Towards trustworthy anomaly detection [78.24964622317634]
In this work, the formulation of Bayesian autoencoders (BAEs) is adopted to quantify the total anomaly uncertainty.
To evaluate the quality of uncertainty, we consider the task of classifying anomalies with the additional option of rejecting predictions of high uncertainty.
Our experiments demonstrate the effectiveness of the BAE and total anomaly uncertainty on a set of benchmark datasets and two real datasets for manufacturing.
arXiv Detail & Related papers (2022-02-25T12:20:04Z)
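The reject option mentioned in the BAE entry above (and the selective prediction task in the main abstract) boils down to abstaining on the most uncertain fraction of samples and evaluating the rest. A minimal, generic sketch, with illustrative names and a simple coverage-based threshold:
```python
import numpy as np

def selective_prediction(y_true, y_pred, uncertainty, coverage=0.8):
    """Keep the `coverage` fraction of samples with the lowest uncertainty,
    abstain on the rest, and report accuracy on the retained set."""
    n_keep = int(np.ceil(coverage * len(y_true)))
    keep = np.argsort(uncertainty)[:n_keep]          # most certain samples first
    selective_accuracy = np.mean(np.asarray(y_true)[keep] == np.asarray(y_pred)[keep])
    return selective_accuracy, keep

# Example: abstain on the 20% most uncertain predictions.
y_true = np.array([0, 1, 1, 0, 1])
y_pred = np.array([0, 1, 0, 0, 1])
uncertainty = np.array([0.1, 0.2, 0.9, 0.3, 0.4])   # e.g. a total-uncertainty score
acc, kept = selective_prediction(y_true, y_pred, uncertainty, coverage=0.8)
print(f"selective accuracy at 80% coverage: {acc:.2f}")
```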
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is part of out-of-sample prediction error due to the lack of knowledge of the learner.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
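The DEUP recipe in the entry above is compact enough to state directly: fit a secondary model to predict the primary model's observed out-of-sample loss, then subtract an aleatoric-uncertainty estimate to obtain epistemic uncertainty. The sketch below is a loose reading with placeholder components; the error predictor and the aleatoric estimator are stand-ins, not the paper's choices.
```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_error_predictor(X_val, observed_losses):
    """Secondary regressor trained on (input, observed loss of the main model) pairs,
    e.g. per-sample log losses collected on held-out data."""
    return GradientBoostingRegressor().fit(X_val, observed_losses)

def deup_style_epistemic(error_predictor, aleatoric_estimate, X):
    """Epistemic uncertainty ~ predicted generalization error minus an aleatoric estimate."""
    return np.maximum(error_predictor.predict(X) - aleatoric_estimate(X), 0.0)
```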
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.