Is the Volume of a Credal Set a Good Measure for Epistemic Uncertainty?
- URL: http://arxiv.org/abs/2306.09586v1
- Date: Fri, 16 Jun 2023 02:17:45 GMT
- Title: Is the Volume of a Credal Set a Good Measure for Epistemic Uncertainty?
- Authors: Yusuf Sale, Michele Caprio, Eyke Hüllermeier
- Abstract summary: Adequate uncertainty representation and quantification have become imperative in various disciplines, especially in machine learning and artificial intelligence.
As an alternative to representing uncertainty via a single probability measure, we consider credal sets (convex sets of probability measures).
We show that the volume of a credal set is a meaningful measure of epistemic uncertainty in the case of binary classification, but less so for multi-class classification.
- Score: 2.658812114255374
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adequate uncertainty representation and quantification have become imperative
in various scientific disciplines, especially in machine learning and
artificial intelligence. As an alternative to representing uncertainty via one
single probability measure, we consider credal sets (convex sets of probability
measures). The geometric representation of credal sets as $d$-dimensional
polytopes implies a geometric intuition about (epistemic) uncertainty. In this
paper, we show that the volume of the geometric representation of a credal set
is a meaningful measure of epistemic uncertainty in the case of binary
classification, but less so for multi-class classification. Our theoretical
findings highlight the crucial role of specifying and employing uncertainty
measures in machine learning in an appropriate way, and of being aware of
possible pitfalls.
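The geometric intuition can be sketched in a few lines (an illustrative example, not code from the paper; the function name is assumed). In binary classification a credal set reduces to a probability interval for the positive class, and its volume is just the interval length:

```python
# Illustrative sketch (not from the paper): credal-set "volume" in the
# binary case, where a credal set over {0, 1} reduces to an interval
# [lower, upper] of probabilities for class 1.

def credal_volume_binary(lower: float, upper: float) -> float:
    """Length of the probability interval for class 1 (epistemic uncertainty)."""
    assert 0.0 <= lower <= upper <= 1.0
    return upper - lower

# A wide interval signals high epistemic uncertainty, a narrow one a
# nearly precise probability.
wide = credal_volume_binary(0.2, 0.8)      # large volume
narrow = credal_volume_binary(0.49, 0.51)  # small volume

# Pitfall in the multi-class case: a credal set confined to a face of the
# probability simplex (e.g. all distributions with zero mass on one class)
# is a lower-dimensional polytope, so its d-dimensional volume is zero
# even though the set is genuinely imprecise.
```

The comment at the end hints at the kind of degeneracy that makes volume problematic beyond the binary case.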
Related papers
- Certainly Uncertain: A Benchmark and Metric for Multimodal Epistemic and Aleatoric Awareness [106.52630978891054]
We present a taxonomy of uncertainty specific to vision-language AI systems.
We also introduce a new metric, confidence-weighted accuracy, that is well correlated with both accuracy and calibration error.
arXiv Detail & Related papers (2024-07-02T04:23:54Z)
- To Believe or Not to Believe Your LLM [51.2579827761899]
We explore uncertainty quantification in large language models (LLMs).
We derive an information-theoretic metric that allows one to reliably detect when only epistemic uncertainty is large.
We conduct a series of experiments which demonstrate the advantage of our formulation.
arXiv Detail & Related papers (2024-06-04T17:58:18Z)
- A comparative study of conformal prediction methods for valid uncertainty quantification in machine learning [0.0]
This dissertation tries to further the quest for a world where everyone is aware of uncertainty, of how important it is, and of how to embrace it instead of fearing it.
A specific, though general, framework that allows anyone to obtain accurate uncertainty estimates is singled out and analysed.
arXiv Detail & Related papers (2024-05-03T13:19:33Z)
- Quantifying Aleatoric and Epistemic Uncertainty with Proper Scoring Rules [19.221081896134567]
Uncertainty representation and quantification are paramount in machine learning.
We propose measures for the quantification of aleatoric and (epistemic) uncertainty based on proper scoring rules.
arXiv Detail & Related papers (2024-04-18T14:20:19Z)
- Conformalized Credal Set Predictors [12.549746646074071]
Credal sets are sets of probability distributions that are considered as candidates for an imprecisely known ground-truth distribution.
We make use of conformal prediction for learning credal set predictors.
We demonstrate the applicability of our method to natural language inference.
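The conformal route can be sketched with a standard split-conformal set predictor (a simplified stand-in, not the authors' credal construction; the function name and the 1-minus-probability score are assumptions):

```python
import numpy as np

def split_conformal_sets(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets with score s(x, y) = 1 - p_model(y | x)."""
    cal_probs = np.asarray(cal_probs, dtype=float)
    test_probs = np.asarray(test_probs, dtype=float)
    # Nonconformity score of each calibration example at its true label.
    scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    n = len(scores)
    # Finite-sample-corrected quantile level, clipped to 1.
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    q = np.quantile(scores, level, method="higher")
    # Keep every class whose score falls below the calibrated threshold.
    return [set(np.where(1.0 - p <= q)[0]) for p in test_probs]
```

Under exchangeability, sets built this way contain the true class with probability at least 1 - alpha, regardless of how good the underlying model is.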
arXiv Detail & Related papers (2024-02-16T14:30:12Z)
- Sources of Uncertainty in Machine Learning -- A Statisticians' View [3.1498833540989413]
The paper aims to formalize the two types of uncertainty associated with machine learning.
Drawing parallels between statistical concepts and uncertainty in machine learning, we also demonstrate the role of data and their influence on uncertainty.
arXiv Detail & Related papers (2023-05-26T07:44:19Z)
- Quantifying Aleatoric and Epistemic Uncertainty in Machine Learning: Are Conditional Entropy and Mutual Information Appropriate Measures? [2.1655448059430222]
It is common to quantify aleatoric and epistemic uncertainty in terms of conditional entropy and mutual information, respectively.
We identify various incoherencies that call their appropriateness into question.
Experiments across different computer vision tasks support our theoretical findings.
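The entropy-based decomposition that this paper scrutinizes can be sketched as follows (a standard ensemble-based formulation; the function names are illustrative):

```python
import numpy as np

def entropy(p, eps=1e-12):
    """Shannon entropy in bits of a discrete distribution."""
    p = np.asarray(p, dtype=float)
    return float(-np.sum(p * np.log2(p + eps)))

def uncertainty_decomposition(member_probs):
    """Total/aleatoric/epistemic split for an ensemble of predictive distributions.

    member_probs: array of shape (n_members, n_classes).
    Total uncertainty = entropy of the averaged prediction,
    aleatoric         = mean entropy of the members (conditional entropy),
    epistemic         = their difference (mutual information).
    """
    member_probs = np.asarray(member_probs, dtype=float)
    mean_p = member_probs.mean(axis=0)
    total = entropy(mean_p)
    aleatoric = float(np.mean([entropy(p) for p in member_probs]))
    epistemic = total - aleatoric
    return total, aleatoric, epistemic
```

Two confident but disagreeing members yield high epistemic and near-zero aleatoric uncertainty, while identical members yield zero epistemic uncertainty; it is the coherence of this behavior that the paper calls into question.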
arXiv Detail & Related papers (2022-09-07T17:05:20Z)
- What is Flagged in Uncertainty Quantification? Latent Density Models for Uncertainty Categorization [68.15353480798244]
Uncertainty Quantification (UQ) is essential for creating trustworthy machine learning models.
Recent years have seen a steep rise in UQ methods that can flag suspicious examples.
We propose a framework for categorizing uncertain examples flagged by UQ methods in classification tasks.
arXiv Detail & Related papers (2022-07-11T19:47:00Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- Don't Just Blame Over-parametrization for Over-confidence: Theoretical Analysis of Calibration in Binary Classification [58.03725169462616]
We show theoretically that over-parametrization is not the only reason for over-confidence.
We prove that logistic regression is inherently over-confident, in the realizable, under-parametrized setting.
Perhaps surprisingly, we also show that over-confidence is not always the case.
arXiv Detail & Related papers (2021-02-15T21:38:09Z)
- Cautious Active Clustering [79.23797234241471]
We consider the problem of classification of points sampled from an unknown probability measure on a Euclidean space.
Our approach is to consider the unknown probability measure as a convex combination of the conditional probabilities for each class.
arXiv Detail & Related papers (2020-08-03T23:47:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.