Ensemble-based Uncertainty Quantification: Bayesian versus Credal Inference
- URL: http://arxiv.org/abs/2107.10384v1
- Date: Wed, 21 Jul 2021 22:47:24 GMT
- Title: Ensemble-based Uncertainty Quantification: Bayesian versus Credal Inference
- Authors: Mohammad Hossein Shaker and Eyke Hüllermeier
- Abstract summary: We consider ensemble-based approaches to uncertainty quantification.
We specifically focus on Bayesian methods and approaches based on so-called credal sets.
The effectiveness of corresponding measures is evaluated and compared in an empirical study on classification with a reject option.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The idea of distinguishing and quantifying two important types of
uncertainty, often referred to as aleatoric and epistemic, has received
increasing attention in machine learning research over the last couple of
years. In this paper, we
consider ensemble-based approaches to uncertainty quantification.
Distinguishing between different types of uncertainty-aware learning
algorithms, we specifically focus on Bayesian methods and approaches based on
so-called credal sets, which naturally suggest themselves from an ensemble
learning point of view. For both approaches, we address the question of how to
quantify aleatoric and epistemic uncertainty. The effectiveness of
corresponding measures is evaluated and compared in an empirical study on
classification with a reject option.
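Since the abstract contrasts Bayesian and credal readings of an ensemble and evaluates the resulting measures with a reject option, a minimal sketch may help fix ideas. The entropy decomposition below (entropy of the averaged prediction as total uncertainty, average member entropy as aleatoric, their difference as epistemic) is a standard choice for Bayesian-style ensembles; the credal interval width and the accuracy-rejection curve are illustrative stand-ins under my own assumptions, not necessarily the exact measures used in the paper.
```python
import numpy as np

def bayesian_measures(probs, eps=1e-12):
    """Entropy-based decomposition for an ensemble read as a Bayesian sample.

    probs: array of shape (n_members, n_inputs, n_classes).
    Returns (total, aleatoric, epistemic), each of shape (n_inputs,).
    """
    mean_p = probs.mean(axis=0)
    total = -(mean_p * np.log2(mean_p + eps)).sum(axis=-1)                 # entropy of the mean
    aleatoric = -(probs * np.log2(probs + eps)).sum(axis=-1).mean(axis=0)  # mean member entropy
    return total, aleatoric, total - aleatoric                             # last term: mutual information

def credal_interval_width(probs):
    """Illustrative credal-style measure: read the ensemble members as extreme
    points of a credal set and sum the widths of the per-class probability
    intervals; wider intervals suggest more epistemic uncertainty."""
    return (probs.max(axis=0) - probs.min(axis=0)).sum(axis=-1)

def accuracy_rejection_curve(uncertainty, correct, rejection_rates):
    """Reject-option evaluation: discard the most uncertain fraction of inputs
    and report accuracy on the rest; a useful measure yields accuracy that
    rises with the rejection rate."""
    order = np.argsort(uncertainty)                    # most certain first
    correct = np.asarray(correct, dtype=float)[order]
    accs = []
    for r in rejection_rates:
        keep = max(1, int(round((1.0 - r) * len(correct))))
        accs.append(correct[:keep].mean())
    return np.array(accs)

# Toy usage with a random stand-in for an ensemble of 5 members, 200 inputs, 3 classes.
rng = np.random.default_rng(0)
probs = rng.dirichlet(np.ones(3), size=(5, 200))
total, aleatoric, epistemic = bayesian_measures(probs)
curve = accuracy_rejection_curve(epistemic, correct=rng.integers(0, 2, size=200),
                                 rejection_rates=[0.0, 0.25, 0.5])
```
With a real ensemble, `probs` would hold each member's softmax outputs on the evaluation set, and the epistemic term (or the credal interval width) would serve as the rejection criterion.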
Related papers
- Label-wise Aleatoric and Epistemic Uncertainty Quantification [15.642370299038488]
We present a novel approach to uncertainty quantification in classification tasks based on label-wise decomposition of uncertainty measures.
We show that our proposed measures adhere to a number of desirable properties.
arXiv Detail & Related papers (2024-06-04T14:33:23Z)
- A unified uncertainty-aware exploration: Combining epistemic and aleatory uncertainty [21.139502047972684]
We propose an algorithm that quantifies the combined effect of aleatory and epistemic uncertainty for risk-sensitive exploration.
Our method builds on a novel extension of distributional RL that estimates a parameterized return distribution.
Experimental results on tasks with exploration and risk challenges show that our method outperforms alternative approaches.
arXiv Detail & Related papers (2024-01-05T17:39:00Z)
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- Measuring Classification Decision Certainty and Doubt [61.13511467941388]
We propose intuitive scores, which we call certainty and doubt, to assess and compare the quality and uncertainty of predictions in machine learning (multi-)classification problems.
arXiv Detail & Related papers (2023-03-25T21:31:41Z)
- Benchmarking common uncertainty estimation methods with histopathological images under domain shift and label noise [62.997667081978825]
In high-risk environments, deep learning models need to be able to judge their uncertainty and reject inputs when there is a significant chance of misclassification.
We conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole Slide Images.
We observe that ensembles of methods generally lead to better uncertainty estimates as well as an increased robustness towards domain shifts and label noise.
arXiv Detail & Related papers (2023-01-03T11:34:36Z)
- What is Flagged in Uncertainty Quantification? Latent Density Models for Uncertainty Categorization [68.15353480798244]
Uncertainty Quantification (UQ) is essential for creating trustworthy machine learning models.
Recent years have seen a steep rise in UQ methods that can flag suspicious examples.
We propose a framework for categorizing uncertain examples flagged by UQ methods in classification tasks.
arXiv Detail & Related papers (2022-07-11T19:47:00Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of the out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning [70.72363097550483]
In this study, we focus on in-domain uncertainty for image classification.
To provide more insight, we introduce the deep ensemble equivalent score (DEE).
arXiv Detail & Related papers (2020-02-15T23:28:19Z)
- Aleatoric and Epistemic Uncertainty with Random Forests [3.1410342959104725]
We show how two approaches for measuring the learner's aleatoric and epistemic uncertainty in a prediction can be instantiated with decision trees and random forests.
In this paper, we also compare random forests with deep neural networks, which have been used for a similar purpose.
arXiv Detail & Related papers (2020-01-03T17:08:44Z)
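The DEUP entry above describes epistemic uncertainty as what remains of the predicted out-of-sample error after subtracting an aleatoric estimate. A rough sketch of that reading follows; the choice of a gradient-boosted regressor as the error predictor, the `aleatoric_fn` callable, and the clipping at zero are my assumptions, not the authors' pipeline.
```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def fit_deup_style_estimator(features, observed_loss, aleatoric_fn):
    """Fit a secondary 'error predictor' on the main model's observed per-input
    losses; the epistemic uncertainty of a new input is its predicted loss
    minus an aleatoric estimate supplied by aleatoric_fn."""
    error_predictor = GradientBoostingRegressor().fit(features, observed_loss)

    def epistemic(new_features):
        predicted_error = error_predictor.predict(new_features)
        return np.maximum(predicted_error - aleatoric_fn(new_features), 0.0)  # clipped at zero for readability

    return epistemic

# Hypothetical usage: X_train holds input features, nll the main model's
# per-input negative log-likelihood, and aleatoric_fn could be the mean member
# entropy from the earlier sketch.
# estimate_epistemic = fit_deup_style_estimator(X_train, nll, aleatoric_fn)
# epistemic_test = estimate_epistemic(X_test)
```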
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.