Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep
Learning
- URL: http://arxiv.org/abs/2002.06470v4
- Date: Sun, 18 Jul 2021 16:17:28 GMT
- Title: Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep
Learning
- Authors: Arsenii Ashukha, Alexander Lyzhov, Dmitry Molchanov, Dmitry Vetrov
- Abstract summary: In this study, we focus on in-domain uncertainty for image classification.
To provide more insight, we introduce the deep ensemble equivalent score (DEE).
- Score: 70.72363097550483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncertainty estimation and ensembling methods go hand-in-hand. Uncertainty
estimation is one of the main benchmarks for assessment of ensembling
performance. At the same time, deep learning ensembles have provided
state-of-the-art results in uncertainty estimation. In this work, we focus on
in-domain uncertainty for image classification. We explore the standards for
its quantification and point out pitfalls of existing metrics. Avoiding these
pitfalls, we perform a broad study of different ensembling techniques. To
provide more insight into this study, we introduce the deep ensemble
equivalent score (DEE) and show that many sophisticated ensembling techniques
are equivalent to an ensemble of only a few independently trained networks in
terms of test performance.
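As a rough illustration, DEE can be read as the (interpolated) size of a
plain ensemble of independently trained networks whose test log-likelihood
matches that of the method under study. The sketch below is a minimal
interpretation of that idea; the numbers and the linear interpolation are
illustrative, not the paper's exact procedure.

```python
import numpy as np

def deep_ensemble_equivalent(ll_method, ensemble_sizes, ll_deep_ensembles):
    """Interpolated size of a plain deep ensemble that reaches the same
    test log-likelihood as the method under study.

    ll_method         -- mean test log-likelihood of the evaluated method
    ensemble_sizes    -- increasing ints, e.g. [1, 2, 4, 8]
    ll_deep_ensembles -- mean test log-likelihood of a deep ensemble of
                         each size (log-likelihood grows with size)
    """
    sizes = np.asarray(ensemble_sizes, dtype=float)
    lls = np.asarray(ll_deep_ensembles, dtype=float)
    # Clip to the measured range rather than extrapolating beyond it.
    ll = np.clip(ll_method, lls[0], lls[-1])
    return float(np.interp(ll, lls, sizes))

# Hypothetical numbers: the method is worth roughly 3 independent networks.
print(deep_ensemble_equivalent(-0.80, [1, 2, 4, 8], [-1.00, -0.85, -0.75, -0.70]))
```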
Related papers
- Uncertainty Estimation in Instance Segmentation with Star-convex Shapes [4.197316670989004]
Deep neural network-based algorithms often exhibit incorrect predictions with unwarranted confidence levels.
Our study addresses the challenge of estimating spatial certainty for the location of instances with star-convex shapes.
We demonstrate that combining fractional certainty estimation over individual certainty scores is an effective strategy.
arXiv Detail & Related papers (2023-09-19T10:49:33Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample, and use it to dynamically reweight the objective's loss terms so that the network focuses more on the representation learning of uncertain classes.
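A minimal sketch of what FIM-based reweighting might look like, loosely
modeled on the description above: an evidential head outputs non-negative
per-class evidence, alpha = evidence + 1 parameterizes a Dirichlet, and the
diagonal of the Dirichlet Fisher information serves as per-class loss
weights. This is an assumption-laden illustration, not the paper's exact
objective.

```python
import numpy as np
from scipy.special import polygamma

def fim_weighted_loss(evidence, y_onehot):
    """Hypothetical FIM-weighted evidential loss (sketch only).

    The Dirichlet Fisher information diagonal, trigamma(alpha_k) -
    trigamma(alpha_0), reweights the per-class squared errors, so classes
    with little evidence receive larger weights.
    """
    alpha = evidence + 1.0
    alpha0 = alpha.sum(axis=1, keepdims=True)
    p = alpha / alpha0                                    # Dirichlet mean
    fim_diag = polygamma(1, alpha) - polygamma(1, alpha0)
    return float(np.mean(np.sum(fim_diag * (y_onehot - p) ** 2, axis=1)))

# Toy batch (made-up): one confident sample, one with almost no evidence.
evidence = np.array([[4.0, 0.5, 0.5], [0.1, 0.1, 0.1]])
labels = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
print(fim_weighted_loss(evidence, labels))
```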
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- Benchmarking common uncertainty estimation methods with histopathological images under domain shift and label noise [62.997667081978825]
In high-risk environments, deep learning models need to be able to judge their uncertainty and reject inputs when there is a significant chance of misclassification.
We conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole Slide Images.
We observe that ensembles of methods generally lead to better uncertainty estimates as well as an increased robustness towards domain shifts and label noise.
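The basic recipe behind such ensemble results is simple: average the
members' predictive distributions and use, for example, the entropy of the
average as the uncertainty score that drives rejection. A minimal sketch
with made-up probabilities and an arbitrary threshold:

```python
import numpy as np

def ensemble_predict(prob_list):
    """Average the members' predictive distributions (rows sum to 1)."""
    return np.mean(np.stack(prob_list, axis=0), axis=0)

def predictive_entropy(probs, eps=1e-12):
    """Entropy of the averaged distribution, a common uncertainty score."""
    return -np.sum(probs * np.log(probs + eps), axis=-1)

# Three members, two inputs, three classes (hypothetical numbers).
members = [
    np.array([[0.7, 0.2, 0.1], [0.4, 0.3, 0.3]]),
    np.array([[0.8, 0.1, 0.1], [0.2, 0.5, 0.3]]),
    np.array([[0.6, 0.3, 0.1], [0.3, 0.3, 0.4]]),
]
probs = ensemble_predict(members)
uncertainty = predictive_entropy(probs)
reject = uncertainty > 1.0   # reject inputs above an illustrative threshold
print(probs, uncertainty, reject)
```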
arXiv Detail & Related papers (2023-01-03T11:34:36Z)
- What is Flagged in Uncertainty Quantification? Latent Density Models for Uncertainty Categorization [68.15353480798244]
Uncertainty Quantification (UQ) is essential for creating trustworthy machine learning models.
Recent years have seen a steep rise in UQ methods that can flag suspicious examples.
We propose a framework for categorizing uncertain examples flagged by UQ methods in classification tasks.
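One simple reading of such a framework: fit a density model to training-set
latents, then score flagged test points under it, so low-density points look
out-of-distribution while high-density flagged points look like ambiguous
boundary cases. The sketch below assumes a Gaussian mixture and synthetic
latents; the paper's actual density model and categories may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train_latents = rng.normal(size=(500, 16))                # synthetic latents
flagged = np.vstack([rng.normal(size=(5, 16)),            # in-support points
                     rng.normal(loc=6.0, size=(5, 16))])  # far-away points

gmm = GaussianMixture(n_components=4, random_state=0).fit(train_latents)
log_density = gmm.score_samples(flagged)
cutoff = np.quantile(gmm.score_samples(train_latents), 0.05)

# Low density under the training latents suggests out-of-distribution;
# high density despite being flagged suggests a boundary/ambiguous case.
category = np.where(log_density < cutoff, "out-of-distribution", "boundary-like")
print(category)
```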
arXiv Detail & Related papers (2022-07-11T19:47:00Z)
- Which models are innately best at uncertainty estimation? [15.929238800072195]
Deep neural networks must be equipped with an uncertainty estimation mechanism when deployed for risk-sensitive tasks.
This paper studies how deep architectures and their training regimes relate to the resulting selective prediction and uncertainty estimation performance.
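Selective prediction performance is commonly summarized by a risk-coverage
curve: rank the test inputs by confidence, answer only the most confident
fraction, and measure the error rate among the answered inputs. A minimal
sketch with hypothetical max-softmax confidences:

```python
import numpy as np

def risk_coverage_curve(confidence, correct):
    """Risk (error rate among answered inputs) at each coverage level,
    abstaining on the least confident inputs first."""
    order = np.argsort(-confidence)              # most confident first
    errors = (~correct[order]).astype(float)
    answered = np.arange(1, len(confidence) + 1)
    coverage = answered / len(confidence)
    risk = np.cumsum(errors) / answered
    return coverage, risk

conf = np.array([0.99, 0.95, 0.80, 0.60, 0.55])  # made-up confidences
corr = np.array([True, True, True, False, True])
coverage, risk = risk_coverage_curve(conf, corr)
print(coverage, risk)   # risk is 0 until the misclassified input is answered
```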
arXiv Detail & Related papers (2022-06-05T11:15:35Z)
- Ensemble-based Uncertainty Quantification: Bayesian versus Credal Inference [0.0]
We consider ensemble-based approaches to uncertainty quantification.
We specifically focus on Bayesian methods and approaches based on so-called credal sets.
The effectiveness of corresponding measures is evaluated and compared in an empirical study on classification with a reject option.
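To make the contrast concrete, a toy sketch of both views built from the
same ensemble: the Bayesian route averages the members and thresholds the
predictive entropy, while a much-simplified credal route treats the members'
outputs as spanning a credal set and thresholds the width of the resulting
class-probability intervals. All numbers and thresholds are hypothetical.

```python
import numpy as np

member_probs = np.array([    # one input, three ensemble members, 3 classes
    [0.70, 0.20, 0.10],
    [0.30, 0.50, 0.20],
    [0.50, 0.30, 0.20],
])

# Bayesian view: average the members, reject on high predictive entropy.
mean_p = member_probs.mean(axis=0)
bayes_unc = -(mean_p * np.log(mean_p)).sum()

# Credal view (simplified): take the members as a credal set and use the
# widest lower/upper class-probability interval as the uncertainty score.
lower, upper = member_probs.min(axis=0), member_probs.max(axis=0)
credal_unc = (upper - lower).max()

print("reject (Bayesian):", bayes_unc > 1.0)    # illustrative threshold
print("reject (credal):", credal_unc > 0.3)     # illustrative threshold
```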
arXiv Detail & Related papers (2021-07-21T22:47:24Z)
- Cross-Domain Similarity Learning for Face Recognition in Unseen Domains [90.35908506994365]
We introduce a novel cross-domain metric learning loss, which we dub Cross-Domain Triplet (CDT) loss, to improve face recognition in unseen domains.
The CDT loss encourages learning semantically meaningful features by enforcing compact feature clusters of identities from one domain.
Our method does not require careful hard-pair sample mining and filtering strategy during training.
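The CDT loss builds on triplet-style metric learning; the sketch below shows
only the plain triplet loss it extends. The cross-domain similarity
statistics that give CDT its name are not reproduced here, and the margin is
an arbitrary choice.

```python
import torch
import torch.nn.functional as F

def triplet_loss(anchor, positive, negative, margin=0.3):
    """Plain triplet loss: pull same-identity embeddings together and push
    different identities at least `margin` further apart."""
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

# Toy embeddings (hypothetical): batch of 4, dimension 8.
anchor, positive, negative = (torch.randn(4, 8) for _ in range(3))
print(triplet_loss(anchor, positive, negative))
```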
arXiv Detail & Related papers (2021-03-12T19:48:01Z)
- Probabilistic Deep Learning for Instance Segmentation [9.62543698736491]
We propose a generic method to obtain model-inherent uncertainty estimates within proposal-free instance segmentation models.
We evaluate our method on the BBBC010 C. elegans dataset, where it yields competitive performance.
arXiv Detail & Related papers (2020-08-24T19:51:48Z)
- On the uncertainty of self-supervised monocular depth estimation [52.13311094743952]
Self-supervised paradigms for monocular depth estimation are very appealing since they do not require ground truth annotations at all.
We explore for the first time how to estimate the uncertainty for this task and how this affects depth accuracy.
We propose a novel technique specifically designed for self-supervised approaches.
arXiv Detail & Related papers (2020-05-13T09:00:55Z)