Which models are innately best at uncertainty estimation?
- URL: http://arxiv.org/abs/2206.02152v1
- Date: Sun, 5 Jun 2022 11:15:35 GMT
- Title: Which models are innately best at uncertainty estimation?
- Authors: Ido Galil, Mohammed Dabbah, Ran El-Yaniv
- Abstract summary: Deep neural networks must be equipped with an uncertainty estimation mechanism when deployed for risk-sensitive tasks.
This paper studies how deep architectures and their training regimes affect selective prediction and uncertainty estimation performance.
- Score: 15.929238800072195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks must be equipped with an uncertainty estimation
mechanism when deployed for risk-sensitive tasks. This paper studies how deep
architectures and their training regimes affect selective prediction and
uncertainty estimation performance. We
consider both in-distribution uncertainties and class-out-of-distribution ones.
Moreover, we consider some of the most popular estimation performance metrics
previously proposed, including AUROC, ECE, AURC, and coverage for a selective
accuracy constraint. We present a novel and comprehensive study of selective
prediction and the uncertainty estimation performance of 484 existing
pretrained deep ImageNet classifiers that are available at popular
repositories. We identify numerous and previously unknown factors that affect
uncertainty estimation and examine the relationships between the different
metrics. We find that distillation-based training regimes consistently yield
better uncertainty estimations than other training schemes such as vanilla
training, pretraining on a larger dataset and adversarial training. We also
provide strong empirical evidence that ViT is by far the strongest architecture
in terms of uncertainty estimation performance, on every metric considered, in
both in-distribution and class-out-of-distribution scenarios.
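The selective-prediction metrics named above (AURC, ECE, and selective accuracy at a given coverage) can be illustrated with a minimal sketch computed from per-sample confidences and correctness flags. This is our own illustration under standard definitions, not the paper's evaluation code, and the function names are ours:

```python
import numpy as np

def risk_coverage_curve(confidence, correct):
    """Selective risk at every coverage level, keeping the most
    confident predictions first."""
    order = np.argsort(-confidence)                  # sort by descending confidence
    errors = 1.0 - correct[order].astype(float)
    n = len(errors)
    coverage = np.arange(1, n + 1) / n
    # risk at coverage k/n = mean error over the k most confident samples
    risk = np.cumsum(errors) / np.arange(1, n + 1)
    return coverage, risk

def aurc(confidence, correct):
    """Area under the risk-coverage curve (lower is better)."""
    _, risk = risk_coverage_curve(confidence, correct)
    return risk.mean()

def selective_accuracy_at_coverage(confidence, correct, target=0.5):
    """Top-1 accuracy over the `target` fraction of most confident samples."""
    order = np.argsort(-confidence)
    k = max(1, int(round(target * len(order))))
    return correct[order][:k].mean()

def expected_calibration_error(confidence, correct, n_bins=10):
    """Standard equal-width-bin ECE: coverage-weighted gap between
    mean confidence and accuracy in each bin."""
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidence > lo) & (confidence <= hi)
        if mask.any():
            ece += mask.mean() * abs(correct[mask].mean() - confidence[mask].mean())
    return ece
```

With a perfect confidence ranking (all errors at the low-confidence end), the risk stays at zero until the erroneous samples enter the covered set, which is what drives AURC down.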
Related papers
- Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent
Representations [28.875819909902244]
Uncertainty estimation aims to evaluate the confidence of a trained deep neural network.
Existing uncertainty estimation approaches rely on low-dimensional distributional assumptions.
We propose a new framework using data-adaptive high-dimensional hypothesis testing for uncertainty estimation.
arXiv Detail & Related papers (2023-10-25T12:22:18Z)
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty accounting for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
- Uncertainty Estimation by Fisher Information-based Evidential Deep Learning [61.94125052118442]
Uncertainty estimation is a key factor that makes deep learning reliable in practical applications.
We propose a novel method, Fisher Information-based Evidential Deep Learning ($\mathcal{I}$-EDL).
In particular, we introduce the Fisher Information Matrix (FIM) to measure the informativeness of the evidence carried by each sample. The objective loss terms are then dynamically reweighted accordingly, so that the network focuses more on the representation learning of uncertain classes.
arXiv Detail & Related papers (2023-03-03T16:12:59Z)
- What Can We Learn From The Selective Prediction And Uncertainty Estimation Performance Of 523 ImageNet Classifiers [15.929238800072195]
We present a novel study of selective prediction and the uncertainty estimation performance of 523 existing pretrained deep ImageNet classifiers.
We find that distillation-based training regimes consistently yield better uncertainty estimations than other training schemes.
For example, we discovered an unprecedented 99% top-1 selective accuracy on ImageNet at 47% coverage.
arXiv Detail & Related papers (2023-02-23T09:25:28Z)
- Fast Uncertainty Estimates in Deep Learning Interatomic Potentials [0.0]
We propose a method to estimate the predictive uncertainty based on a single neural network without the need for an ensemble.
We demonstrate that the quality of the uncertainty estimates matches those obtained from deep ensembles.
arXiv Detail & Related papers (2022-11-17T20:13:39Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify forecasting uncertainty using Bayesian approximation, capturing uncertainty that deterministic approaches fail to represent.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Evaluation of Machine Learning Techniques for Forecast Uncertainty Quantification [0.13999481573773068]
Ensemble forecasting is, so far, the most successful approach to produce relevant forecasts along with an estimation of their uncertainty.
Main limitations of ensemble forecasting are the high computational cost and the difficulty to capture and quantify different sources of uncertainty.
In this work, proof-of-concept experiments examine the performance of ANNs trained to predict a corrected system state and its uncertainty from only a single deterministic forecast.
arXiv Detail & Related papers (2021-11-29T16:52:17Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We study two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is part of out-of-sample prediction error due to the lack of knowledge of the learner.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- Pitfalls of In-Domain Uncertainty Estimation and Ensembling in Deep Learning [70.72363097550483]
In this study, we focus on in-domain uncertainty for image classification.
To provide more insight, we introduce the deep ensemble equivalent score (DEE).
arXiv Detail & Related papers (2020-02-15T23:28:19Z)
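Several entries above treat deep ensembles as the reference point for uncertainty estimation (DEE, for instance, scores a method against ensembles of growing size). As a minimal sketch of that ensemble baseline, one can average the members' softmax outputs and use the entropy of the averaged distribution as the uncertainty score. This is our own illustration under standard definitions, not code from any of the papers listed:

```python
import numpy as np

def ensemble_predictive_entropy(member_probs):
    """member_probs: array of shape (n_members, n_samples, n_classes)
    holding each ensemble member's softmax outputs.  Returns the
    entropy of the averaged predictive distribution per sample
    (higher entropy = more uncertain)."""
    mean_probs = member_probs.mean(axis=0)           # (n_samples, n_classes)
    eps = 1e-12                                      # guard against log(0)
    return -(mean_probs * np.log(mean_probs + eps)).sum(axis=1)
```

A maximally uncertain two-class prediction (uniform averaged probabilities) yields an entropy of ln 2, while a fully confident one yields an entropy near zero; ranking samples by this score gives a simple selective-prediction rule.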
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.