Calibration of Model Uncertainty for Dropout Variational Inference
- URL: http://arxiv.org/abs/2006.11584v1
- Date: Sat, 20 Jun 2020 14:12:55 GMT
- Title: Calibration of Model Uncertainty for Dropout Variational Inference
- Authors: Max-Heinrich Laves, Sontje Ihler, Karl-Philipp Kortmann, Tobias
Ortmaier
- Abstract summary: In this paper, different logit scaling methods are extended to dropout variational inference to recalibrate model uncertainty.
Experimental results show that logit scaling considerably reduces miscalibration in terms of UCE.
- Score: 1.8065361710947976
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The model uncertainty obtained by variational Bayesian inference with Monte
Carlo dropout is prone to miscalibration. In this paper, different logit
scaling methods are extended to dropout variational inference to recalibrate
model uncertainty. Expected uncertainty calibration error (UCE) is presented as
a metric to measure miscalibration. The effectiveness of recalibration is
evaluated on CIFAR-10/100 and SVHN for recent CNN architectures. Experimental
results show that logit scaling considerably reduces miscalibration in terms of
UCE. Well-calibrated uncertainty enables reliable rejection of uncertain
predictions and robust detection of out-of-distribution data.
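The abstract describes two technical ingredients: logit scaling applied inside Monte Carlo dropout inference, and the expected uncertainty calibration error (UCE) as the miscalibration metric. Below is a minimal NumPy sketch of both ideas, not the authors' reference implementation; the use of normalized predictive entropy as the uncertainty measure, the equal-width binning, and all function and variable names are assumptions for illustration.
```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def mc_dropout_predict(mc_logits, temperature=1.0):
    # Average the softmax over T stochastic forward passes after dividing the
    # logits by a scalar temperature (the simplest form of logit scaling).
    probs = softmax(mc_logits / temperature, axis=-1)  # (T, N, C)
    return probs.mean(axis=0)                          # (N, C)

def predictive_entropy(probs):
    # Normalized predictive entropy in [0, 1], used here as the uncertainty measure.
    ent = -(probs * np.log(probs + 1e-12)).sum(axis=-1)
    return ent / np.log(probs.shape[-1])

def uce(probs, labels, n_bins=15):
    # Expected uncertainty calibration error: bin samples by uncertainty and
    # accumulate the weighted gap between per-bin error rate and mean uncertainty.
    unc = predictive_entropy(probs)
    err = (probs.argmax(axis=-1) != labels).astype(float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    total = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (unc > lo) & (unc <= hi)
        if mask.any():
            total += mask.mean() * abs(err[mask].mean() - unc[mask].mean())
    return total

# Hypothetical usage: mc_logits has shape (T, N, C) from T dropout passes.
# probs = mc_dropout_predict(mc_logits, temperature=1.5)
# print("UCE:", uce(probs, labels))
```
In practice, the scaling temperature would be fit on a held-out calibration set (for example by minimizing the negative log-likelihood of the averaged predictive distribution) and UCE would then be reported on the test set; a threshold on the resulting uncertainty can also serve as the rejection rule for uncertain predictions mentioned in the abstract.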
Related papers
- Calibration by Distribution Matching: Trainable Kernel Calibration Metrics [56.629245030893685]
We introduce kernel-based calibration metrics that unify and generalize popular forms of calibration for both classification and regression.
These metrics admit differentiable sample estimates, making it easy to incorporate a calibration objective into empirical risk minimization.
We provide intuitive mechanisms to tailor calibration metrics to a decision task, and enforce accurate loss estimation and no-regret decisions.
arXiv Detail & Related papers (2023-10-31T06:19:40Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed one accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- Reliability-Aware Prediction via Uncertainty Learning for Person Image Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
arXiv Detail & Related papers (2022-10-24T17:53:20Z)
- On Calibrated Model Uncertainty in Deep Learning [0.0]
We extend the approximate inference for the loss-calibrated Bayesian framework to dropweights based Bayesian neural networks.
We show that decisions informed by loss-calibrated uncertainty can improve diagnostic performance to a greater extent than straightforward alternatives.
arXiv Detail & Related papers (2022-06-15T20:16:32Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Recalibration of Aleatoric and Epistemic Regression Uncertainty in Medical Imaging [2.126171264016785]
Well-calibrated uncertainty in regression allows robust rejection of unreliable predictions or detection of out-of-distribution samples.
$\sigma$ scaling is able to reliably recalibrate predictive uncertainty (see the sketch after this list).
arXiv Detail & Related papers (2021-04-26T07:18:58Z)
- Improving model calibration with accuracy versus uncertainty optimization [17.056768055368384]
A well-calibrated model should be accurate when it is certain about its prediction and indicate high uncertainty when it is likely to be inaccurate.
We propose an optimization method that leverages the relationship between accuracy and uncertainty as an anchor for uncertainty calibration.
We demonstrate our approach with mean-field variational inference and compare with state-of-the-art methods.
arXiv Detail & Related papers (2020-12-14T20:19:21Z)
- CRUDE: Calibrating Regression Uncertainty Distributions Empirically [4.552831400384914]
Calibrated uncertainty estimates in machine learning are crucial to many fields such as autonomous vehicles, medicine, and weather and climate forecasting.
We present a calibration method for regression settings that does not assume a particular uncertainty distribution over the error: Calibrating Regression Uncertainty Distributions Empirically (CRUDE).
CRUDE demonstrates consistently sharper, better calibrated, and more accurate uncertainty estimates than state-of-the-art techniques.
arXiv Detail & Related papers (2020-05-26T03:08:43Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by current methods does not correlate strongly with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
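The $\sigma$ scaling mentioned in the "Recalibration of Aleatoric and Epistemic Regression Uncertainty in Medical Imaging" entry above rescales the predicted standard deviation by a single scalar fitted on a calibration set. A minimal sketch under a Gaussian likelihood assumption is shown below; the closed-form scale is simply the maximum-likelihood solution for that model, and the original paper may fit the scale differently (e.g. by gradient descent on the NLL). All names are illustrative.
```python
import numpy as np

def fit_sigma_scale(y_true, mu, sigma):
    # Scalar s minimizing the Gaussian NLL of N(mu, (s * sigma)^2) on a
    # calibration set; closed form: s^2 = mean((y - mu)^2 / sigma^2).
    z2 = ((y_true - mu) / sigma) ** 2
    return float(np.sqrt(z2.mean()))

def recalibrate_sigma(sigma, s):
    # Apply the fitted scale to predicted standard deviations at test time.
    return s * sigma

# Hypothetical usage with predictions from a calibration split:
# s = fit_sigma_scale(y_cal, mu_cal, sigma_cal)   # s > 1 => model was overconfident
# sigma_test_recal = recalibrate_sigma(sigma_test, s)
```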
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.