Recalibration of Aleatoric and Epistemic Regression Uncertainty in
Medical Imaging
- URL: http://arxiv.org/abs/2104.12376v1
- Date: Mon, 26 Apr 2021 07:18:58 GMT
- Title: Recalibration of Aleatoric and Epistemic Regression Uncertainty in
Medical Imaging
- Authors: Max-Heinrich Laves, Sontje Ihler, Jacob F. Fast, Lüder A. Kahrs,
Tobias Ortmaier
- Abstract summary: Well-calibrated uncertainty in regression allows robust rejection of unreliable predictions or detection of out-of-distribution samples.
$\sigma$ scaling is able to reliably recalibrate predictive uncertainty.
- Score: 2.126171264016785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The consideration of predictive uncertainty in medical imaging with deep
learning is of utmost importance. We apply estimation of both aleatoric and
epistemic uncertainty by variational Bayesian inference with Monte Carlo
dropout to regression tasks and show that predictive uncertainty is
systematically underestimated. We apply $\sigma$ scaling with a single scalar
value, a simple yet effective calibration method for both types of
uncertainty. The performance of our approach is evaluated on a variety of
common medical regression data sets using different state-of-the-art
convolutional network architectures. In our experiments, $\sigma$ scaling
reliably recalibrates predictive uncertainty. It is easy to implement and
preserves prediction accuracy. Well-calibrated uncertainty in regression allows
robust rejection of unreliable predictions or detection of out-of-distribution
samples. Our source code is available at
https://github.com/mlaves/well-calibrated-regression-uncertainty
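To make the recalibration idea concrete, below is a minimal, illustrative Python/PyTorch sketch of the approach described in the abstract, not the authors' reference implementation (see the linked repository for that). It assumes a hypothetical heteroscedastic network `model` that returns a mean and a log-variance per input; the function names, the number of Monte Carlo samples, and the use of Adam to fit the scalar are assumptions made for illustration. A single scalar s is fitted on a held-out calibration set by minimizing the Gaussian negative log-likelihood with the predictive variance scaled by s^2.

# Illustrative sketch only; assumed names: `model`, `mc_dropout_predict`, `fit_sigma_scale`.
import torch

def mc_dropout_predict(model, x, n_samples=25):
    """Monte Carlo dropout: keep dropout active at test time and average.

    Returns the predictive mean, the mean aleatoric variance, and the
    epistemic variance (spread of the sampled means).
    """
    model.train()  # keeps dropout stochastic (in practice, enable only the dropout layers)
    means, variances = [], []
    with torch.no_grad():
        for _ in range(n_samples):
            mu, log_var = model(x)          # assumed output: (mean, log-variance)
            means.append(mu)
            variances.append(log_var.exp())
    means = torch.stack(means)              # (n_samples, batch, ...)
    variances = torch.stack(variances)
    mu_hat = means.mean(dim=0)
    aleatoric = variances.mean(dim=0)
    epistemic = means.var(dim=0)
    return mu_hat, aleatoric, epistemic

def fit_sigma_scale(mu, var, y, steps=500, lr=1e-2):
    """Fit a single scalar s on a calibration set by minimising the Gaussian
    negative log-likelihood with the predictive variance scaled by s**2."""
    log_s = torch.zeros(1, requires_grad=True)   # optimise log s so that s stays positive
    opt = torch.optim.Adam([log_s], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        scaled_var = torch.exp(2.0 * log_s) * var        # s**2 * sigma**2
        nll = 0.5 * (torch.log(scaled_var) + (y - mu) ** 2 / scaled_var).mean()
        nll.backward()
        opt.step()
    return log_s.exp().detach()

# Usage sketch (x_cal, y_cal denote a held-out calibration split):
# mu, aleatoric, epistemic = mc_dropout_predict(model, x_cal)
# s = fit_sigma_scale(mu, aleatoric + epistemic, y_cal)
# calibrated_var = s**2 * (aleatoric + epistemic)

In this sketch the same scalar rescales the combined aleatoric and epistemic variance, mirroring the abstract's statement that a single scalar value suffices for both types of uncertainty; fitting a separate scalar per component would be an equally simple variant.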
Related papers
- Beyond the Norms: Detecting Prediction Errors in Regression Models [26.178065248948773]
This paper tackles the challenge of detecting unreliable behavior in regression algorithms.
We introduce a notion of unreliability in regression that arises when the output of the regressor exceeds a specified discrepancy (or error).
We show empirical improvements in error detection for multiple regression tasks, consistently outperforming popular baseline approaches.
arXiv Detail & Related papers (2024-06-11T05:51:44Z)
- Selective Nonparametric Regression via Testing [54.20569354303575]
We develop an abstention procedure via testing the hypothesis on the value of the conditional variance at a given point.
Unlike existing methods, the proposed procedure accounts not only for the value of the variance itself but also for the uncertainty of the corresponding variance predictor.
arXiv Detail & Related papers (2023-09-28T13:04:11Z)
- Beta quantile regression for robust estimation of uncertainty in the presence of outliers [1.6377726761463862]
Quantile Regression can be used to estimate aleatoric uncertainty in deep neural networks.
We propose a robust solution for quantile regression that incorporates concepts from robust divergence.
arXiv Detail & Related papers (2023-09-14T01:18:57Z)
- How Reliable is Your Regression Model's Uncertainty Under Real-World Distribution Shifts? [46.05502630457458]
We propose a benchmark of 8 image-based regression datasets with different types of challenging distribution shifts.
We find that while methods are well calibrated when there is no distribution shift, they all become highly overconfident on many of the benchmark datasets.
arXiv Detail & Related papers (2023-02-07T18:54:39Z)
- On Calibrated Model Uncertainty in Deep Learning [0.0]
We extend approximate inference for the loss-calibrated Bayesian framework to dropweights-based Bayesian neural networks.
We show that decisions informed by loss-calibrated uncertainty can improve diagnostic performance to a greater extent than straightforward alternatives.
arXiv Detail & Related papers (2022-06-15T20:16:32Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- Learning Probabilistic Ordinal Embeddings for Uncertainty-Aware Regression [91.3373131262391]
Uncertainty is the only certainty there is.
Traditionally, the direct regression formulation is considered and the uncertainty is modeled by modifying the output space to a certain family of probabilistic distributions.
How to model the uncertainty within the present-day technologies for regression remains an open issue.
arXiv Detail & Related papers (2021-03-25T06:56:09Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Calibrated Reliable Regression using Maximum Mean Discrepancy [45.45024203912822]
Modern deep neural networks still produce unreliable predictive uncertainty.
In this paper, we are concerned with getting well-calibrated predictions in regression tasks.
Experiments on non-trivial real datasets show that our method can produce well-calibrated and sharp prediction intervals.
arXiv Detail & Related papers (2020-06-18T03:38:12Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that predictive uncertainty estimated by the current methods does not highly correlate with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.