Uncertainty Estimation for Heatmap-based Landmark Localization
- URL: http://arxiv.org/abs/2203.02351v1
- Date: Fri, 4 Mar 2022 14:40:44 GMT
- Title: Uncertainty Estimation for Heatmap-based Landmark Localization
- Authors: Lawrence Schobs, Andrew J. Swift, Haiping Lu
- Abstract summary: We propose Quantile Binning, a data-driven method to categorise predictions by uncertainty with estimated error bounds.
We demonstrate this framework by comparing and contrasting three uncertainty measures.
We conclude by illustrating how filtering out gross mispredictions caught in our Quantile Bins significantly improves the proportion of predictions under an acceptable error threshold.
- Score: 4.673063715963989
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic anatomical landmark localization has made great strides by
leveraging deep learning methods in recent years. The ability to quantify the
uncertainty of these predictions is a vital ingredient needed to see these
methods adopted in clinical use, where it is imperative that erroneous
predictions are caught and corrected. We propose Quantile Binning, a
data-driven method to categorise predictions by uncertainty with estimated
error bounds. This framework can be applied to any continuous uncertainty
measure, allowing straightforward identification of the best subset of
predictions with accompanying estimated error bounds. We facilitate easy
comparison between uncertainty measures by constructing two evaluation metrics
derived from Quantile Binning. We demonstrate this framework by comparing and
contrasting three uncertainty measures (a baseline, the current gold standard,
and a proposed method combining aspects of the two), across two datasets (one
easy, one hard) and two heatmap-based landmark localization model paradigms
(U-Net and patch-based). We conclude by illustrating how filtering out gross
mispredictions caught in our Quantile Bins significantly improves the
proportion of predictions under an acceptable error threshold, and offer
recommendations on which uncertainty measure to use and how to use it.
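The Quantile Binning procedure described in the abstract can be sketched as: sort predictions by a continuous uncertainty measure, partition them into quantile bins, and estimate an error bound per bin from held-out errors. The sketch below is an illustrative reconstruction from the abstract alone; the function name, the choice of the per-bin maximum as the bound, and all variable names are hypothetical, not taken from the authors' code.

```python
import numpy as np

def quantile_binning(uncertainties, errors, n_bins=5):
    """Partition predictions into n_bins quantile bins of an uncertainty
    measure and estimate a localization-error bound per bin.

    Illustrative sketch only; not the paper's implementation.
    """
    uncertainties = np.asarray(uncertainties, dtype=float)
    errors = np.asarray(errors, dtype=float)

    # Quantiles of the uncertainty values define the bin edges,
    # so each bin holds (roughly) an equal share of the predictions.
    edges = np.quantile(uncertainties, np.linspace(0.0, 1.0, n_bins + 1))

    # Assign each prediction to a bin: 0 = most certain, n_bins-1 = least.
    bins = np.clip(np.searchsorted(edges, uncertainties, side="right") - 1,
                   0, n_bins - 1)

    # Estimated error bound per bin: here simply the maximum observed
    # error in that bin on held-out data (a stand-in for the paper's bound).
    bounds = [errors[bins == b].max() if np.any(bins == b) else np.nan
              for b in range(n_bins)]
    return bins, bounds
```

Filtering out the highest-uncertainty bin (`bins == n_bins - 1`) then discards the predictions most likely to be gross mispredictions, which is the filtering step the abstract refers to.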
Related papers
- Doubly Calibrated Estimator for Recommendation on Data Missing Not At Random [20.889464448762176]
We argue that existing estimators rely on miscalibrated imputed errors and propensity scores.
We propose a Doubly Calibrated Estimator that involves the calibration of both the imputation and propensity models.
arXiv Detail & Related papers (2024-02-26T05:08:52Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- On double-descent in uncertainty quantification in overparametrized models [24.073221004661427]
Uncertainty quantification is a central challenge in reliable and trustworthy machine learning.
We show a trade-off between classification accuracy and calibration, unveiling a double descent like behavior in the calibration curve of optimally regularized estimators.
This is in contrast with the empirical Bayes method, which we show to be well calibrated in our setting despite the higher generalization error and overparametrization.
arXiv Detail & Related papers (2022-10-23T16:01:08Z)
- On Calibrated Model Uncertainty in Deep Learning [0.0]
We extend the approximate inference for the loss-calibrated Bayesian framework to dropweights-based Bayesian neural networks.
We show that decisions informed by loss-calibrated uncertainty can improve diagnostic performance to a greater extent than straightforward alternatives.
arXiv Detail & Related papers (2022-06-15T20:16:32Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- The Aleatoric Uncertainty Estimation Using a Separate Formulation with Virtual Residuals [51.71066839337174]
Existing methods can quantify the error in the target estimation, but they tend to underestimate it.
We propose a new separable formulation for estimating a signal and its uncertainty, avoiding the effect of overfitting.
We demonstrate that the proposed method outperforms a state-of-the-art technique for signal and uncertainty estimation.
arXiv Detail & Related papers (2020-11-03T12:11:27Z)
- Probabilistic Deep Learning for Instance Segmentation [9.62543698736491]
We propose a generic method to obtain model-inherent uncertainty estimates within proposal-free instance segmentation models.
We evaluate our method on the BBBC010 C. elegans dataset, where it yields competitive performance.
arXiv Detail & Related papers (2020-08-24T19:51:48Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Efficient Ensemble Model Generation for Uncertainty Estimation with Bayesian Approximation in Segmentation [74.06904875527556]
We propose a generic and efficient segmentation framework to construct ensemble segmentation models.
In the proposed method, ensemble models can be efficiently generated by using the layer selection method.
We also devise a new pixel-wise uncertainty loss, which improves the predictive performance.
arXiv Detail & Related papers (2020-05-21T16:08:38Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that the predictive uncertainty estimated by current methods does not correlate strongly with prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
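Several of the entries above (e.g. Dense Uncertainty Estimation, Efficient Ensemble Model Generation) rely on ensemble-based uncertainty, whose simplest form takes the mean over ensemble members as the deterministic prediction and their disagreement (variance) as the uncertainty. A minimal sketch, assuming the members are plain callables; the names here are illustrative and not any paper's API:

```python
import numpy as np

def ensemble_predict(models, x):
    """Deterministic prediction and uncertainty from an ensemble.

    `models` is any iterable of callables mapping an input array to a
    prediction array of the same shape (illustrative stand-in).
    """
    preds = np.stack([m(x) for m in models])  # (n_members, ...)
    mean = preds.mean(axis=0)                 # deterministic prediction
    var = preds.var(axis=0)                   # member disagreement = uncertainty
    return mean, var
```

For heatmap-based landmark localization, the same idea applies per pixel or per predicted coordinate: high variance across members flags landmarks whose predictions should be reviewed.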
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.