EDUE: Expert Disagreement-Guided One-Pass Uncertainty Estimation for Medical Image Segmentation
- URL: http://arxiv.org/abs/2403.16594v1
- Date: Mon, 25 Mar 2024 10:13:52 GMT
- Title: EDUE: Expert Disagreement-Guided One-Pass Uncertainty Estimation for Medical Image Segmentation
- Authors: Kudaibergen Abutalip, Numan Saeed, Ikboljon Sobirov, Vincent Andrearczyk, Adrien Depeursinge, Mohammad Yaqub
- Abstract summary: This paper proposes an Expert Disagreement-Guided Uncertainty Estimation (EDUE) method for medical image segmentation.
By leveraging variability in ground-truth annotations from multiple raters, we guide the model during training and incorporate random sampling-based strategies to enhance confidence calibration.
- Score: 1.757276115858037
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deploying deep learning (DL) models in medical applications relies on predictive performance and other critical factors, such as conveying trustworthy predictive uncertainty. Uncertainty estimation (UE) methods provide potential solutions for evaluating prediction reliability and improving model confidence calibration. Despite increasing interest in UE, challenges persist, such as the need for explicit methods to capture aleatoric uncertainty and to align uncertainty estimates with real-life disagreements among domain experts. This paper proposes an Expert Disagreement-Guided Uncertainty Estimation (EDUE) method for medical image segmentation. By leveraging variability in ground-truth annotations from multiple raters, we guide the model during training and incorporate random sampling-based strategies to enhance confidence calibration. Our method achieves average improvements of 55% and 23% in correlation with expert disagreement at the image and pixel levels, respectively, better calibration, and competitive segmentation performance compared to state-of-the-art deep ensembles, while requiring only a single forward pass.
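The core mechanism lends itself to a short illustration. Below is a minimal, hypothetical sketch (not the authors' code) of disagreement-guided training: an auxiliary uncertainty head is supervised with the per-pixel variance across raters' masks, alongside a standard segmentation loss on the consensus mask. The architecture, rater fusion, and loss weight are all assumptions.

```python
# Hedged sketch: a segmentation head plus an auxiliary uncertainty head,
# the latter supervised by pixel-wise disagreement across multiple raters.
# Architecture, rater fusion, and loss weighting are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    def __init__(self, in_ch=1):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
        )
        self.seg_head = nn.Conv2d(16, 1, 1)   # foreground logit
        self.unc_head = nn.Conv2d(16, 1, 1)   # predicted disagreement logit

    def forward(self, x):
        h = self.backbone(x)
        return self.seg_head(h), self.unc_head(h)

def disagreement_map(rater_masks):
    # rater_masks: (R, B, 1, H, W) binary masks from R raters.
    # Per-pixel variance across raters is one simple disagreement proxy.
    return rater_masks.float().var(dim=0, unbiased=False)

model = TinySegNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(4, 1, 64, 64)                    # dummy images
raters = torch.rand(3, 4, 1, 64, 64) > 0.5       # 3 dummy rater masks

seg_logit, unc_logit = model(x)
consensus = (raters.float().mean(dim=0) > 0.5).float()  # majority-vote target
target_disagreement = disagreement_map(raters)

seg_loss = F.binary_cross_entropy_with_logits(seg_logit, consensus)
unc_loss = F.mse_loss(torch.sigmoid(unc_logit), target_disagreement)
loss = seg_loss + 0.5 * unc_loss    # 0.5 is an arbitrary illustrative weight
loss.backward()
opt.step()
```

At test time, only the single forward pass through `unc_head` is needed, which is what makes this family of methods cheaper than sampling-based ensembles.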
Related papers
- Revisiting Confidence Estimation: Towards Reliable Failure Prediction [53.79160907725975]
We identify a general, widespread, but largely neglected phenomenon: most confidence estimation methods are harmful for detecting misclassification errors.
We propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance.
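The summary does not specify how flat minima are found; sharpness-aware minimization (SAM) is one standard recipe for this, so the sketch below assumes it. The paper's exact procedure may differ.

```python
# Hedged sketch of a sharpness-aware minimization (SAM) step, one common way
# to seek flat minima; not necessarily the paper's exact procedure.
import torch

def sam_step(model, loss_fn, x, y, base_opt, rho=0.05):
    # 1) Ascend to the worst-case weights within an L2 ball of radius rho.
    loss_fn(model(x), y).backward()
    grads = [p.grad.detach().clone() for p in model.parameters() if p.grad is not None]
    norm = torch.norm(torch.stack([g.norm() for g in grads]))
    eps = [rho * g / (norm + 1e-12) for g in grads]
    with torch.no_grad():
        for p, e in zip((p for p in model.parameters() if p.grad is not None), eps):
            p.add_(e)
    # 2) Compute the gradient at the perturbed point, then undo the perturbation.
    model.zero_grad()
    loss_fn(model(x), y).backward()
    with torch.no_grad():
        for p, e in zip((p for p in model.parameters() if p.grad is not None), eps):
            p.sub_(e)
    # 3) Update the original weights with the sharpness-aware gradient.
    base_opt.step()
    base_opt.zero_grad()

model = torch.nn.Linear(10, 2)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 10), torch.randint(0, 2, (8,))
sam_step(model, torch.nn.functional.cross_entropy, x, y, opt)
```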
arXiv Detail & Related papers (2024-03-05T11:44:14Z) - Improving Robustness and Reliability in Medical Image Classification with Latent-Guided Diffusion and Nested-Ensembles [4.249986624493547]
Ensemble deep learning has been shown to achieve high predictive accuracy and reliable uncertainty estimation.
However, perturbations in the input images at test time can still lead to significant performance degradation.
LaDiNE is a novel and robust probabilistic method that is capable of inferring informative and invariant latent variables from the input images.
arXiv Detail & Related papers (2023-10-24T15:53:07Z) - Binary Classification with Confidence Difference [100.08818204756093]
This paper delves into a novel weakly supervised binary classification problem called confidence-difference (ConfDiff) classification.
We propose a risk-consistent approach to tackle this problem and show that the estimation error bound achieves the optimal convergence rate.
We also introduce a risk correction approach to mitigate overfitting problems, whose consistency and convergence rate are also proven.
arXiv Detail & Related papers (2023-10-09T11:44:50Z) - Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty that accounts for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
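As a generic, hedged illustration of sampling-based uncertainty at inference time (a simple input-perturbation stand-in, not this paper's specific strategy), the sketch below draws several perturbed copies of the input and reads the spread of the predictions as an uncertainty proxy.

```python
# Hedged sketch: a generic post-hoc sampling scheme (here, test-time input
# perturbation) that yields multiple plausible outputs per input; the paper's
# actual sampling strategy differs in detail.
import torch

@torch.no_grad()
def sampling_uncertainty(model, x, n_samples=8, noise_std=0.05):
    probs = []
    for _ in range(n_samples):
        x_aug = x + noise_std * torch.randn_like(x)   # simple perturbation
        probs.append(torch.sigmoid(model(x_aug)))
    probs = torch.stack(probs)                        # (S, B, ...)
    mean = probs.mean(dim=0)                          # point prediction
    var = probs.var(dim=0, unbiased=False)            # spread = uncertainty proxy
    return mean, var

model = torch.nn.Conv2d(1, 1, 3, padding=1)           # dummy segmentation model
mean, var = sampling_uncertainty(model, torch.randn(2, 1, 32, 32))
```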
arXiv Detail & Related papers (2023-08-03T12:43:21Z) - EVIL: Evidential Inference Learning for Trustworthy Semi-supervised
Medical Image Segmentation [7.58442624111591]
We introduce Evidential Inference Learning (EVIL) into semi-supervised medical image segmentation.
EVIL provides a theoretically guaranteed solution to infer accurate uncertainty quantification in a single forward pass.
We show that EVIL achieves competitive performance in comparison with several state-of-the-art methods on the public dataset.
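EVIL's exact loss is in the paper; the sketch below only shows the standard evidential (subjective-logic) parameterization that such methods build on, where a single forward pass yields per-pixel Dirichlet evidence and a closed-form uncertainty mass.

```python
# Hedged sketch of the standard evidential parameterization behind methods
# like EVIL: one forward pass yields per-pixel Dirichlet evidence, from which
# class probabilities and an uncertainty mass follow in closed form.
import torch
import torch.nn.functional as F

def evidential_outputs(logits):
    # logits: (B, K, H, W) raw network outputs for K classes.
    evidence = F.softplus(logits)        # nonnegative evidence per class
    alpha = evidence + 1.0               # Dirichlet concentration parameters
    strength = alpha.sum(dim=1, keepdim=True)
    probs = alpha / strength             # expected class probabilities
    k = logits.shape[1]
    uncertainty = k / strength           # subjective-logic vacuity, in (0, 1]
    return probs, uncertainty

logits = torch.randn(2, 4, 32, 32)       # dummy 4-class segmentation logits
probs, unc = evidential_outputs(logits)
print(probs.sum(dim=1).mean().item(), unc.mean().item())  # probs sum to 1
```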
arXiv Detail & Related papers (2023-07-18T05:59:27Z) - Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
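As a loose illustration of the filtering step only (DEviS's uncertainty-calibrated error criterion is more elaborate than this), a plain threshold on per-image uncertainty might look as follows; the threshold `tau` and the scoring are arbitrary assumptions.

```python
# Hedged sketch of uncertainty-aware filtering: keep only samples whose
# estimated uncertainty falls below a threshold. DEviS's actual criterion
# (uncertainty-calibrated error) is more elaborate; tau here is arbitrary.
import torch

def filter_reliable(images, uncertainties, tau=0.2):
    # uncertainties: one scalar per image, e.g. mean per-pixel vacuity.
    keep = uncertainties < tau
    return images[keep], keep

imgs = torch.randn(8, 1, 32, 32)
unc = torch.rand(8)                      # dummy per-image uncertainty scores
reliable, mask = filter_reliable(imgs, unc)
print(f"kept {mask.sum().item()} of {len(unc)} images")
```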
arXiv Detail & Related papers (2023-01-01T05:02:46Z) - Reliability-Aware Prediction via Uncertainty Learning for Person Image
Retrieval [51.83967175585896]
UAL aims at providing reliability-aware predictions by considering data uncertainty and model uncertainty simultaneously.
Data uncertainty captures the "noise" inherent in the sample, while model uncertainty depicts the model's confidence in the sample's prediction.
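UAL's concrete architecture is in the paper; the sketch below uses a common stand-in for this decomposition: a variance head for data uncertainty and Monte Carlo dropout spread for model uncertainty.

```python
# Hedged sketch of jointly estimating data and model uncertainty: a variance
# head captures per-sample noise (data uncertainty), while the spread across
# Monte Carlo dropout passes captures model uncertainty. UAL differs in detail.
import torch
import torch.nn as nn

class MeanVarNet(nn.Module):
    def __init__(self, dim_in=64, dim_out=8):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(dim_in, 64), nn.ReLU(), nn.Dropout(0.2))
        self.mean_head = nn.Linear(64, dim_out)
        self.logvar_head = nn.Linear(64, dim_out)   # data (aleatoric) uncertainty

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

@torch.no_grad()
def predict_with_uncertainty(model, x, n_samples=16):
    model.train()                         # keep dropout active at test time
    means, logvars = zip(*(model(x) for _ in range(n_samples)))
    means = torch.stack(means)
    data_unc = torch.stack(logvars).exp().mean(dim=0)   # avg predicted variance
    model_unc = means.var(dim=0, unbiased=False)        # spread across dropout masks
    return means.mean(dim=0), data_unc, model_unc

net = MeanVarNet()
emb, du, mu = predict_with_uncertainty(net, torch.randn(4, 64))
```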
arXiv Detail & Related papers (2022-10-24T17:53:20Z) - Uncertainty-Aware Training for Cardiac Resynchronisation Therapy
Response Prediction [3.090173647095682]
Quantifying the uncertainty of a prediction is one way to provide such interpretability and promote trust.
We quantify the data (aleatoric) and model (epistemic) uncertainty of a DL model for Cardiac Resynchronisation Therapy response prediction from cardiac magnetic resonance images.
We perform a preliminary investigation of an uncertainty-aware loss function that can be used to retrain an existing DL image-based classification model to encourage confidence in correct predictions.
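One plausible form of such a loss (an assumption for illustration, not the paper's definition) adds an entropy penalty only on samples the model currently classifies correctly:

```python
# Hedged sketch: augment cross-entropy with a term that penalizes predictive
# entropy only on currently-correct samples, nudging the model to be confident
# where it is right. The paper's actual loss may be formulated differently.
import torch
import torch.nn.functional as F

def uncertainty_aware_loss(logits, targets, lam=0.1):
    ce = F.cross_entropy(logits, targets)
    probs = logits.softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)
    correct = (logits.argmax(dim=1) == targets).float()
    # Only correct predictions pay an entropy penalty; lam is illustrative.
    return ce + lam * (correct * entropy).mean()

logits = torch.randn(8, 3, requires_grad=True)
loss = uncertainty_aware_loss(logits, torch.randint(0, 3, (8,)))
loss.backward()
```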
arXiv Detail & Related papers (2021-09-22T10:37:50Z) - DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
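That subtraction can be written down directly. The sketch below is a simplified, hypothetical rendering of the DEUP recipe: fit a secondary error predictor to the main model's observed per-example loss, then subtract an aleatoric estimate so the remainder approximates epistemic uncertainty.

```python
# Hedged sketch of the DEUP idea: a secondary "error predictor" is fit to the
# main model's observed loss; subtracting an aleatoric estimate leaves an
# approximation of epistemic uncertainty. All components are simplified.
import torch
import torch.nn as nn
import torch.nn.functional as F

error_predictor = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(error_predictor.parameters(), lr=1e-3)

def train_error_predictor(features, observed_loss):
    # features: per-example inputs; observed_loss: the main model's
    # per-example loss on held-out data.
    pred = error_predictor(features).squeeze(-1)
    loss = F.mse_loss(pred, observed_loss)
    opt.zero_grad()
    loss.backward()
    opt.step()

def epistemic_uncertainty(features, aleatoric_estimate):
    # DEUP: epistemic ~ predicted total error minus aleatoric part (clamped at 0).
    with torch.no_grad():
        total = error_predictor(features).squeeze(-1)
    return (total - aleatoric_estimate).clamp_min(0.0)

feats = torch.randn(16, 10)
train_error_predictor(feats, torch.rand(16))        # dummy observed losses
eu = epistemic_uncertainty(feats, torch.full((16,), 0.1))
```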
arXiv Detail & Related papers (2021-02-16T23:50:35Z) - Diagnostic Uncertainty Calibration: Towards Reliable Machine Predictions
in Medical Domain [20.237847764018138]
We propose an evaluation framework for class probability estimates (CPEs) in the presence of label uncertainty.
We also formalize evaluation metrics for higher-order statistics, including inter-rater disagreement.
We show that our approach significantly enhances the reliability of uncertainty estimates.
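A minimal version of such an evaluation (much simpler than the paper's framework) correlates a model's uncertainty scores with the entropy of rater votes; the data and scoring below are dummy placeholders.

```python
# Hedged sketch: score uncertainty estimates against label uncertainty by
# correlating them with inter-rater disagreement, here the entropy of rater
# votes. The paper's evaluation framework is richer than this.
import numpy as np
from scipy.stats import spearmanr

def vote_entropy(rater_labels, n_classes):
    # rater_labels: (n_samples, n_raters) integer class votes.
    counts = np.stack([(rater_labels == c).sum(axis=1) for c in range(n_classes)], axis=1)
    p = counts / rater_labels.shape[1]
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = np.where(p > 0, p * np.log(p), 0.0)
    return -terms.sum(axis=1)

rng = np.random.default_rng(0)
raters = rng.integers(0, 3, size=(100, 5))          # 5 raters, 3 classes (dummy)
model_unc = rng.random(100)                         # dummy model uncertainty scores
rho, pval = spearmanr(model_unc, vote_entropy(raters, 3))
print(f"Spearman rho = {rho:.3f} (p = {pval:.3f})")
```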
arXiv Detail & Related papers (2020-07-03T12:54:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.