Diagnostic Uncertainty Calibration: Towards Reliable Machine Predictions in Medical Domain
- URL: http://arxiv.org/abs/2007.01659v4
- Date: Mon, 22 Mar 2021 06:23:53 GMT
- Title: Diagnostic Uncertainty Calibration: Towards Reliable Machine Predictions in Medical Domain
- Authors: Takahiro Mimori, Keiko Sasada, Hirotaka Matsui, Issei Sato
- Abstract summary: We propose an evaluation framework for class probability estimates (CPEs) in the presence of label uncertainty.
We also formalize evaluation metrics for higher-order statistics, including inter-rater disagreement.
We show that our approach significantly enhances the reliability of uncertainty estimates.
- Score: 20.237847764018138
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose an evaluation framework for class probability estimates (CPEs) in the presence of label uncertainty, which is commonly observed as diagnosis disagreement between experts in the medical domain. We also formalize evaluation metrics for higher-order statistics, including inter-rater disagreement, to assess predictions on label uncertainty. Moreover, we propose a novel post-hoc method called $\alpha$-calibration, which equips neural network classifiers with calibrated distributions over CPEs. Using synthetic experiments and a large-scale medical imaging application, we show that our approach significantly enhances the reliability of uncertainty estimates: disagreement probabilities and posterior CPEs.
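The idea of equipping a frozen classifier with a distribution over CPEs can be illustrated with a minimal post-hoc sketch: wrap the network's softmax output p in a Dirichlet(a * p) and fit the scalar concentration a on held-out multi-rater data by maximum likelihood. This is only a hedged approximation of the spirit of $\alpha$-calibration, not the authors' method; all function names are illustrative.

    import numpy as np
    from scipy.optimize import minimize_scalar
    from scipy.stats import dirichlet

    def fit_concentration(probs, rater_freqs):
        """probs: (N, K) softmax outputs of a frozen classifier;
        rater_freqs: (N, K) empirical label frequencies across raters."""
        eps = 1e-6
        probs = np.clip(probs, eps, 1.0)
        freqs = np.clip(rater_freqs, eps, 1.0)
        def nll(a):
            # negative log-likelihood of rater frequencies under Dirichlet(a * p)
            return -sum(dirichlet.logpdf(f / f.sum(), a * p / p.sum())
                        for p, f in zip(probs, freqs))
        return minimize_scalar(nll, bounds=(1.0, 1e4), method="bounded").x

    # usage: a_hat = fit_concentration(val_probs, val_rater_freqs)
    # posterior over CPEs for a new case: dirichlet(a_hat * p_new)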
Related papers
- SepsisLab: Early Sepsis Prediction with Uncertainty Quantification and Active Sensing [67.8991481023825]
Sepsis is the leading cause of in-hospital mortality in the USA.
Existing predictive models are usually trained on high-quality data with little missing information.
For potentially high-risk patients whose predictions have low confidence due to limited observations, we propose a robust active sensing algorithm; a schematic sketch follows.
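A hedged sketch of uncertainty-driven active sensing in this spirit (an illustrative heuristic, not SepsisLab's actual algorithm; the model and imputer interfaces are hypothetical): among the unobserved variables, acquire the one to which the predicted risk is most sensitive under plausible imputations.

    import numpy as np

    def next_measurement(model, x, observed, imputer, n_samples=50):
        """x: (d,) partially observed features; observed: (d,) boolean mask.
        model and imputer are hypothetical stand-ins."""
        x_base = imputer.point_fill(x, observed)  # hypothetical: point-impute all missing entries
        best_j, best_var = None, -1.0
        for j in np.where(~observed)[0]:
            preds = []
            for _ in range(n_samples):
                x_try = x_base.copy()
                x_try[j] = imputer.sample_one(x, observed, j)  # hypothetical: draw plausible values for j
                preds.append(model.predict_proba(x_try[None, :])[0, 1])
            v = float(np.var(preds))  # sensitivity of predicted sepsis risk to feature j
            if v > best_var:
                best_j, best_var = j, v
        return best_j  # measure the most influential missing variable next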
arXiv Detail & Related papers (2024-07-24T04:47:36Z)
- EDUE: Expert Disagreement-Guided One-Pass Uncertainty Estimation for Medical Image Segmentation [1.757276115858037]
This paper proposes an Expert Disagreement-Guided Uncertainty Estimation (EDUE) for medical image segmentation.
By leveraging variability in ground-truth annotations from multiple raters, we guide the model during training and incorporate random sampling-based strategies to enhance calibration confidence.
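A minimal sketch of disagreement-guided supervision in this spirit (an illustrative loss, not the exact EDUE objective): train against the mean rater mask and regularize the prediction's pixel-wise entropy to track the raters' disagreement.

    import torch
    import torch.nn.functional as F

    def disagreement_guided_loss(logits, rater_masks, lam=0.1):
        """logits: (B, 1, H, W); rater_masks: (B, R, H, W) binary masks from R raters."""
        soft_target = rater_masks.float().mean(dim=1, keepdim=True)  # (B, 1, H, W)
        bce = F.binary_cross_entropy_with_logits(logits, soft_target)
        p = torch.sigmoid(logits)
        pred_entropy = -(p * p.clamp_min(1e-6).log()
                         + (1 - p) * (1 - p).clamp_min(1e-6).log())
        # rater disagreement as target uncertainty map (maximal at 50/50 splits)
        t = soft_target
        rater_entropy = -(t * t.clamp_min(1e-6).log()
                          + (1 - t) * (1 - t).clamp_min(1e-6).log())
        return bce + lam * F.mse_loss(pred_entropy, rater_entropy)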
arXiv Detail & Related papers (2024-03-25T10:13:52Z)
- Uncertainty Quantification in Machine Learning Based Segmentation: A Post-Hoc Approach for Left Ventricle Volume Estimation in MRI [0.0]
Left ventricular (LV) volume estimation is critical for valid diagnosis and management of various cardiovascular conditions.
Recent machine learning advancements, particularly U-Net-like convolutional networks, have facilitated automated segmentation for medical images.
This study proposes a novel methodology for post-hoc uncertainty estimation in LV volume prediction.
arXiv Detail & Related papers (2023-10-30T13:44:55Z)
- Benchmarking Scalable Epistemic Uncertainty Quantification in Organ Segmentation [7.313010190714819]
Quantifying the uncertainty associated with model predictions is crucial in critical clinical applications.
Deep learning based methods for automatic organ segmentation have shown promise in aiding diagnosis and treatment planning.
It is unclear which method is preferred in the medical image analysis setting.
arXiv Detail & Related papers (2023-08-15T00:09:33Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which uses the metric of uncertainty-calibrated error to filter reliable data.
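The subjective-logic modeling the summary refers to has a standard form (this is the generic evidential formulation, not DEviS's full pipeline): non-negative evidence e_k defines a Dirichlet with alpha_k = e_k + 1, per-class belief e_k / S, and a vacuity term K / S that is large when evidence is scarce.

    import numpy as np

    def subjective_logic(evidence):
        """evidence: (..., K) non-negative network outputs, e.g. softplus(logits)."""
        K = evidence.shape[-1]
        alpha = evidence + 1.0                 # Dirichlet parameters
        S = alpha.sum(axis=-1, keepdims=True)  # Dirichlet strength
        belief = evidence / S                  # per-class belief mass
        vacuity = (K / S).squeeze(-1)          # uncertainty: high when evidence is scarce
        prob = alpha / S                       # expected class probabilities
        return belief, vacuity, prob           # belief masses and vacuity sum to 1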
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- Improving Trustworthiness of AI Disease Severity Rating in Medical Imaging with Ordinal Conformal Prediction Sets [0.7734726150561088]
A lack of statistically rigorous uncertainty quantification is a significant factor undermining trust in AI results.
Recent developments in distribution-free uncertainty quantification present practical solutions for these issues.
We demonstrate a technique for forming ordinal prediction sets that are guaranteed to contain the correct stenosis severity.
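Ordinal (contiguous) prediction sets can be built with a generic split-conformal construction, sketched below; this follows the standard recipe and is not necessarily the paper's exact procedure.

    import numpy as np

    def grow_contiguous(p):
        """Yield ((lo, hi), mass): contiguous grade sets grown from the mode,
        always adding the more probable neighboring grade."""
        lo = hi = int(np.argmax(p)); mass = p[lo]
        yield (lo, hi), mass
        while lo > 0 or hi < len(p) - 1:
            left = p[lo - 1] if lo > 0 else -1.0
            right = p[hi + 1] if hi < len(p) - 1 else -1.0
            if left >= right: lo -= 1; mass += p[lo]
            else: hi += 1; mass += p[hi]
            yield (lo, hi), mass

    def calibrate(cal_probs, cal_labels, alpha=0.1):
        """Split-conformal threshold: mass needed to capture the true grade."""
        scores = []
        for p, y in zip(cal_probs, cal_labels):
            for (lo, hi), m in grow_contiguous(p):
                if lo <= y <= hi:
                    scores.append(m); break
        n = len(scores)
        level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
        return np.quantile(scores, level, method="higher")

    def predict_set(p, qhat):
        """Smallest contiguous set whose mass reaches the calibrated threshold."""
        for (lo, hi), m in grow_contiguous(p):
            if m >= qhat:
                return list(range(lo, hi + 1))
        return list(range(len(p)))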
arXiv Detail & Related papers (2022-07-05T18:01:20Z)
- Bayesian autoencoders with uncertainty quantification: Towards trustworthy anomaly detection [78.24964622317634]
In this work, the formulation of Bayesian autoencoders (BAEs) is adopted to quantify the total anomaly uncertainty.
To evaluate the quality of uncertainty, we consider the task of classifying anomalies with the additional option of rejecting predictions of high uncertainty.
Our experiments demonstrate the effectiveness of the BAE and total anomaly uncertainty on a set of benchmark datasets and two real datasets for manufacturing.
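A hedged sketch of ensemble-style BAE scoring (the reconstruct interface is a hypothetical stand-in for sampled autoencoder passes): the mean reconstruction error over posterior samples serves as the anomaly score, and its variance across samples quantifies the uncertainty used for the reject option.

    import numpy as np

    def bae_scores(autoencoders, X):
        """autoencoders: list of M posterior samples, each with a hypothetical
        .reconstruct(X) -> X_hat; X: (N, d) inputs."""
        errs = np.stack([((ae.reconstruct(X) - X) ** 2).mean(axis=1)
                         for ae in autoencoders])  # (M, N) per-sample errors
        anomaly = errs.mean(axis=0)                # ensemble anomaly score
        uncertainty = errs.var(axis=0)             # disagreement across posterior samples
        return anomaly, uncertainty

    # reject-option usage: abstain on the most uncertain fraction of predictions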
arXiv Detail & Related papers (2022-02-25T12:20:04Z)
- Can uncertainty boost the reliability of AI-based diagnostic methods in digital pathology? [3.8424737607413157]
We evaluate whether adding uncertainty estimates to DL predictions in digital pathology could add value for clinical applications.
We compare the effectiveness of model-integrated methods (MC dropout and Deep ensembles) with a model-agnostic approach.
Our results show that uncertainty estimates can add some reliability and reduce sensitivity to classification threshold selection.
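MC dropout, one of the model-integrated methods compared, has a standard test-time form: keep dropout stochastic and average T forward passes, using the spread across passes as the uncertainty estimate. A minimal PyTorch sketch:

    import torch

    @torch.no_grad()
    def mc_dropout_predict(model, x, T=30):
        # train() keeps dropout stochastic; in practice one enables only the
        # dropout modules so that batch-norm statistics stay frozen
        model.train()
        probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(T)])
        return probs.mean(dim=0), probs.std(dim=0)  # predictive mean and spread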
arXiv Detail & Related papers (2021-12-17T10:10:00Z)
- DEUP: Direct Epistemic Uncertainty Prediction [56.087230230128185]
Epistemic uncertainty is the part of the out-of-sample prediction error that is due to the learner's lack of knowledge.
We propose a principled approach for directly estimating epistemic uncertainty by learning to predict generalization error and subtracting an estimate of aleatoric uncertainty.
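The decomposition stated above can be sketched directly (the choice of regressor and feature set is an illustrative assumption, not DEUP's exact instantiation): fit an auxiliary predictor of the main model's out-of-sample loss and subtract an aleatoric estimate.

    import numpy as np
    from sklearn.ensemble import GradientBoostingRegressor

    def fit_error_predictor(features, out_of_sample_losses):
        """features: (N, d) inputs (or model-derived features);
        out_of_sample_losses: (N,) held-out losses of the main predictor."""
        return GradientBoostingRegressor().fit(features, out_of_sample_losses)

    def epistemic_uncertainty(error_predictor, features, aleatoric):
        total = error_predictor.predict(features)     # predicted generalization error
        return np.clip(total - aleatoric, 0.0, None)  # epistemic = total - aleatoric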
arXiv Detail & Related papers (2021-02-16T23:50:35Z)
- Bayesian Uncertainty Estimation of Learned Variational MRI Reconstruction [63.202627467245584]
We introduce a Bayesian variational framework to quantify the model-immanent (epistemic) uncertainty.
We demonstrate that our approach yields competitive results for undersampled MRI reconstruction.
arXiv Detail & Related papers (2021-02-12T18:08:14Z)
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that the predictive uncertainty estimated by current methods does not correlate strongly with the prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.