Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis
- URL: http://arxiv.org/abs/2210.03736v1
- Date: Wed, 5 Oct 2022 07:01:06 GMT
- Title: Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis
- Authors: Benjamin Lambert, Florence Forbes, Alan Tucholka, Senan Doyle, Harmonie Dehaene and Michel Dojat
- Abstract summary: We propose an overview of the existing methods to quantify uncertainty associated with Deep Learning predictions.
We focus on applications to medical image analysis, which present specific challenges due to the high dimensionality of images and their quality variability.
- Score: 1.0439136407307046
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The acceptance of Deep Learning (DL) models in the clinical field remains low compared with the quantity of high-performing solutions reported in the literature. In particular, end users are reluctant to rely on the bare point predictions of DL models. Uncertainty quantification methods have been proposed in the literature as a potential response, softening the blunt decision provided by the DL black box and thus increasing the interpretability and acceptability of the result for the final user. In this review, we propose an overview of the existing methods to quantify the uncertainty associated with DL predictions. We focus on applications to medical image analysis, which present specific challenges due to the high dimensionality of images and their variable quality, as well as constraints associated with real-life clinical routine. We then discuss the evaluation protocols used to validate the relevance of uncertainty estimates. Finally, we highlight the open challenges of uncertainty quantification in the medical field.
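As a concrete illustration of the kind of method such a review covers, here is a minimal sketch of Monte-Carlo dropout, one of the most common ways to attach an uncertainty estimate to a DL prediction. The PyTorch classifier interface and tensor shapes are assumptions for illustration, not details taken from the paper.

```python
import torch
import torch.nn as nn

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 20):
    """Monte-Carlo dropout: keep dropout active at test time and average
    the softmax outputs of several stochastic forward passes."""
    model.eval()
    # Re-enable dropout layers only (batch norm etc. stay in eval mode).
    for m in model.modules():
        if isinstance(m, nn.Dropout):
            m.train()
    with torch.no_grad():
        probs = torch.stack([model(x).softmax(dim=-1) for _ in range(n_samples)])
    mean_probs = probs.mean(dim=0)  # predictive distribution
    entropy = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    return mean_probs, entropy      # prediction plus an uncertainty score
```

High predictive entropy flags inputs on which the stochastic passes disagree, which is typically where a clinician's review is most valuable.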
Related papers
- SepsisLab: Early Sepsis Prediction with Uncertainty Quantification and Active Sensing [67.8991481023825]
Sepsis is the leading cause of in-hospital mortality in the USA.
Existing predictive models are usually trained on high-quality data with little missing information.
For potentially high-risk patients whose predictions have low confidence due to limited observations, we propose a robust active sensing algorithm.
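The abstract does not spell out the algorithm, but a generic uncertainty-driven active-sensing heuristic can be sketched as follows: among the unobserved variables, query the one whose plausible values change the risk prediction the most. The names and the predict_proba interface below are hypothetical, not SepsisLab's actual implementation.

```python
import numpy as np

def next_measurement(predict_proba, x_obs, missing_idx, imputation_samples):
    """Pick the unobserved variable whose plausible values make the risk
    prediction vary the most (a generic heuristic, not SepsisLab's exact
    algorithm).

    predict_proba      : callable mapping a feature vector to a risk score
    x_obs              : 1-D array with np.nan at unobserved positions
    missing_idx        : indices of the unobserved variables
    imputation_samples : dict {index: array of plausible values}
    """
    spread = {}
    for j in missing_idx:
        risks = []
        for v in imputation_samples[j]:
            x = x_obs.copy()
            x[j] = v
            # Crude stand-in: fill the remaining gaps with a neutral default.
            x = np.nan_to_num(x, nan=0.0)
            risks.append(predict_proba(x))
        spread[j] = np.std(risks)  # high spread => measuring j is informative
    return max(spread, key=spread.get)
```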
arXiv Detail & Related papers (2024-07-24T04:47:36Z)
- Unified Uncertainty Estimation for Cognitive Diagnosis Models [70.46998436898205]
We propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models.
We decompose the uncertainty of the diagnostic parameters into a data aspect and a model aspect.
Our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
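The data/model split described here corresponds to the standard aleatoric/epistemic decomposition. Below is a minimal sketch of the usual entropy-based decomposition over an ensemble of predictions, given as a generic recipe rather than the paper's exact formulation.

```python
import numpy as np

def decompose_uncertainty(probs: np.ndarray, eps: float = 1e-12):
    """Split predictive uncertainty into a data-related (aleatoric) and a
    model-related (epistemic) part from an ensemble of predictions.

    probs : shape (n_models, n_classes), each row a softmax output.
    """
    mean = probs.mean(axis=0)
    total = -(mean * np.log(mean + eps)).sum()                     # entropy of the mean
    aleatoric = -(probs * np.log(probs + eps)).sum(axis=1).mean()  # mean entropy
    epistemic = total - aleatoric                                  # mutual information
    return total, aleatoric, epistemic
```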
arXiv Detail & Related papers (2024-03-09T13:48:20Z)
- A review of uncertainty quantification in medical image analysis: probabilistic and non-probabilistic methods [11.972374203751562]
Uncertainty quantification methods have been proposed as a potential solution to quantify the reliability of machine learning models.
This review aims to allow researchers from both clinical and technical backgrounds to gain a quick yet in-depth understanding of research on uncertainty quantification for machine learning models in medical image analysis.
arXiv Detail & Related papers (2023-10-09T10:15:48Z)
- Benchmarking Scalable Epistemic Uncertainty Quantification in Organ Segmentation [7.313010190714819]
Quantifying the uncertainty associated with model predictions is crucial in safety-critical clinical applications.
Deep learning based methods for automatic organ segmentation have shown promise in aiding diagnosis and treatment planning.
It is unclear which method is preferred in the medical image analysis setting.
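Deep ensembles are one of the scalable epistemic-uncertainty baselines such benchmarks typically compare. A minimal sketch for binary organ segmentation follows, where per-voxel disagreement across ensemble members serves as the uncertainty map; the binary setting and output shapes are assumptions.

```python
import torch

def ensemble_voxel_uncertainty(models, image: torch.Tensor):
    """Epistemic uncertainty map from a deep ensemble: per-voxel variance of
    the foreground probability across ensemble members.

    models : list of trained segmentation networks, each returning logits of
             shape (batch, 1, H, W) in this binary sketch.
    """
    with torch.no_grad():
        probs = torch.stack([torch.sigmoid(m(image)) for m in models])
    mean_map = probs.mean(dim=0)  # consensus segmentation probability
    var_map = probs.var(dim=0)    # high variance = ensemble members disagree
    return mean_map, var_map
```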
arXiv Detail & Related papers (2023-08-15T00:09:33Z)
- A Review of Uncertainty Estimation and its Application in Medical Imaging [32.860577735207094]
Uncertainty estimation plays a pivotal role in producing a confidence evaluation along with the prediction of the deep model.
This is particularly important in medical imaging, where the uncertainty in the model's predictions can be used to identify areas of concern or to provide additional information to the clinician.
arXiv Detail & Related papers (2023-02-16T06:54:33Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which uses an uncertainty-calibrated error metric to filter reliable data.
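In the subjective-logic formulation that evidential methods like DEviS build on, the network outputs non-negative evidence that parameterizes a Dirichlet distribution, and uncertainty is the vacuity K/S. The sketch below shows this generic recipe, not DEviS's exact architecture or loss.

```python
import torch
import torch.nn.functional as F

def dirichlet_uncertainty(logits: torch.Tensor):
    """Subjective-logic style uncertainty: non-negative evidence defines a
    Dirichlet distribution over class probabilities, and the uncertainty
    mass (vacuity) u = K / S is large when the total evidence S is small.

    logits : (batch, K) raw network outputs for K classes.
    """
    evidence = F.softplus(logits)           # e_k >= 0
    alpha = evidence + 1.0                  # Dirichlet concentration parameters
    S = alpha.sum(dim=-1, keepdim=True)     # Dirichlet strength
    prob = alpha / S                        # expected class probabilities
    u = logits.shape[-1] / S.squeeze(-1)    # vacuity: high = little evidence
    return prob, u
```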
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- Improving Trustworthiness of AI Disease Severity Rating in Medical Imaging with Ordinal Conformal Prediction Sets [0.7734726150561088]
A lack of statistically rigorous uncertainty quantification is a significant factor undermining trust in AI results.
Recent developments in distribution-free uncertainty quantification present practical solutions for these issues.
We demonstrate a technique for forming ordinal prediction sets that are guaranteed to contain the correct stenosis severity.
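A minimal sketch of the ordinal prediction-set idea: grow a contiguous set of severity grades around the most probable one until its mass reaches a threshold calibrated on held-out data via split conformal prediction. This mirrors the general recipe, not necessarily the paper's exact scoring function.

```python
import numpy as np

def grow_contiguous_set(probs, stop):
    """Greedily grow a contiguous set of ordinal grades around the argmax,
    adding the more probable neighbour, until stop(lo, hi, mass) is True.
    Returns (lo, hi) inclusive and the accumulated probability mass."""
    lo = hi = int(np.argmax(probs))
    mass = probs[lo]
    while not stop(lo, hi, mass) and (lo > 0 or hi < len(probs) - 1):
        left = probs[lo - 1] if lo > 0 else -1.0
        right = probs[hi + 1] if hi < len(probs) - 1 else -1.0
        if left >= right:
            lo -= 1
            mass += probs[lo]
        else:
            hi += 1
            mass += probs[hi]
    return lo, hi, mass

def calibrate(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration: each case's score is the smallest mass at
    which the greedy set covers the true grade; return the finite-sample
    adjusted (1 - alpha) quantile of these scores."""
    scores = [grow_contiguous_set(p, lambda lo, hi, m: lo <= y <= hi)[2]
              for p, y in zip(cal_probs, cal_labels)]
    n = len(scores)
    level = min(1.0, np.ceil((n + 1) * (1 - alpha)) / n)
    return np.quantile(scores, level, method="higher")

def predict_set(probs, q_hat):
    """Prediction set for a new case: grow until the calibrated mass is reached."""
    lo, hi, _ = grow_contiguous_set(probs, lambda lo, hi, m: m >= q_hat)
    return lo, hi
```

By construction the returned set is a contiguous range of grades, which is what makes it clinically readable for an ordinal target such as stenosis severity.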
arXiv Detail & Related papers (2022-07-05T18:01:20Z)
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
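One standard post-hoc feature-importance method usable in this setting is permutation importance. Below is a minimal sketch applied to a treatment-effect estimator, where cate_model is a hypothetical callable returning per-patient effect estimates; the paper's own choice of importance method may differ.

```python
import numpy as np

def permutation_importance(cate_model, X, n_repeats=10, seed=0):
    """Post-hoc permutation importance for a treatment-effect estimator:
    measure how much predicted effects move when one feature is shuffled.
    cate_model(X) is a hypothetical callable returning one effect estimate
    per row of X."""
    rng = np.random.default_rng(seed)
    baseline = cate_model(X)
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        shifts = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # break feature-prediction link
            shifts.append(np.mean(np.abs(cate_model(Xp) - baseline)))
        importances[j] = np.mean(shifts)
    return importances  # larger = feature influences predicted effects more
```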
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
- Can uncertainty boost the reliability of AI-based diagnostic methods in digital pathology? [3.8424737607413157]
We evaluate whether adding uncertainty estimates to DL predictions in digital pathology could increase their value for clinical applications.
We compare the effectiveness of model-integrated methods (MC dropout and Deep ensembles) with a model-agnostic approach.
Our results show that uncertainty estimates can add some reliability and reduce sensitivity to classification threshold selection.
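A common model-agnostic alternative of the kind this comparison refers to is test-time augmentation. Here is a minimal sketch for a generic classifier; the specific augmentations (flips, intensity jitter) are illustrative assumptions rather than the study's protocol.

```python
import torch

def tta_uncertainty(model, image: torch.Tensor, n_aug: int = 16):
    """Model-agnostic uncertainty via test-time augmentation: perturb the
    input, collect predictions, and use their spread as an uncertainty
    signal. Works with any trained classifier, no retraining needed."""
    model.eval()
    preds = []
    with torch.no_grad():
        for i in range(n_aug):
            x = image
            if i % 2 == 1:
                x = torch.flip(x, dims=[-1])        # horizontal flip
            x = x + 0.01 * torch.randn_like(x)      # mild intensity jitter
            preds.append(model(x).softmax(dim=-1))
    preds = torch.stack(preds)
    return preds.mean(dim=0), preds.std(dim=0)      # prediction, spread
```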
arXiv Detail & Related papers (2021-12-17T10:10:00Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnosis at discharge, procedures performed, in-hospital mortality, and length of stay are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)