A review of uncertainty quantification in medical image analysis:
probabilistic and non-probabilistic methods
- URL: http://arxiv.org/abs/2310.06873v1
- Date: Mon, 9 Oct 2023 10:15:48 GMT
- Title: A review of uncertainty quantification in medical image analysis:
probabilistic and non-probabilistic methods
- Authors: Ling Huang, Su Ruan, Yucheng Xing, Mengling Feng
- Abstract summary: Uncertainty quantification methods have been proposed as a potential solution to quantify the reliability of machine learning models.
This review aims to allow researchers from both clinical and technical backgrounds to gain a quick and yet in-depth understanding of the research in uncertainty quantification for medical image analysis machine learning models.
- Score: 11.972374203751562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The comprehensive integration of machine learning healthcare models within
clinical practice remains suboptimal, notwithstanding the proliferation of
high-performing solutions reported in the literature. A major factor hindering
widespread adoption is the lack of evidence supporting the reliability of these
models. Recently, uncertainty
quantification methods have been proposed as a potential solution to quantify
the reliability of machine learning models and thus increase the
interpretability and acceptability of the result. In this review, we offer a
comprehensive overview of prevailing methods proposed to quantify uncertainty
inherent in machine learning models developed for various medical image tasks.
Contrary to earlier reviews that exclusively focused on probabilistic methods,
this review also explores non-probabilistic approaches, thereby furnishing a
more holistic survey of research pertaining to uncertainty quantification for
machine learning models. Analysis of medical images with the summary and
discussion on medical applications and the corresponding uncertainty evaluation
protocols are presented, which focus on the specific challenges of uncertainty
in medical image analysis. We also highlight some potential future research
work at the end. Generally, this review aims to allow researchers from both
clinical and technical backgrounds to gain a quick and yet in-depth
understanding of the research in uncertainty quantification for medical image
analysis machine learning models.
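Probabilistic methods such as Monte Carlo dropout and deep ensembles approximate a predictive distribution by averaging several stochastic forward passes; the entropy of that average can then be decomposed into a data (aleatoric) term and a model (epistemic) term. A minimal NumPy sketch of this standard decomposition (illustrative only; the function name and example inputs are our own, not from the paper):

```python
import numpy as np

def predictive_uncertainty(probs):
    """Decompose predictive uncertainty from T stochastic forward passes.

    probs: array of shape (T, K) -- class probabilities from T samples
           (e.g. MC-dropout passes or ensemble members) over K classes.
    Returns (total, aleatoric, epistemic) in nats.
    """
    eps = 1e-12
    mean_p = probs.mean(axis=0)                     # predictive distribution
    total = -np.sum(mean_p * np.log(mean_p + eps))  # predictive entropy
    # Aleatoric: average entropy of the individual members
    aleatoric = -np.mean(np.sum(probs * np.log(probs + eps), axis=1))
    epistemic = total - aleatoric                   # mutual information
    return total, aleatoric, epistemic

# Members that agree -> low epistemic uncertainty
agree = np.array([[0.90, 0.10], [0.88, 0.12], [0.92, 0.08]])
# Members that disagree -> high epistemic uncertainty
disagree = np.array([[0.95, 0.05], [0.50, 0.50], [0.05, 0.95]])
```

High epistemic uncertainty (member disagreement) is the signal typically used to flag unreliable predictions for clinical review.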
Related papers
- Unified Uncertainty Estimation for Cognitive Diagnosis Models [70.46998436898205]
We propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models.
We decompose the uncertainty of diagnostic parameters into data aspect and model aspect.
Our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
arXiv Detail & Related papers (2024-03-09T13:48:20Z)
- Seeing Unseen: Discover Novel Biomedical Concepts via Geometry-Constrained Probabilistic Modeling [53.7117640028211]
We present a geometry-constrained probabilistic modeling treatment to resolve the identified issues.
We incorporate a suite of critical geometric properties to impose proper constraints on the layout of constructed embedding space.
A spectral graph-theoretic method is devised to estimate the number of potential novel classes.
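Spectral estimates of class count are commonly based on the eigengap heuristic: build an affinity graph over the embeddings, take the normalized Laplacian, and place the cluster count at the largest gap in its smallest eigenvalues. The sketch below is our own simplification of that general idea, not the paper's exact method:

```python
import numpy as np

def estimate_num_classes(X, sigma=1.0):
    """Eigengap heuristic for the number of clusters in embeddings X.

    Builds an RBF affinity graph, forms the symmetric normalized
    Laplacian, and returns k at the largest gap in the low spectrum.
    """
    # Pairwise squared distances and RBF affinities
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    deg = W.sum(1)
    # L_sym = I - D^{-1/2} W D^{-1/2}
    L = np.eye(len(X)) - W / np.sqrt(deg)[:, None] / np.sqrt(deg)[None, :]
    evals = np.sort(np.linalg.eigvalsh(L))
    # Search the lower half of the spectrum for the largest gap
    gaps = np.diff(evals[: len(X) // 2 + 1])
    return int(np.argmax(gaps)) + 1
```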
arXiv Detail & Related papers (2024-03-02T00:56:05Z)
- Benchmarking Scalable Epistemic Uncertainty Quantification in Organ Segmentation [7.313010190714819]
Quantifying uncertainty associated with model predictions is crucial in critical clinical applications.
Deep learning based methods for automatic organ segmentation have shown promise in aiding diagnosis and treatment planning.
It is unclear which method is preferred in the medical image analysis setting.
arXiv Detail & Related papers (2023-08-15T00:09:33Z)
- Validating polyp and instrument segmentation methods in colonoscopy through Medico 2020 and MedAI 2021 Challenges [58.32937972322058]
This paper reports on the "Medico automatic polyp segmentation (Medico 2020)" and "MedAI: Transparency in Medical Image (MedAI 2021)" competitions.
We present a comprehensive summary, analyze each contribution, highlight the strengths of the best-performing methods, and discuss the potential for clinical translation of such methods.
arXiv Detail & Related papers (2023-07-30T16:08:45Z)
- Beyond Known Reality: Exploiting Counterfactual Explanations for Medical Research [1.6574413179773761]
Our study uses counterfactual explanations to explore the applicability of "what if?" scenarios in medical research.
Our aim is to expand our understanding of magnetic resonance imaging (MRI) features used for diagnosing pediatric posterior fossa brain tumors.
arXiv Detail & Related papers (2023-07-05T09:14:09Z)
- A Review of Uncertainty Estimation and its Application in Medical Imaging [32.860577735207094]
Uncertainty estimation plays a pivotal role in producing a confidence evaluation along with the prediction of the deep model.
This is particularly important in medical imaging, where the uncertainty in the model's predictions can be used to identify areas of concern or to provide additional information to the clinician.
arXiv Detail & Related papers (2023-02-16T06:54:33Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patients' clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis [1.0439136407307046]
We propose an overview of the existing methods to quantify uncertainty associated with Deep Learning predictions.
We focus on applications to medical image analysis, which present specific challenges due to the high dimensionality of images and their quality variability.
arXiv Detail & Related papers (2022-10-05T07:01:06Z)
- Improving Trustworthiness of AI Disease Severity Rating in Medical Imaging with Ordinal Conformal Prediction Sets [0.7734726150561088]
A lack of statistically rigorous uncertainty quantification is a significant factor undermining trust in AI results.
Recent developments in distribution-free uncertainty quantification present practical solutions for these issues.
We demonstrate a technique for forming ordinal prediction sets that are guaranteed to contain the correct stenosis severity.
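Distribution-free guarantees of this kind are typically obtained with split conformal calibration. The sketch below shows one simplified way to form contiguous ordinal prediction sets; it illustrates the general technique and is our own variant, not the authors' exact procedure:

```python
import numpy as np

def _greedy_interval(p, stop):
    """Expand a contiguous class interval around the mode, always adding
    the more probable neighbor, until stop(lo, hi, mass) is True."""
    lo = hi = int(np.argmax(p))
    mass = float(p[lo])
    while not stop(lo, hi, mass):
        left = p[lo - 1] if lo > 0 else -1.0
        right = p[hi + 1] if hi < len(p) - 1 else -1.0
        if left >= right:
            lo -= 1
            mass += p[lo]
        else:
            hi += 1
            mass += p[hi]
    return lo, hi, mass

def calibrate(cal_probs, cal_labels, alpha=0.1):
    """Split-conformal calibration: the nonconformity score of an example
    is the interval mass accumulated before its true label is covered."""
    scores = [
        _greedy_interval(p, lambda lo, hi, m, y=y: lo <= y <= hi)[2]
        for p, y in zip(cal_probs, cal_labels)
    ]
    n = len(scores)
    q = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)  # finite-sample correction
    return float(np.quantile(scores, q, method="higher"))

def predict_set(p, qhat):
    """Smallest contiguous interval whose mass reaches the calibrated qhat."""
    lo, hi, _ = _greedy_interval(
        p, lambda lo, hi, m: m >= qhat or (lo == 0 and hi == len(p) - 1))
    return list(range(lo, hi + 1))
```

Because the set is grown outward from the most likely class, it is always a contiguous range of severity grades, which is what makes it suitable for ordinal targets such as stenosis severity.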
arXiv Detail & Related papers (2022-07-05T18:01:20Z)
- Benchmarking Heterogeneous Treatment Effect Models through the Lens of Interpretability [82.29775890542967]
Estimating personalized effects of treatments is a complex, yet pervasive problem.
Recent developments in the machine learning literature on heterogeneous treatment effect estimation gave rise to many sophisticated, but opaque, tools.
We use post-hoc feature importance methods to identify features that influence the model's predictions.
arXiv Detail & Related papers (2022-06-16T17:59:05Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.