A Review of Uncertainty Estimation and its Application in Medical
Imaging
- URL: http://arxiv.org/abs/2302.08119v3
- Date: Tue, 16 May 2023 03:41:29 GMT
- Title: A Review of Uncertainty Estimation and its Application in Medical
Imaging
- Authors: Ke Zou and Zhihao Chen and Xuedong Yuan and Xiaojing Shen and Meng
Wang and Huazhu Fu
- Abstract summary: Uncertainty estimation plays a pivotal role in producing a confidence evaluation along with the prediction of the deep model.
This is particularly important in medical imaging, where the uncertainty in the model's predictions can be used to identify areas of concern or to provide additional information to the clinician.
- Score: 32.860577735207094
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The use of AI systems in healthcare for the early screening of diseases is of
great clinical importance. Deep learning has shown great promise in medical
imaging, but the reliability and trustworthiness of AI systems limit their
deployment in real clinical scenes, where patient safety is at stake.
Uncertainty estimation plays a pivotal role in producing a confidence
evaluation along with the prediction of the deep model. This is particularly
important in medical imaging, where the uncertainty in the model's predictions
can be used to identify areas of concern or to provide additional information
to the clinician. In this paper, we review the various types of uncertainty in
deep learning, including aleatoric uncertainty and epistemic uncertainty. We
further discuss how they can be estimated in medical imaging. More importantly,
we review recent advances in deep learning models that incorporate uncertainty
estimation in medical imaging. Finally, we discuss the challenges and future
directions in uncertainty estimation in deep learning for medical imaging. We
hope this review will ignite further interest in the community and provide
researchers with an up-to-date reference regarding applications of uncertainty
estimation models in medical imaging.
Related papers
- SepsisLab: Early Sepsis Prediction with Uncertainty Quantification and Active Sensing [67.8991481023825]
Sepsis is the leading cause of in-hospital mortality in the USA.
Existing predictive models are usually trained on high-quality data with little missing information.
For potential high-risk patients whose predictions have low confidence due to limited observations, we propose a robust active sensing algorithm.
arXiv Detail & Related papers (2024-07-24T04:47:36Z)
- A review of uncertainty quantification in medical image analysis: probabilistic and non-probabilistic methods [11.972374203751562]
Uncertainty quantification methods have been proposed as a potential solution to quantify the reliability of machine learning models.
This review aims to allow researchers from both clinical and technical backgrounds to gain a quick and yet in-depth understanding of the research in uncertainty quantification for medical image analysis machine learning models.
arXiv Detail & Related papers (2023-10-09T10:15:48Z)
- Informing clinical assessment by contextualizing post-hoc explanations of risk prediction models in type-2 diabetes [50.8044927215346]
We consider a comorbidity risk prediction scenario and focus on contexts regarding the patient's clinical state.
We employ several state-of-the-art LLMs to present contexts around risk prediction model inferences and evaluate their acceptability.
Our paper is one of the first end-to-end analyses identifying the feasibility and benefits of contextual explanations in a real-world clinical use case.
arXiv Detail & Related papers (2023-02-11T18:07:11Z)
- A Trustworthy Framework for Medical Image Analysis with Deep Learning [71.48204494889505]
TRUDLMIA is a trustworthy deep learning framework for medical image analysis.
It is anticipated that the framework will support researchers and clinicians in advancing the use of deep learning for dealing with public health crises including COVID-19.
arXiv Detail & Related papers (2022-12-06T05:30:22Z)
- Trustworthy clinical AI solutions: a unified review of uncertainty quantification in deep learning models for medical image analysis [1.0439136407307046]
We propose an overview of the existing methods to quantify uncertainty associated with deep learning predictions.
We focus on applications to medical image analysis, which present specific challenges due to the high dimensionality of images and their quality variability.
arXiv Detail & Related papers (2022-10-05T07:01:06Z)
- Boosting the interpretability of clinical risk scores with intervention predictions [59.22442473992704]
We propose a joint model of intervention policy and adverse event risk as a means to explicitly communicate the model's assumptions about future interventions.
We show how combining typical risk scores, such as the likelihood of mortality, with future intervention probability scores leads to more interpretable clinical predictions.
arXiv Detail & Related papers (2022-07-06T19:49:42Z)
- Joint Dermatological Lesion Classification and Confidence Modeling with Uncertainty Estimation [23.817227116949958]
We propose an overall framework that jointly considers dermatological classification and uncertainty estimation.
The estimated confidence of each feature, pooled from a confidence network, is used to avoid uncertain features and undesirable shifts.
We demonstrate the potential of the proposed approach on two state-of-the-art dermoscopic datasets.
arXiv Detail & Related papers (2021-07-19T11:54:37Z)
- Clinical Outcome Prediction from Admission Notes using Self-Supervised Knowledge Integration [55.88616573143478]
Outcome prediction from clinical text can prevent doctors from overlooking possible risks.
Diagnoses at discharge, procedures performed, in-hospital mortality and length-of-stay prediction are four common outcome prediction targets.
We propose clinical outcome pre-training to integrate knowledge about patient outcomes from multiple public sources.
arXiv Detail & Related papers (2021-02-08T10:26:44Z)
- UNITE: Uncertainty-based Health Risk Prediction Leveraging Multi-sourced Data [81.00385374948125]
We present UNcertaInTy-based hEalth risk prediction (UNITE) model.
UNITE provides accurate disease risk prediction and uncertainty estimation leveraging multi-sourced health data.
We evaluate UNITE on real-world disease risk prediction tasks: nonalcoholic fatty liver disease (NASH) and Alzheimer's disease (AD).
UNITE achieves up to 0.841 in F1 score for AD detection and up to 0.609 in PR-AUC for NASH detection, outperforming various state-of-the-art baselines by up to 19%.
arXiv Detail & Related papers (2020-10-22T02:28:11Z)
- Estimating Uncertainty and Interpretability in Deep Learning for Coronavirus (COVID-19) Detection [0.0]
Knowing how much confidence there is in a computer-based medical diagnosis is essential for gaining clinicians' trust in the technology.
In this paper, we investigate how drop-weights based Bayesian Convolutional Neural Networks (BCNNs) can estimate uncertainty in deep learning solutions.
We believe that the availability of uncertainty-aware deep learning solutions will enable wider adoption of Artificial Intelligence (AI) in clinical settings.
arXiv Detail & Related papers (2020-03-22T21:58:13Z)
- Bayesian Modelling in Practice: Using Uncertainty to Improve Trustworthiness in Medical Applications [2.446672595462589]
The Intensive Care Unit (ICU) is a hospital department where machine learning has the potential to provide valuable assistance in clinical decision making.
In practice, uncertain predictions should be presented to doctors with extra care in order to prevent potentially catastrophic treatment decisions.
We show how Bayesian modelling and the predictive uncertainty that it provides can be used to mitigate risk of misguided prediction.
arXiv Detail & Related papers (2019-06-20T13:51:07Z)
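A recurring pattern across these papers is that uncertain predictions should be handled with extra care rather than acted on automatically. One simple mechanism for this is selective prediction: defer to a clinician whenever the spread of sampled predictions (e.g. from a Bayesian model's posterior) exceeds a threshold. A minimal sketch follows; the sampled probabilities and the threshold value are illustrative assumptions, not figures from any of the papers above:

```python
import statistics

def triage(prob_samples, std_threshold=0.15):
    """Given sampled predicted probabilities for one patient, return the
    mean risk, the predictive spread, and a routing decision: 'auto' if
    the model is confident, 'refer-to-clinician' otherwise."""
    mean = statistics.fmean(prob_samples)
    spread = statistics.pstdev(prob_samples)
    decision = "auto" if spread <= std_threshold else "refer-to-clinician"
    return mean, spread, decision

# Illustrative posterior samples for two hypothetical patients.
confident = [0.91, 0.93, 0.90, 0.92, 0.94]
uncertain = [0.20, 0.75, 0.40, 0.85, 0.30]

print(triage(confident))  # small spread -> handled automatically
print(triage(uncertain))  # large spread -> deferred to a clinician
```

The threshold trades coverage against safety: lowering it refers more cases to clinicians, which is the behaviour these works argue for when patient safety is at stake.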
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.