Uncertainty aware and explainable diagnosis of retinal disease
- URL: http://arxiv.org/abs/2101.12041v1
- Date: Tue, 26 Jan 2021 23:37:30 GMT
- Title: Uncertainty aware and explainable diagnosis of retinal disease
- Authors: Amitojdeep Singh, Sourya Sengupta, Mohammed Abdul Rasheed,
Varadharajan Jayakumar, and Vasudevan Lakshminarayanan
- Abstract summary: We perform uncertainty analysis of a deep learning model for diagnosis of four retinal diseases.
Explainability methods show the features that a system used to make a prediction, while uncertainty awareness is the ability of a system to highlight when it is not sure about the decision.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning methods for ophthalmic diagnosis have shown considerable
success in tasks like segmentation and classification. However, their
widespread application is limited due to the models being opaque and vulnerable
to making a wrong decision in complicated cases. Explainability methods show
the features that a system used to make a prediction, while uncertainty awareness
is the ability of a system to highlight when it is not sure about the decision.
This is one of the first studies using uncertainty and explanations for
informed clinical decision making. We perform uncertainty analysis of a deep
learning model for diagnosis of four retinal diseases - age-related macular
degeneration (AMD), central serous retinopathy (CSR), diabetic retinopathy
(DR), and macular hole (MH) using images from a publicly available (OCTID)
dataset. Monte Carlo (MC) dropout is used at the test time to generate a
distribution of parameters and the predictions approximate the predictive
posterior of a Bayesian model. A threshold is computed using the distribution
and uncertain cases can be referred to the ophthalmologist thus avoiding an
erroneous diagnosis. The features learned by the model are visualized using a
proven attribution method from a previous study. The effects of uncertainty on
model performance and the relationship between uncertainty and explainability
are discussed in terms of clinical significance. The uncertainty information
along with the heatmaps make the system more trustworthy for use in clinical
settings.
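The test-time Monte Carlo dropout procedure described above can be sketched as follows. This is an illustrative NumPy mock-up, not the authors' implementation: the toy model, the number of samples, and the referral threshold `TAU` are all assumptions, and in practice the stochastic forward passes would come from a trained network with dropout kept active at inference.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def mc_dropout_predict(forward, x, n_samples=100):
    """Run a stochastic forward pass n_samples times; the stacked
    softmax outputs (n_samples, n_classes) approximate samples from
    the predictive posterior of a Bayesian model."""
    return np.stack([forward(x) for _ in range(n_samples)])

def toy_forward(x):
    # Stand-in for a CNN with dropout active at test time:
    # fixed logits for 4 classes (AMD, CSR, DR, MH), randomly masked.
    logits = np.array([2.0, 0.5, 0.2, 0.1])
    mask = rng.random(4) > 0.2          # dropout with p = 0.2
    return softmax(logits * mask / 0.8)  # inverted-dropout rescaling

probs = mc_dropout_predict(toy_forward, None, n_samples=100)
mean_prob = probs.mean(axis=0)            # approximate predictive posterior
pred_class = int(mean_prob.argmax())
uncertainty = probs[:, pred_class].std()  # spread of the winning class

TAU = 0.15  # referral threshold, computed from the score distribution
decision = ("refer to ophthalmologist" if uncertainty > TAU
            else f"predict class {pred_class}")
```

Cases whose uncertainty exceeds the threshold are flagged for referral instead of receiving an automated diagnosis, which is the mechanism the abstract describes for avoiding erroneous predictions.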
Related papers
- Predictive uncertainty estimation in deep learning for lung carcinoma classification in digital pathology under real dataset shifts [2.309018557701645]
This paper evaluates whether predictive uncertainty estimation adds robustness to deep learning-based diagnostic decision-making systems.
We first investigate three popular methods for improving predictive uncertainty: Monte Carlo dropout, deep ensemble, and few-shot learning on lung adenocarcinoma classification as a primary disease in whole slide images.
arXiv Detail & Related papers (2024-08-15T21:49:43Z)
- SepsisLab: Early Sepsis Prediction with Uncertainty Quantification and Active Sensing [67.8991481023825]
Sepsis is the leading cause of in-hospital mortality in the USA.
Existing predictive models are usually trained on high-quality data with little missing information.
For the potential high-risk patients with low confidence due to limited observations, we propose a robust active sensing algorithm.
arXiv Detail & Related papers (2024-07-24T04:47:36Z)
- Decoding Decision Reasoning: A Counterfactual-Powered Model for Knowledge Discovery [6.1521675665532545]
In medical imaging, discerning the rationale behind an AI model's predictions is crucial for evaluating its reliability.
We propose an explainable model that is equipped with both decision reasoning and feature identification capabilities.
By implementing our method, we can efficiently identify and visualise class-specific features leveraged by the data-driven model.
arXiv Detail & Related papers (2024-05-23T19:00:38Z)
- Unified Uncertainty Estimation for Cognitive Diagnosis Models [70.46998436898205]
We propose a unified uncertainty estimation approach for a wide range of cognitive diagnosis models.
We decompose the uncertainty of diagnostic parameters into data aspect and model aspect.
Our method is effective and can provide useful insights into the uncertainty of cognitive diagnosis.
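A common way to make the data-aspect vs. model-aspect split concrete (not necessarily the exact formulation of the paper above) is the entropy decomposition over Monte Carlo samples: total predictive entropy splits into expected entropy (aleatoric, data-driven) plus mutual information (epistemic, model-driven).

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy in nats along the class axis."""
    return -(p * np.log(p + eps)).sum(axis=axis)

def decompose_uncertainty(mc_probs):
    """mc_probs: (n_samples, n_classes) class probabilities from
    e.g. MC dropout. Returns (total, aleatoric, epistemic)."""
    mean_p = mc_probs.mean(axis=0)
    total = entropy(mean_p)                       # H[E[p]]
    aleatoric = entropy(mc_probs, axis=1).mean()  # E[H[p]]
    epistemic = total - aleatoric                 # mutual information
    return total, aleatoric, epistemic

# Three hypothetical stochastic forward passes over 3 classes.
samples = np.array([[0.7, 0.2, 0.1],
                    [0.5, 0.4, 0.1],
                    [0.6, 0.3, 0.1]])
t, a, e = decompose_uncertainty(samples)
```

Because entropy is concave, the epistemic term is non-negative: disagreement between stochastic passes can only add uncertainty on top of the average per-pass entropy.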
arXiv Detail & Related papers (2024-03-09T13:48:20Z)
- Enhance Eye Disease Detection using Learnable Probabilistic Discrete Latents in Machine Learning Architectures [1.6000489723889526]
Ocular diseases, including diabetic retinopathy and glaucoma, present a significant public health challenge.
Deep learning models have emerged as powerful tools for analysing medical images, such as retina imaging.
Challenges persist in model reliability and uncertainty estimation, which are critical for clinical decision-making.
arXiv Detail & Related papers (2024-01-21T04:14:54Z)
- Uncertainty Quantification in Machine Learning Based Segmentation: A Post-Hoc Approach for Left Ventricle Volume Estimation in MRI [0.0]
Left ventricular (LV) volume estimation is critical for valid diagnosis and management of various cardiovascular conditions.
Recent machine learning advancements, particularly U-Net-like convolutional networks, have facilitated automated segmentation for medical images.
This study proposes a novel methodology for post-hoc uncertainty estimation in LV volume prediction.
arXiv Detail & Related papers (2023-10-30T13:44:55Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which utilizes the metric of uncertainty-calibrated error to filter reliable data.
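The subjective-logic idea behind evidential uncertainty can be sketched minimally as follows (illustrative only; DEviS's actual formulation may differ): non-negative per-class evidence parameterizes a Dirichlet distribution, and the uncertainty mass shrinks as total evidence grows.

```python
import numpy as np

def evidential_uncertainty(evidence):
    """evidence: non-negative array of shape (..., K).
    Returns per-class belief masses and the uncertainty mass u = K / S,
    where S is the Dirichlet strength sum(evidence + 1)."""
    alpha = evidence + 1.0
    S = alpha.sum(axis=-1, keepdims=True)  # Dirichlet strength
    belief = evidence / S                  # per-class belief mass
    u = alpha.shape[-1] / S.squeeze(-1)    # uncertainty mass
    return belief, u

confident = np.array([40.0, 1.0, 1.0])  # strong evidence -> low u
ambiguous = np.array([1.0, 1.0, 1.0])   # weak evidence  -> high u
b1, u1 = evidential_uncertainty(confident)
b2, u2 = evidential_uncertainty(ambiguous)
```

By construction the belief masses and the uncertainty mass sum to one, so a pixel or image with little supporting evidence is explicitly assigned mass to "I don't know", which is what an uncertainty-aware filtering module can threshold on.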
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- Interpretable Vertebral Fracture Diagnosis [69.68641439851777]
Black-box neural network models learn clinically relevant features for fracture diagnosis.
This work identifies the concepts networks use for vertebral fracture diagnosis in CT images.
arXiv Detail & Related papers (2022-03-30T13:07:41Z)
- Assessing glaucoma in retinal fundus photographs using Deep Feature Consistent Variational Autoencoders [63.391402501241195]
Glaucoma is challenging to detect since it remains asymptomatic until symptoms are severe.
Early identification of glaucoma is generally made based on functional, structural, and clinical assessments.
Deep learning methods have partially solved this dilemma by bypassing the marker identification stage and analyzing high-level information directly to classify the data.
arXiv Detail & Related papers (2021-10-04T16:06:49Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.