Explainable and Interpretable Diabetic Retinopathy Classification Based
on Neural-Symbolic Learning
- URL: http://arxiv.org/abs/2204.00624v1
- Date: Fri, 1 Apr 2022 00:54:12 GMT
- Authors: Se-In Jang, Michael J.A. Girard and Alexandre H. Thiery
- Abstract summary: We propose an explainable and interpretable diabetic retinopathy (ExplainDR) classification model based on neural-symbolic learning.
We introduce a human-readable symbolic representation, which follows a taxonomy style of diabetic retinopathy characteristics related to eye health conditions to achieve explainability.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we propose an explainable and interpretable diabetic
retinopathy (ExplainDR) classification model based on neural-symbolic learning.
To gain explainability, a high-level symbolic representation should be
considered in decision making. Specifically, we introduce a human-readable
symbolic representation that follows a taxonomy of diabetic retinopathy
characteristics related to eye health conditions. We then include
human-readable features obtained from the symbolic representation in the
disease prediction. Experimental results show that our proposed ExplainDR
method achieves performance competitive with state-of-the-art methods on the
IDRiD diabetic retinopathy classification dataset, while also providing
interpretability and explainability.
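The abstract describes mapping a fundus image to human-readable symbolic attributes and then using those attributes as features for disease prediction. A minimal sketch of that idea follows; the attribute names, grading rules, and thresholds are illustrative assumptions, not the taxonomy actually used by ExplainDR.

```python
# Illustrative taxonomy-style symbolic representation for diabetic
# retinopathy (DR) grading. Attribute names and rule thresholds are
# hypothetical, chosen only to show the neural-symbolic pattern:
# image -> symbolic findings -> readable rules -> prediction features.
from dataclasses import dataclass


@dataclass
class RetinalFindings:
    """Human-readable symbolic attributes extracted from a fundus image."""
    microaneurysms: int        # count of microaneurysms
    hemorrhages: int           # count of retinal hemorrhages
    hard_exudates: bool        # presence of hard exudates
    neovascularization: bool   # presence of new abnormal vessels


def symbolic_grade(f: RetinalFindings) -> str:
    """Map symbolic findings to a coarse DR severity label via readable rules."""
    if f.neovascularization:
        return "proliferative"
    if f.hemorrhages > 20 or f.hard_exudates:
        return "severe"
    if f.microaneurysms > 5 or f.hemorrhages > 0:
        return "moderate"
    if f.microaneurysms > 0:
        return "mild"
    return "no DR"


def to_feature_vector(f: RetinalFindings) -> list:
    """Flatten the symbolic representation into features for a downstream classifier."""
    return [f.microaneurysms, f.hemorrhages,
            int(f.hard_exudates), int(f.neovascularization)]


findings = RetinalFindings(microaneurysms=7, hemorrhages=2,
                           hard_exudates=False, neovascularization=False)
grade = symbolic_grade(findings)
features = to_feature_vector(findings)
print(grade)     # "moderate" under these illustrative rules
print(features)  # [7, 2, 0, 0]
```

Because every intermediate attribute is human-readable, a prediction can be traced back to explicit findings (e.g. "moderate because 7 microaneurysms were detected"), which is the explainability benefit the abstract claims.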
Related papers
- Neuro-Symbolic AI: Explainability, Challenges, and Future Trends [26.656105779121308]
This article proposes a classification of explainability, considering both model design and behavior across 191 studies from 2013 onward.
We classify them into five categories according to whether the form bridging the representational differences is human-readable.
We put forward suggestions for future research in three areas: unified representations, enhancing model explainability, and ethical considerations and social impact.
arXiv Detail & Related papers (2024-11-07T02:54:35Z) - Looking into Concept Explanation Methods for Diabetic Retinopathy Classification [0.0]
It is infeasible to manually screen all individuals with diabetes for diabetic retinopathy using fundus imaging.
Deep learning has shown impressive results for automatic analysis and grading of fundus images.
Explainable artificial intelligence methods can be applied to explain the deep neural networks.
arXiv Detail & Related papers (2024-10-04T07:01:37Z) - Semi-Supervised Graph Representation Learning with Human-centric
Explanation for Predicting Fatty Liver Disease [2.992602379681373]
This study explores the potential of graph representation learning within a semi-supervised learning framework.
Our approach constructs a subject similarity graph to identify risk patterns from health checkup data.
arXiv Detail & Related papers (2024-03-05T08:59:45Z) - Signature Activation: A Sparse Signal View for Holistic Saliency [18.699129959911485]
We introduce Signature Activation, a saliency method that generates holistic and class-agnostic explanations for CNN outputs.
Our method exploits the fact that certain kinds of medical images, such as angiograms, have clear foreground and background objects.
We show the potential use of our method in clinical settings through evaluating its efficacy for aiding the detection of lesions in coronary angiograms.
arXiv Detail & Related papers (2023-09-20T16:17:26Z) - Seeing in Words: Learning to Classify through Language Bottlenecks [59.97827889540685]
Humans can explain their predictions using succinct and intuitive descriptions.
We show that a vision model whose feature representations are text can effectively classify ImageNet images.
arXiv Detail & Related papers (2023-06-29T00:24:42Z) - NeuroExplainer: Fine-Grained Attention Decoding to Uncover Cortical
Development Patterns of Preterm Infants [73.85768093666582]
We propose an explainable geometric deep network dubbed NeuroExplainer.
NeuroExplainer is used to uncover altered infant cortical development patterns associated with preterm birth.
arXiv Detail & Related papers (2023-01-01T12:48:12Z) - Discriminative Attribution from Counterfactuals [64.94009515033984]
We present a method for neural network interpretability by combining feature attribution with counterfactual explanations.
We show that this method can be used to quantitatively evaluate the performance of feature attribution methods in an objective manner.
arXiv Detail & Related papers (2021-09-28T00:53:34Z) - Explainable Diabetic Retinopathy Detection and Retinal Image Generation [16.140110713539023]
We propose to exploit the interpretability of deep learning applications in medical diagnosis.
By determining and isolating the neuron activation patterns on which a diabetic retinopathy detector relies to make decisions, we demonstrate the direct relation between the isolated neuron activations and lesions, yielding a pathological explanation.
To visualize the symptom encoded in the descriptor, we propose Patho-GAN, a new network to synthesize medically plausible retinal images.
arXiv Detail & Related papers (2021-07-01T08:30:04Z) - Variational Knowledge Distillation for Disease Classification in Chest
X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z) - An Interpretable Multiple-Instance Approach for the Detection of
referable Diabetic Retinopathy from Fundus Images [72.94446225783697]
We propose a machine learning system for the detection of referable Diabetic Retinopathy in fundus images.
By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy.
We evaluate our approach on publicly available retinal image datasets, in which it exhibits near state-of-the-art performance.
arXiv Detail & Related papers (2021-03-02T13:14:15Z)
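The last entry above describes extracting local features from image patches and combining them through an attention mechanism. A minimal sketch of attention-based multiple-instance pooling follows; the shapes and the linear scoring function are assumptions for illustration, not the paper's exact architecture.

```python
# Illustrative attention-based multiple-instance pooling: per-patch
# feature vectors are scored, softmax-normalized, and combined into a
# single image-level representation. The linear scoring function is a
# simplifying assumption.
import numpy as np


def attention_pool(patch_feats: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Combine per-patch features into one image-level vector.

    patch_feats: (n_patches, d) local features; w: (d,) attention weights.
    Returns a (d,) convex combination of the patch features.
    """
    scores = patch_feats @ w               # one relevance score per patch
    alphas = np.exp(scores - scores.max())
    alphas = alphas / alphas.sum()         # softmax over patches
    return alphas @ patch_feats            # attention-weighted sum -> (d,)


rng = np.random.default_rng(0)
feats = rng.normal(size=(16, 8))           # 16 patches, 8-dim features each
w = rng.normal(size=8)
pooled = attention_pool(feats, w)
print(pooled.shape)  # (8,)
```

The softmax weights double as a patch-level saliency map, which is how such systems localize the lesions driving a referable-DR decision.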
This list is automatically generated from the titles and abstracts of the papers in this site.