Explainable Deep Image Classifiers for Skin Lesion Diagnosis
- URL: http://arxiv.org/abs/2111.11863v1
- Date: Mon, 22 Nov 2021 10:42:20 GMT
- Title: Explainable Deep Image Classifiers for Skin Lesion Diagnosis
- Authors: Carlo Metta, Andrea Beretta, Riccardo Guidotti, Yuan Yin, Patrick
Gallinari, Salvatore Rinzivillo, Fosca Giannotti
- Abstract summary: A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems.
In this paper, we analyze a case study on skin lesion images where we customize an existing XAI approach for explaining a deep learning model able to recognize different types of skin lesions.
- Score: 16.483826925814522
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A key issue in critical contexts such as medical diagnosis is the
interpretability of the deep learning models adopted in decision-making
systems. Research in eXplainable Artificial Intelligence (XAI) is trying to
solve this issue. However, XAI approaches are often tested only on generalist
classifiers and do not address realistic problems such as those of medical
diagnosis. In this paper, we analyze a case study on skin lesion images where
we customize an existing XAI approach for explaining a deep learning model able
to recognize different types of skin lesions. The explanation is formed by
synthetic exemplar and counter-exemplar images of skin lesions and offers the
practitioner a way to highlight the crucial traits responsible for the
classification decision. A survey conducted with domain experts, beginners, and
non-experts shows that the use of explanations increases the trust and
confidence in the automatic decision system. Also, an analysis of the latent
space adopted by the explainer unveils that some of the most frequent skin
lesion classes are distinctly separated. This phenomenon could derive from the
intrinsic characteristics of each class and, hopefully, can provide support in
the resolution of the most frequent misclassifications by human experts.
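The exemplar/counter-exemplar idea described in the abstract can be sketched as follows: perturb the latent representation of an image and split the neighbours by the class the model assigns them. This is a minimal illustrative sketch only; the function names and the toy classifier (a sign test standing in for "decode and classify") are assumptions, not the authors' implementation.

```python
import numpy as np

def exemplars_counter_exemplars(z, classify, n_samples=200, scale=0.5, seed=0):
    """Perturb latent code z; split neighbours into exemplars (same predicted
    class as z) and counter-exemplars (different predicted class)."""
    rng = np.random.default_rng(seed)
    target = classify(z)
    neighbours = z + rng.normal(0.0, scale, size=(n_samples, z.shape[0]))
    labels = np.array([classify(n) for n in neighbours])
    exemplars = neighbours[labels == target]   # same class: exemplars
    counter = neighbours[labels != target]     # different class: counter-exemplars
    return exemplars, counter

# Toy stand-in for "decode latent, then classify image": class = sign of dim 0.
toy_classify = lambda z: int(z[0] > 0)

z0 = np.array([1.0, 0.0])
ex, cex = exemplars_counter_exemplars(z0, toy_classify)
```

In the paper's setting the counter-exemplars would be decoded back into synthetic skin-lesion images, letting the practitioner see which traits flip the decision.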
Related papers
- Aligning Human Knowledge with Visual Concepts Towards Explainable Medical Image Classification [8.382606243533942]
We introduce a simple yet effective framework, Explicd, towards Explainable language-informed criteria-based diagnosis.
By leveraging a pretrained vision-language model, Explicd injects these criteria into the embedding space as knowledge anchors.
The final diagnostic outcome is determined based on the similarity scores between the encoded visual concepts and the textual criteria embeddings.
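The scoring step this summary describes, matching encoded visual concepts against textual criteria embeddings, can be sketched with cosine similarity. The class names, vectors, and aggregation by mean similarity are illustrative assumptions, not Explicd's actual code.

```python
import numpy as np

def cosine(a, b):
    # Cosine similarity between two embedding vectors.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

def diagnose(visual_concepts, class_criteria):
    """visual_concepts: (k, d) encoded visual concepts.
    class_criteria: {class_name: (k, d) textual criteria embeddings}.
    Score each class by the mean similarity of matched concept/criterion pairs."""
    scores = {
        cls: float(np.mean([cosine(v, c) for v, c in zip(visual_concepts, crit)]))
        for cls, crit in class_criteria.items()
    }
    return max(scores, key=scores.get), scores

# Toy example: visual concepts aligned with "melanoma" criteria.
pred, scores = diagnose(
    np.array([[1.0, 0.0], [0.0, 1.0]]),
    {"melanoma": np.array([[1.0, 0.0], [0.0, 1.0]]),
     "nevus": np.array([[0.0, 1.0], [1.0, 0.0]])},
)
```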
arXiv Detail & Related papers (2024-06-08T23:23:28Z)
- Optimizing Skin Lesion Classification via Multimodal Data and Auxiliary Task Integration [54.76511683427566]
This research introduces a novel multimodal method for classifying skin lesions, integrating smartphone-captured images with essential clinical and demographic information.
A distinctive aspect of this method is the integration of an auxiliary task focused on super-resolution image prediction.
The experimental evaluations have been conducted using the PAD-UFES20 dataset, applying various deep-learning architectures.
arXiv Detail & Related papers (2024-02-16T05:16:20Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
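The concept-bottleneck pipeline this entry describes (latent image features mapped to explicit concept scores, then an interpretable head over the concepts) can be sketched minimally. The concept bank, feature vectors, and linear head below are toy assumptions, not the paper's model.

```python
import numpy as np

def concept_scores(features, concept_bank):
    """Project image features (d,) onto a bank of concept directions
    (n_concepts, d), e.g. text embeddings of queried clinical concepts."""
    return concept_bank @ features

def predict(features, concept_bank, class_weights):
    """class_weights: (n_classes, n_concepts) interpretable linear head that
    sees only concept scores, never raw features."""
    scores = concept_scores(features, concept_bank)
    return int(np.argmax(class_weights @ scores)), scores

# Toy example: 3 axis-aligned concepts, 2 classes.
label, scores = predict(
    np.array([2.0, 0.0, 1.0]),
    np.eye(3),
    np.array([[1.0, 0.0, 0.0], [0.0, 0.0, 1.0]]),
)
```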
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images [52.527733226555206]
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100 000 single cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
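Comparing an attribution map against an expert's annotation, as this entry describes, can be sketched with a thresholded intersection-over-union. The quantile threshold and the IoU metric itself are assumptions for illustration; the paper may use a different comparison.

```python
import numpy as np

def attribution_iou(attr_map, expert_mask, quantile=0.9):
    """Binarise the attribution map at a high quantile, then compute IoU
    against the expert's boolean annotation mask."""
    thresh = np.quantile(attr_map, quantile)
    pred = attr_map >= thresh
    inter = np.logical_and(pred, expert_mask).sum()
    union = np.logical_or(pred, expert_mask).sum()
    return inter / union if union else 0.0

# Toy example: attribution exactly matches the expert mask.
mask = np.zeros((4, 4), dtype=bool)
mask[:2, :2] = True
iou = attribution_iou(mask.astype(float), mask)
```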
arXiv Detail & Related papers (2023-03-15T14:00:11Z)
- Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling [26.17582232842832]
Explainable AI consists of developing mechanisms that allow interaction between decision systems and humans.
This is particularly important in sensitive contexts like in the medical domain.
We propose a use case study, for skin lesion diagnosis, illustrating how it is possible to provide the practitioner with explanations.
arXiv Detail & Related papers (2023-01-18T11:14:42Z)
- Revisiting the Shape-Bias of Deep Learning for Dermoscopic Skin Lesion Classification [4.414962444402826]
It is generally believed that the human visual system is biased towards the recognition of shapes rather than textures.
In this paper, we revisit the significance of shape-biases for the classification of skin lesion images.
arXiv Detail & Related papers (2022-06-13T20:59:06Z)
- Explainable Deep Learning Methods in Medical Image Classification: A Survey [0.0]
State-of-the-art deep learning models have achieved human-level accuracy on the classification of different types of medical data.
These models are, however, rarely adopted in clinical practice, mainly due to their lack of interpretability.
The black-box-ness of deep learning models has raised the need for devising strategies to explain the decision process of these models.
arXiv Detail & Related papers (2022-05-10T09:28:14Z)
- ExAID: A Multimodal Explanation Framework for Computer-Aided Diagnosis of Skin Lesions [4.886872847478552]
ExAID (Explainable AI for Dermatology) is a novel framework for biomedical image analysis.
It provides multi-modal concept-based explanations consisting of easy-to-understand textual explanations.
It will be the basis for similar applications in other biomedical imaging fields.
arXiv Detail & Related papers (2022-01-04T17:11:28Z)
- Assessing glaucoma in retinal fundus photographs using Deep Feature Consistent Variational Autoencoders [63.391402501241195]
Glaucoma is challenging to detect since it remains asymptomatic until the symptoms are severe.
Early identification of glaucoma is generally made based on functional, structural, and clinical assessments.
Deep learning methods have partially solved this dilemma by bypassing the marker identification stage and analyzing high-level information directly to classify the data.
arXiv Detail & Related papers (2021-10-04T16:06:49Z)
- Variational Knowledge Distillation for Disease Classification in Chest X-Rays [102.04931207504173]
We propose variational knowledge distillation (VKD), which is a new probabilistic inference framework for disease classification based on X-rays.
We demonstrate the effectiveness of our method on three public benchmark datasets with paired X-ray images and EHRs.
arXiv Detail & Related papers (2021-03-19T14:13:56Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.