Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling
- URL: http://arxiv.org/abs/2302.03033v1
- Date: Wed, 18 Jan 2023 11:14:42 GMT
- Title: Exemplars and Counterexemplars Explanations for Image Classifiers, Targeting Skin Lesion Labeling
- Authors: Carlo Metta, Riccardo Guidotti, Yuan Yin, Patrick Gallinari, Salvatore Rinzivillo
- Abstract summary: Explainable AI consists of developing mechanisms that allow decision systems and humans to interact. This is particularly important in sensitive contexts such as the medical domain. We propose a use case study on skin lesion diagnosis, illustrating how the practitioner can be provided with explanations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable AI consists of developing mechanisms that allow for interaction between decision systems and humans by making the decisions of the former understandable. This is particularly important in sensitive contexts such as the medical domain. We propose a use case study for skin lesion diagnosis, illustrating how it is possible to provide the practitioner with explanations of the decisions of a state-of-the-art deep neural network classifier trained to characterize skin lesions from examples. Our framework consists of a trained classifier on which an explanation module operates. The latter offers the practitioner exemplars and counterexemplars for the classification diagnosis, thus allowing the physician to interact with the automatic diagnosis system. The exemplars are generated via an adversarial autoencoder. We illustrate the behavior of the system on representative examples.
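The following sketch illustrates the mechanism described above: an adversarial autoencoder encodes an image, perturbs the latent code, and decodes neighbours that either keep the classifier's label (exemplars) or flip it (counterexemplars). This is a minimal illustration with toy architectures and an assumed 64x64 RGB input; it is not the authors' implementation.

```python
# Minimal sketch of exemplar / counterexemplar generation with an
# adversarial autoencoder (AAE). Illustrative only: the architecture,
# latent size, and `classifier` are assumptions, not the paper's code.
import torch
import torch.nn as nn

LATENT = 32

encoder = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 256),
                        nn.ReLU(), nn.Linear(256, LATENT))
decoder = nn.Sequential(nn.Linear(LATENT, 256), nn.ReLU(),
                        nn.Linear(256, 64 * 64 * 3), nn.Sigmoid())
# The AAE discriminator pushes encodings toward a Gaussian prior
# (adversarial training loop omitted here).
discriminator = nn.Sequential(nn.Linear(LATENT, 64), nn.ReLU(), nn.Linear(64, 1))

def generate_explanations(image, classifier, n=100, sigma=0.5):
    """Perturb the latent code of `image`; decoded neighbours that keep the
    classifier's label are exemplars, those that flip it counterexemplars."""
    with torch.no_grad():
        label = classifier(image.unsqueeze(0)).argmax(1).item()
        z = encoder(image.unsqueeze(0))
        exemplars, counterexemplars = [], []
        for _ in range(n):
            z_pert = z + sigma * torch.randn_like(z)
            x_pert = decoder(z_pert).view(1, 3, 64, 64)
            if classifier(x_pert).argmax(1).item() == label:
                exemplars.append(x_pert)
            else:
                counterexemplars.append(x_pert)
    return exemplars, counterexemplars
```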
Related papers
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models (arXiv, 2023-10-04)
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
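A hedged sketch of the concept-bottleneck idea: image features are scored against natural-language concept embeddings, and an interpretable linear head over those scores produces the diagnosis. The concept list, embedding dimensions, and stand-in features below are assumptions, not the paper's pipeline.

```python
# Concept-bottleneck sketch: project image features onto concept embeddings,
# classify from the resulting (named) concept scores.
import torch
import torch.nn as nn
import torch.nn.functional as F

concepts = ["asymmetric shape", "irregular border", "blue-white veil"]  # e.g. queried from GPT-4

def concept_scores(image_emb: torch.Tensor, concept_embs: torch.Tensor) -> torch.Tensor:
    """Cosine similarity between one image embedding and each concept embedding."""
    return F.cosine_similarity(image_emb.unsqueeze(0), concept_embs, dim=1)

# Interpretable head: every class weight is attached to a named concept.
head = nn.Linear(len(concepts), 2)  # e.g. benign vs. malignant

# Stand-ins for a vision-language model such as CLIP (assumption):
image_emb = torch.randn(512)
concept_embs = torch.randn(len(concepts), 512)

scores = concept_scores(image_emb, concept_embs)   # shape: (3,)
logits = head(scores)                              # class prediction
print(dict(zip(concepts, scores.tolist())), logits.argmax().item())
```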
- Pixel-Level Explanation of Multiple Instance Learning Models in Biomedical Single Cell Images (arXiv, 2023-03-15)
We investigate the use of four attribution methods to explain multiple instance learning models.
We study two datasets of acute myeloid leukemia with over 100,000 single-cell images.
We compare attribution maps with the annotations of a medical expert to see how the model's decision-making differs from the human standard.
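One plausible way to quantify such a comparison is to threshold an attribution map and measure its overlap with the expert's annotation mask; the sketch below does this with IoU. The threshold, metric, and synthetic data are assumptions, not the paper's exact protocol.

```python
# Compare an attribution map with an expert annotation: threshold the map
# and measure overlap (IoU) with the expert's binary mask.
import numpy as np

def attribution_iou(attr_map: np.ndarray, expert_mask: np.ndarray,
                    quantile: float = 0.9) -> float:
    """IoU between the top-attributed pixels and the expert mask."""
    hot = attr_map >= np.quantile(attr_map, quantile)
    inter = np.logical_and(hot, expert_mask).sum()
    union = np.logical_or(hot, expert_mask).sum()
    return float(inter / union) if union else 0.0

attr = np.random.rand(64, 64)                # stand-in for e.g. a Grad-CAM map
mask = np.zeros((64, 64), dtype=bool)
mask[20:40, 20:40] = True                    # stand-in expert annotation
print(f"IoU with expert annotation: {attribution_iou(attr, mask):.3f}")
```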
- ExAID: A Multimodal Explanation Framework for Computer-Aided Diagnosis of Skin Lesions (arXiv, 2022-01-04)
ExAID (Explainable AI for Dermatology) is a novel framework for biomedical image analysis.
It provides multi-modal concept-based explanations consisting of easy-to-understand textual explanations.
It will be the basis for similar applications in other biomedical imaging fields.
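A toy sketch of concept-to-text explanation in the spirit of ExAID: detected concepts above a confidence threshold are rendered as a short textual justification. The concept names, scores, and template are invented for illustration.

```python
# Render detected dermoscopic concepts as an easy-to-understand sentence.
# All names and scores here are hypothetical.
DETECTED = {"pigment network: irregular": 0.91,
            "blue-white veil": 0.77,
            "regression structures": 0.12}

def explain(concepts: dict, threshold: float = 0.5) -> str:
    present = [name for name, score in concepts.items() if score >= threshold]
    if not present:
        return "No malignancy-related concepts were detected."
    return ("The lesion was flagged because the model detected: "
            + ", ".join(present) + ".")

print(explain(DETECTED))
```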
- Explainable Deep Image Classifiers for Skin Lesion Diagnosis (arXiv, 2021-11-22)
A key issue in critical contexts such as medical diagnosis is the interpretability of the deep learning models adopted in decision-making systems.
In this paper, we analyze a case study on skin lesion images where we customize an existing XAI approach for explaining a deep learning model able to recognize different types of skin lesions.
- BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images (arXiv, 2021-10-05)
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
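A minimal sketch of the multitask idea: a shared encoder feeds one head for the diagnosis and auxiliary heads for the morphological descriptors clinicians report, so explanation targets are learned alongside the prediction. Layer sizes and descriptor names are assumptions, not BI-RADS-Net's architecture.

```python
# Multitask network sketch: shared features, one diagnostic head, and
# auxiliary heads for clinician-facing morphological descriptors.
import torch
import torch.nn as nn

class MultitaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(nn.Flatten(),
                                     nn.Linear(128 * 128, 256), nn.ReLU())
        self.diagnosis = nn.Linear(256, 2)   # benign vs. malignant
        self.shape = nn.Linear(256, 3)       # e.g. oval / round / irregular
        self.margin = nn.Linear(256, 2)      # e.g. circumscribed / not

    def forward(self, x):
        h = self.encoder(x)
        return self.diagnosis(h), self.shape(h), self.margin(h)

net = MultitaskNet()
diag, shape, margin = net(torch.randn(4, 1, 128, 128))
# Training would sum one loss per head, so the explanations (shape, margin)
# and the prediction (diag) are learned from shared features.
```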
- Using Causal Analysis for Conceptual Deep Learning Explanation (arXiv, 2021-07-10)
An ideal explanation resembles the decision-making process of a domain expert.
We take advantage of radiology reports accompanying chest X-ray images to define concepts.
We construct a low-depth decision tree to translate all the discovered concepts into a straightforward decision rule.
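The low-depth tree step can be sketched directly with scikit-learn: fit a shallow decision tree on per-image concept scores and print it as a readable rule. The concept names and synthetic data below are placeholders, not the paper's discovered concepts.

```python
# Fit a low-depth decision tree on concept activations and print it as a
# human-readable decision rule. Data and concept names are synthetic.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.default_rng(0)
X = rng.random((200, 3))                 # per-image concept scores
y = (X[:, 0] > 0.6).astype(int)          # synthetic labels for the demo

tree = DecisionTreeClassifier(max_depth=2).fit(X, y)
print(export_text(tree, feature_names=["cardiomegaly", "effusion", "opacity"]))
```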
- Deep Co-Attention Network for Multi-View Subspace Learning (arXiv, 2021-02-15)
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
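A hedged sketch of a cross reconstruction loss for two views: each view's latent code must reconstruct the other view, which pushes shared information into the latents. The linear encoders/decoders and dimensions are assumptions, not the paper's co-attention network.

```python
# Cross reconstruction loss sketch: z_a must explain view_b, and vice versa.
import torch
import torch.nn as nn

enc_a, enc_b = nn.Linear(100, 16), nn.Linear(80, 16)
dec_a, dec_b = nn.Linear(16, 100), nn.Linear(16, 80)
mse = nn.MSELoss()

view_a, view_b = torch.randn(32, 100), torch.randn(32, 80)
z_a, z_b = enc_a(view_a), enc_b(view_b)
# Each latent reconstructs the *other* view, so common information
# across views is forced into the latent codes.
loss = mse(dec_b(z_a), view_b) + mse(dec_a(z_b), view_a)
loss.backward()
```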
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning (arXiv, 2020-12-22)
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
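The underlying recipe can be sketched as: translate the input with a trained generator and accept the result as a counterfactual only if the classifier's decision flips. The stand-in generator and classifier below are placeholders for trained networks, not the paper's models.

```python
# GAN-based counterfactual sketch: an image-to-image generator edits the
# input; the edit counts as a counterfactual only if the prediction flips.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 28 * 28),
                          nn.Sigmoid())                       # stand-in
classifier = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 2))  # stand-in

def counterfactual(image: torch.Tensor):
    """Return the translated image if it changes the predicted class."""
    with torch.no_grad():
        original = classifier(image).argmax(1).item()
        edited = generator(image).view_as(image)
        if classifier(edited).argmax(1).item() != original:
            return edited
    return None

print(counterfactual(torch.rand(1, 1, 28, 28)) is not None)
```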
- Explaining Predictions of Deep Neural Classifier via Activation Analysis (arXiv, 2020-12-03)
We present a novel approach to explain and support the interpretation of the decision-making process for a human expert operating a deep learning system based on a Convolutional Neural Network (CNN).
Our results indicate that our method is capable of detecting distinct prediction strategies that enable us to identify the most similar predictions from an existing atlas.
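A minimal sketch of explanation via activation analysis: represent each image by a layer's activations and retrieve the nearest cases from a labelled atlas. The random features below stand in for real CNN activations (assumption).

```python
# Retrieve the most similar atlas cases in activation space to explain a
# prediction by precedent. Features here are random stand-ins.
import numpy as np

def most_similar(query_act: np.ndarray, atlas_acts: np.ndarray,
                 k: int = 3) -> np.ndarray:
    """Indices of the k atlas entries closest to the query in activation space."""
    dists = np.linalg.norm(atlas_acts - query_act, axis=1)
    return np.argsort(dists)[:k]

atlas = np.random.rand(500, 256)     # activations of an annotated atlas
query = np.random.rand(256)          # activations of the image to explain
print("Most similar atlas cases:", most_similar(query, atlas))
```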
- Explanation Ontology in Action: A Clinical Use-Case (arXiv, 2020-10-04)
We provide step-by-step guidance for system designers to utilize our Explanation Ontology.
We provide a detailed example of applying this guidance in a clinical setting.
- A general framework for scientifically inspired explanations in AI (arXiv, 2020-03-02)
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
This list is automatically generated from the titles and abstracts of the papers on this site.