On Interpretability of Deep Learning based Skin Lesion Classifiers using
Concept Activation Vectors
- URL: http://arxiv.org/abs/2005.02000v1
- Date: Tue, 5 May 2020 08:27:16 GMT
- Title: On Interpretability of Deep Learning based Skin Lesion Classifiers using
Concept Activation Vectors
- Authors: Adriano Lucieri, Muhammad Naseer Bajwa, Stephan Alexander Braun,
Muhammad Imran Malik, Andreas Dengel and Sheraz Ahmed
- Abstract summary: We use a well-trained and high-performing neural network for classification of three skin tumours, i.e. Melanocytic Naevi, Melanoma, and Seborrheic Keratosis.
Human-understandable concepts are mapped to the RECOD image classification model with the help of Concept Activation Vectors (CAVs).
- Score: 6.188009802619095
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning based medical image classifiers have shown remarkable prowess
in various application areas like ophthalmology, dermatology, pathology, and
radiology. However, the acceptance of these Computer-Aided Diagnosis (CAD)
systems in real clinical setups is severely limited primarily because their
decision-making process remains largely obscure. This work aims at elucidating
a deep learning based medical image classifier by verifying that the model
learns and utilizes similar disease-related concepts as described and employed
by dermatologists. We used a well-trained and high-performing neural network
developed by the REasoning for COmplex Data (RECOD) Lab for classification of three
skin tumours, i.e. Melanocytic Naevi, Melanoma, and Seborrheic Keratosis, and
performed a detailed analysis of its latent space. Two well-established and
publicly available skin disease datasets, PH2 and derm7pt, are used for
experimentation. Human-understandable concepts are mapped to the RECOD image
classification model with the help of Concept Activation Vectors (CAVs),
introducing a novel training and significance-testing paradigm for CAVs. Our
results on an independent evaluation set clearly show that the classifier
learns and encodes human-understandable concepts in its latent representation.
Additionally, TCAV scores (Testing with CAVs) suggest that the neural network
indeed makes use of disease-related concepts in the correct way when making
predictions. We anticipate that this work can not only increase the confidence of
medical practitioners in CAD but also serve as a stepping stone for further
development of CAV-based neural network interpretation methods.
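For readers unfamiliar with the mechanics behind CAVs and TCAV, the core computation can be summarized in a short sketch. The layer width, synthetic activations, and stand-in network head below are illustrative assumptions; this is not the authors' released code, and the paper's novel CAV training and significance-testing paradigm is not reproduced here.

```python
# Minimal sketch of CAV training and TCAV scoring in the style of
# Kim et al. (2018). All activations and the network head are synthetic
# stand-ins; in practice they come from a chosen hidden layer of the
# trained classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
d = 512                                       # hidden-layer width (assumed)

# Activations for concept images vs. random counterexamples.
acts_concept = rng.normal(0.5, 1.0, (100, d))
acts_random = rng.normal(0.0, 1.0, (100, d))

# 1) Train a linear classifier separating concept from random activations.
X = np.vstack([acts_concept, acts_random])
y = np.array([1] * 100 + [0] * 100)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# 2) The CAV is the unit normal of the decision boundary.
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# Stand-in for the rest of the network: a fixed two-layer head mapping
# layer activations to one class logit.
W1 = rng.normal(size=(d, 64)) / np.sqrt(d)
w2 = rng.normal(size=64)

def class_logit(acts):
    return np.maximum(acts @ W1, 0.0) @ w2

# 3) TCAV score: fraction of class examples whose logit increases when
#    the activation is nudged along the CAV (positive directional derivative).
def tcav_score(acts_class, cav, eps=1e-2):
    grad = (class_logit(acts_class + eps * cav) - class_logit(acts_class)) / eps
    return float((grad > 0).mean())

acts_class = rng.normal(0.2, 1.0, (200, d))   # activations of one tumour class
print("TCAV score:", tcav_score(acts_class, cav))
```

In practice, significance is typically assessed by retraining CAVs against many random counterexample sets and testing whether the resulting TCAV scores differ from chance; the paper proposes its own variant of this procedure.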
Related papers
- Concept-Attention Whitening for Interpretable Skin Lesion Diagnosis [7.5422729055429745]
We propose a novel Concept-Attention Whitening (CAW) framework for interpretable skin lesion diagnosis.
The framework has two branches. In the former branch, we train a convolutional neural network (CNN) with an inserted CAW layer to perform skin lesion diagnosis.
In the latter branch, the whitening matrix is calculated under the guidance of the concept attention mask.
arXiv Detail & Related papers (2024-04-09T04:04:50Z)
- Multi-task Explainable Skin Lesion Classification [54.76511683427566]
We propose a few-shot-based approach for skin lesions that generalizes well with few labelled data.
The proposed approach comprises a fusion of a segmentation network, which acts as an attention module, and a classification network.
arXiv Detail & Related papers (2023-10-11T05:49:47Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model (a minimal sketch of this concept-scoring step follows this entry).
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
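As referenced above, the concept-bottleneck step can be illustrated with a minimal sketch: image features are scored against text embeddings of clinical concepts, and only those scores feed an interpretable linear head. The concept names, all embeddings, and the head weights below are hypothetical stand-ins for the GPT-4-queried concepts and the vision-language model used in that paper.

```python
# Sketch of a concept-bottleneck classifier: an image embedding is scored
# against text embeddings of clinical concepts, and only those concept
# scores feed the final (interpretable) linear classifier. All names and
# embeddings here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
concepts = ["irregular border", "blue-white veil", "atypical network"]
emb_dim = 256

concept_emb = rng.normal(size=(len(concepts), emb_dim))
concept_emb /= np.linalg.norm(concept_emb, axis=1, keepdims=True)

def concept_scores(image_emb):
    """Cosine similarity between an image embedding and each concept."""
    image_emb = image_emb / np.linalg.norm(image_emb)
    return concept_emb @ image_emb

# Interpretable head: one weight per concept per class.
n_classes = 3
W = rng.normal(size=(n_classes, len(concepts)))

image_emb = rng.normal(size=emb_dim)      # stand-in vision-language feature
scores = concept_scores(image_emb)
logits = W @ scores
print(dict(zip(concepts, scores.round(3))), "-> class", int(np.argmax(logits)))
```

Because the prediction depends only on the named concept scores, each decision can be read off as a weighted sum of clinically meaningful factors.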
- Cross-modal Clinical Graph Transformer for Ophthalmic Report Generation [116.87918100031153]
We propose a Cross-modal clinical Graph Transformer (CGT) for ophthalmic report generation (ORG).
CGT injects clinical relation triples into the visual features as prior knowledge to drive the decoding procedure.
Experiments on the large-scale FFA-IR benchmark demonstrate that the proposed CGT is able to outperform previous benchmark methods.
arXiv Detail & Related papers (2022-06-04T13:16:30Z)
- Feature visualization for convolutional neural network models trained on neuroimaging data [0.0]
We show, for the first time, results using feature visualization of convolutional neural networks (CNNs) trained on neuroimaging data.
We have trained CNNs for different tasks including sex classification and artificial lesion classification based on structural magnetic resonance imaging (MRI) data.
The resulting images reveal the learned concepts of the artificial lesions, including their shapes, but remain hard to interpret for abstract features in the sex classification task (a minimal activation-maximization sketch follows this entry).
arXiv Detail & Related papers (2022-03-24T15:24:38Z)
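A bare-bones version of such feature visualization is sketched below: gradient ascent on a noise input until one convolutional channel responds strongly. The tiny untrained CNN and the chosen channel are illustrative assumptions, not the models trained on MRI data in that work.

```python
# Minimal sketch of feature visualization by activation maximization:
# optimize an input image so that one convolutional channel fires
# strongly. The untrained toy CNN is a stand-in for a trained model.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv2d(1, 8, 5, padding=2), nn.ReLU(),
    nn.Conv2d(8, 16, 5, padding=2), nn.ReLU(),
)
for p in model.parameters():          # freeze the network; optimize the input
    p.requires_grad_(False)

img = torch.randn(1, 1, 64, 64, requires_grad=True)   # start from noise
opt = torch.optim.Adam([img], lr=0.05)
channel = 3                           # the unit whose preferred input we seek

for step in range(200):
    opt.zero_grad()
    activation = model(img)[0, channel].mean()
    (-activation).backward()          # minimizing the negative = gradient ascent
    opt.step()

print("final mean activation:", float(model(img)[0, channel].mean()))
```

The optimized `img` is then inspected visually; for concrete concepts (e.g. lesion shapes) it tends to be readable, while abstract tasks yield harder-to-interpret patterns, matching the entry above.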
- BI-RADS-Net: An Explainable Multitask Learning Approach for Cancer Diagnosis in Breast Ultrasound Images [69.41441138140895]
This paper introduces BI-RADS-Net, a novel explainable deep learning approach for cancer detection in breast ultrasound images.
The proposed approach incorporates tasks for explaining and classifying breast tumors, by learning feature representations relevant to clinical diagnosis.
Explanations of the predictions (benign or malignant) are provided in terms of morphological features that are used by clinicians for diagnosis and reporting in medical practice.
arXiv Detail & Related papers (2021-10-05T19:14:46Z)
- In-Line Image Transformations for Imbalanced, Multiclass Computer Vision Classification of Lung Chest X-Rays [91.3755431537592]
This study leverages image transformations from the literature to offset the scarcity of COVID-19 LCXR data.
Deep learning techniques such as convolutional neural networks (CNNs) are able to select features that distinguish between healthy and disease states.
This study utilizes a simple CNN architecture for high-performance multiclass LCXR classification, reaching 94 percent accuracy (a minimal oversampling-by-augmentation sketch follows this entry).
arXiv Detail & Related papers (2021-04-06T02:01:43Z)
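As referenced in the entry above, class imbalance can be offset by oversampling a minority class with random transformations. The torchvision transforms, tensor shapes, and target count below are illustrative assumptions, not the study's exact in-line pipeline.

```python
# Sketch of balancing a minority class by oversampling with random
# transformations; standard torchvision transforms stand in for the
# "in-line" augmentations. Data and counts are toy assumptions.
import torch
from torchvision import transforms

torch.manual_seed(0)
augment = transforms.Compose([
    transforms.RandomRotation(10),
    transforms.RandomHorizontalFlip(),
    transforms.RandomAffine(0, translate=(0.05, 0.05)),
])

minority = [torch.rand(1, 64, 64) for _ in range(20)]   # e.g. scarce COVID-19 LCXRs
target_count = 200

balanced = list(minority)
while len(balanced) < target_count:
    img = minority[len(balanced) % len(minority)]
    balanced.append(augment(img))                       # append a transformed copy

print(len(balanced), "minority samples after augmentation")
```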
- Explaining Predictions of Deep Neural Classifier via Activation Analysis [0.11470070927586014]
We present a novel approach to explaining and supporting the interpretation of the decision-making process for a human expert operating a deep learning system based on a Convolutional Neural Network (CNN).
Our results indicate that our method is capable of detecting distinct prediction strategies that enable us to identify the most similar predictions from an existing atlas (a minimal activation-retrieval sketch follows this entry).
arXiv Detail & Related papers (2020-12-03T20:36:19Z)
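One reading of the atlas idea referenced above is retrieval in activation space: rank reference activations by cosine similarity to the query's activation and return the closest cases as example-based explanations. The atlas size, dimensionality, and synthetic data below are assumptions for illustration.

```python
# Sketch of explanation-by-retrieval over activations: a query image's
# hidden activation is compared against an "atlas" of reference
# activations, and the nearest references serve as example-based
# explanations. All activations here are synthetic stand-ins.
import numpy as np

rng = np.random.default_rng(0)
atlas = rng.normal(size=(1000, 256))          # reference activations (assumed precomputed)
atlas /= np.linalg.norm(atlas, axis=1, keepdims=True)

def nearest_references(query_act, k=5):
    q = query_act / np.linalg.norm(query_act)
    sims = atlas @ q                          # cosine similarity to every atlas entry
    top = np.argsort(-sims)[:k]
    return list(zip(top.tolist(), sims[top].round(3).tolist()))

query = rng.normal(size=256)                  # hidden activation of a query image
print(nearest_references(query))              # (atlas index, similarity) pairs
```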
- Explaining Clinical Decision Support Systems in Medical Imaging using Cycle-Consistent Activation Maximization [112.2628296775395]
Clinical decision support using deep neural networks has become a topic of steadily growing interest.
However, clinicians are often hesitant to adopt the technology because its underlying decision-making process is considered opaque and difficult to comprehend.
We propose a novel decision explanation scheme based on CycleGAN activation maximization, which generates high-quality visualizations of classifier decisions even on smaller data sets.
arXiv Detail & Related papers (2020-10-09T14:39:27Z)
- Interpretable Deep Models for Cardiac Resynchronisation Therapy Response Prediction [8.152884957975354]
We propose a novel framework for image-based classification based on a variational autoencoder (VAE).
The VAE disentangles the latent space based on explanations drawn from existing clinical knowledge.
We demonstrate our framework on the problem of predicting the response of patients with cardiomyopathy to cardiac resynchronisation therapy (CRT) from cine cardiac magnetic resonance images (a minimal latent-disentanglement sketch follows this entry).
arXiv Detail & Related papers (2020-06-24T15:35:47Z)
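The latent-disentanglement idea referenced in the last entry can be caricatured with a small sketch: a VAE whose first latent dimensions are additionally trained to predict a clinical attribute, so that clinically meaningful variation is pushed into designated coordinates. The architecture sizes, loss weights, and toy data are assumptions, not the paper's exact model.

```python
# Sketch of latent disentanglement via an auxiliary head: a VAE encoder
# is trained so that designated latent dimensions predict a clinical
# attribute, concentrating that knowledge in interpretable coordinates.
import torch
import torch.nn as nn

torch.manual_seed(0)
latent = 8
enc = nn.Sequential(nn.Flatten(), nn.Linear(32 * 32, 64), nn.ReLU(),
                    nn.Linear(64, 2 * latent))
dec = nn.Sequential(nn.Linear(latent, 64), nn.ReLU(), nn.Linear(64, 32 * 32))
aux = nn.Linear(2, 1)                  # first two latent dims -> clinical attribute

opt = torch.optim.Adam([*enc.parameters(), *dec.parameters(), *aux.parameters()],
                       lr=1e-3)
x = torch.rand(16, 1, 32, 32)          # toy images
attr = torch.rand(16, 1)               # toy clinical measurement

for _ in range(100):
    mu, logvar = enc(x).chunk(2, dim=1)
    z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()   # reparameterization
    recon = dec(z).view_as(x)
    kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).mean()
    loss = (nn.functional.mse_loss(recon, x) + 1e-3 * kl
            + nn.functional.mse_loss(aux(z[:, :2]), attr))  # disentanglement pressure
    opt.zero_grad(); loss.backward(); opt.step()

print("final loss:", float(loss))
```

Traversing the supervised latent dimensions after training then shows how the predicted attribute changes, which is the basis for clinically grounded explanations.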
This list is automatically generated from the titles and abstracts of the papers on this site.