Using StyleGAN for Visual Interpretability of Deep Learning Models on
Medical Images
- URL: http://arxiv.org/abs/2101.07563v1
- Date: Tue, 19 Jan 2021 11:13:20 GMT
- Title: Using StyleGAN for Visual Interpretability of Deep Learning Models on
Medical Images
- Authors: Kathryn Schutte, Olivier Moindrot, Paul Hérent, Jean-Baptiste
Schiratti, Simon Jégou
- Abstract summary: We propose a new interpretability method that can be used to understand the predictions of any black-box model on images.
A StyleGAN is trained on medical images to provide a mapping between latent vectors and images.
The method identifies the direction in this latent space that changes the model's prediction; by shifting the latent representation of an input image along this direction, we can produce a series of new synthetic images with changed predictions.
- Score: 0.7874708385247352
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As AI-based medical devices are becoming more common in imaging fields like
radiology and histology, interpretability of the underlying predictive models
is crucial to expand their use in clinical practice. Existing heatmap-based
interpretability methods such as GradCAM only highlight the location of
predictive features but do not explain how they contribute to the prediction.
In this paper, we propose a new interpretability method that can be used to
understand the predictions of any black-box model on images, by showing how the
input image would be modified in order to produce different predictions. A
StyleGAN is trained on medical images to provide a mapping between latent
vectors and images. Our method identifies the optimal direction in the latent
space to create a change in the model prediction. By shifting the latent
representation of an input image along this direction, we can produce a series
of new synthetic images with changed predictions. We validate our approach on
histology and radiology images, and demonstrate its ability to provide
meaningful explanations that are more informative than GradCAM heatmaps. Our
method reveals the patterns learned by the model, which allows clinicians to
build trust in the model's predictions, discover new biomarkers and eventually
reveal potential biases.
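The core loop of the method can be sketched in a few lines: generate an image from a latent code, query the black-box classifier, find a latent direction that changes the prediction, and walk along it. The sketch below is a toy stand-in, not the paper's implementation: the StyleGAN generator and classifier are replaced by simple fixed functions, and finite differences stand in for the paper's direction optimization.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the paper's components: the method assumes a trained
# StyleGAN generator and a black-box classifier; both are simple fixed
# functions here so the sketch runs end to end.
W_gen = rng.normal(size=(16, 64))              # latent (16-d) -> "image" (64-d)
w_clf = rng.normal(size=64) / 8.0              # scaled to keep sigmoid unsaturated

def generate(z):
    return np.tanh(z @ W_gen)                  # synthetic "image" from latent z

def predict(x):
    return 1.0 / (1.0 + np.exp(-x @ w_clf))    # black-box probability

# Estimate a latent direction that increases the prediction, using finite
# differences as a simple stand-in for the paper's direction optimization.
def latent_direction(z, eps=1e-4):
    base = predict(generate(z))
    grad = np.array([(predict(generate(z + eps * e)) - base) / eps
                     for e in np.eye(len(z))])
    return grad / np.linalg.norm(grad)

z0 = rng.normal(size=16)
d = latent_direction(z0)

# Shift the latent code along the direction to produce a series of
# synthetic images with progressively changed predictions.
series = [predict(generate(z0 + alpha * d)) for alpha in (0.0, 0.5, 1.0, 2.0)]
```

Decoding each shifted latent back to an image is what makes the explanation visual: a clinician sees *what changes* in the image as the prediction moves, rather than only *where* the model looked.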
Related papers
- MLIP: Enhancing Medical Visual Representation with Divergence Encoder
and Knowledge-guided Contrastive Learning [48.97640824497327]
We propose a novel framework leveraging domain-specific medical knowledge as guiding signals to integrate language information into the visual domain through image-text contrastive learning.
Our model includes global contrastive learning with our designed divergence encoder, local token-knowledge-patch alignment contrastive learning, and knowledge-guided category-level contrastive learning with expert knowledge.
Notably, MLIP surpasses state-of-the-art methods even with limited annotated data, highlighting the potential of multimodal pre-training in advancing medical representation learning.
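MLIP's knowledge-guided components are specific to the paper, but the image-text contrastive objective underneath them can be sketched as an InfoNCE-style loss. The embeddings below are random stand-ins for encoder outputs (each text embedding is a noisy copy of its paired image embedding, as if the encoders were already trained), not MLIP's actual encoders.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2norm(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

# Stand-ins for encoder outputs: text embedding i is a noisy copy of
# image embedding i, mimicking trained, aligned encoders.
img = l2norm(rng.normal(size=(4, 8)))
txt = l2norm(img + 0.1 * rng.normal(size=(4, 8)))

# InfoNCE-style image-text contrastive loss: each image should be most
# similar to its own paired text among all texts in the batch.
def info_nce(a, b, temperature=0.1):
    logits = (a @ b.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    idx = np.arange(len(a))
    return float(-np.log(probs[idx, idx]).mean())

loss = info_nce(img, txt)
```

MLIP applies this idea at several granularities (global, token-patch, category level); the loss above corresponds only to the global image-text term.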
arXiv Detail & Related papers (2024-02-03T05:48:50Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
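The two-stage structure described above (image features mapped to named concepts, then a linear readout over concepts only) can be sketched as follows. Everything here is a hypothetical stand-in: the concept embeddings would come from clinical concept names encoded by a vision-language model, whereas here they are random.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: in the paper, concept names are queried from
# GPT-4 and embedded by a vision-language model; here they are random.
concept_embs = rng.normal(size=(5, 16))   # 5 clinical concepts, 16-d each
readout = rng.normal(size=(5, 2))         # linear layer: concepts -> 2 classes

def classify(image_emb):
    # Bottleneck: the image is reduced to per-concept scores, so every
    # downstream decision is attributable to named clinical concepts.
    concept_scores = concept_embs @ image_emb
    logits = concept_scores @ readout
    return concept_scores, int(np.argmax(logits))

scores, label = classify(rng.normal(size=16))
```

The interpretability comes from the bottleneck itself: the classifier can only use the five concept scores, so each prediction decomposes into named, human-checkable contributions.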
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- A Lightweight Generative Model for Interpretable Subject-level Prediction [0.07989135005592125]
We propose a technique for single-subject prediction that is inherently interpretable.
Experiments demonstrate that the resulting model can be efficiently inverted to make accurate subject-level predictions.
arXiv Detail & Related papers (2023-06-19T18:20:29Z)
- Prototype Learning for Explainable Brain Age Prediction [1.104960878651584]
We present ExPeRT, an explainable prototype-based model specifically designed for regression tasks.
Our proposed model makes a sample prediction from the distances to a set of learned prototypes in latent space, using a weighted mean of prototype labels.
Our approach achieved state-of-the-art prediction performance while providing insight into the model's reasoning process.
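The prediction rule described above (a weighted mean of prototype labels, weighted by latent-space distance) is compact enough to sketch directly. The prototype positions, labels, and distance weighting below are toy values, not ExPeRT's learned prototypes or metric.

```python
import numpy as np

# Toy stand-ins for learned prototypes in latent space and their labels
# (e.g. brain ages of representative training subjects).
prototypes = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 0.0]])
proto_labels = np.array([20.0, 50.0, 80.0])

def predict_age(z, temperature=1.0):
    # Distance from the sample's latent embedding to each prototype.
    dists = np.linalg.norm(prototypes - z, axis=1)
    # Closer prototypes get larger weights (softmax over negative distance).
    weights = np.exp(-dists / temperature)
    weights /= weights.sum()
    # Prediction is the weighted mean of prototype labels; the weights
    # themselves are the explanation ("which known cases drove this").
    return float(weights @ proto_labels), weights

age, w = predict_age(np.array([0.9, 0.9]))
```

A sample embedded near the second prototype inherits most of its label, and the weight vector shows exactly which reference cases the prediction leaned on.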
arXiv Detail & Related papers (2023-06-16T14:13:21Z)
- TempSAL -- Uncovering Temporal Information for Deep Saliency Prediction [64.63645677568384]
We introduce a novel saliency prediction model that learns to output saliency maps in sequential time intervals.
Our approach locally modulates the saliency predictions by combining the learned temporal maps.
Our code will be publicly available on GitHub.
arXiv Detail & Related papers (2023-01-05T22:10:16Z)
- Visual Interpretable and Explainable Deep Learning Models for Brain Tumor MRI and COVID-19 Chest X-ray Images [0.0]
We evaluate attribution methods for illuminating how deep neural networks analyze medical images.
We attribute predictions from brain tumor MRI and COVID-19 chest X-ray datasets made by recent deep convolutional neural network models.
arXiv Detail & Related papers (2022-08-01T16:05:14Z)
- Contrastive Brain Network Learning via Hierarchical Signed Graph Pooling Model [64.29487107585665]
Graph representation learning techniques on brain functional networks can facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
Here, we propose an interpretable hierarchical signed graph representation learning model to extract graph-level representations from brain functional networks.
In order to further improve the model performance, we also propose a new strategy to augment functional brain network data for contrastive learning.
arXiv Detail & Related papers (2022-07-14T20:03:52Z)
- Interpretable Mammographic Image Classification using Case-Based Reasoning and Deep Learning [20.665935997959025]
We present a novel interpretable neural network algorithm that uses case-based reasoning for mammography.
Our network presents both a prediction of malignancy and an explanation of that prediction using known medical features.
arXiv Detail & Related papers (2021-07-12T17:42:09Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Deducing neighborhoods of classes from a fitted model [68.8204255655161]
This article presents a new kind of interpretable machine learning method.
It helps to understand how a classification model partitions the feature space into predicted classes, using quantile shifts.
Real data points (or specific points of interest) are perturbed by slightly raising or lowering specific features, and the resulting changes in the prediction are observed.
arXiv Detail & Related papers (2020-09-11T16:35:53Z)
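The probing procedure in that last summary is simple to illustrate: nudge one feature of a real point up or down and check whether the predicted class flips. In this sketch the classifier is a toy linear model and the shift size is a fixed fraction of each feature's standard deviation, a simplified stand-in for the paper's quantile shifts.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy black-box classifier; the method only needs access to predictions.
w = np.array([2.0, -1.0, 0.5])
def predict_class(x):
    return int(x @ w > 0.0)

# Shift sizes derived from the data spread; a fixed fraction of the
# standard deviation stands in for the paper's quantile shifts.
X = rng.normal(size=(200, 3))
spread = X.std(axis=0)

# For a point of interest, raise and lower each feature slightly and
# record whether the predicted class changes.
def probe(x, frac=0.5):
    base = predict_class(x)
    flips = {}
    for j in range(len(x)):
        for sign in (1, -1):
            shifted = x.copy()
            shifted[j] += sign * frac * spread[j]
            flips[(j, sign)] = predict_class(shifted) != base
    return base, flips

base, flips = probe(np.array([0.05, 0.0, 0.0]))
```

Features whose small shifts flip the class mark the local decision boundary, which is how the method sketches the neighborhoods of classes around real points.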
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.