CheXplaining in Style: Counterfactual Explanations for Chest X-rays
using StyleGAN
- URL: http://arxiv.org/abs/2207.07553v1
- Date: Fri, 15 Jul 2022 15:51:08 GMT
- Title: CheXplaining in Style: Counterfactual Explanations for Chest X-rays
using StyleGAN
- Authors: Matan Atad, Vitalii Dmytrenko, Yitong Li, Xinyue Zhang, Matthias
Keicher, Jan Kirschke, Bene Wiestler, Ashkan Khakzar, Nassir Navab
- Abstract summary: We create counterfactual explanations for chest X-rays by manipulating specific latent directions in their latent space.
We clinically evaluate the relevancy of our counterfactual explanations with the help of radiologists.
- Score: 42.4322542603446
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep learning models used in medical image analysis are prone to raising
reliability concerns due to their black-box nature. To shed light on these
black-box models, previous works predominantly focus on identifying the
contribution of input features to the diagnosis, i.e., feature attribution. In
this work, we explore counterfactual explanations to identify what patterns the
models rely on for diagnosis. Specifically, we investigate the effect of
changing features within chest X-rays on the classifier's output to understand
its decision mechanism. We leverage a StyleGAN-based approach (StyleEx) to
create counterfactual explanations for chest X-rays by manipulating specific
latent directions in their latent space. In addition, we propose EigenFind to
significantly reduce the computation time of generated explanations. We
clinically evaluate the relevancy of our counterfactual explanations with the
help of radiologists. Our code is publicly available.
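The latent-direction manipulation described in the abstract can be sketched in a toy form. Everything below is an illustrative assumption, not the paper's StyleEx implementation: the "generator" and "classifier" are linear stand-ins (a real pipeline would use StyleGAN's synthesis network and a CNN), and the names `generate`, `classify`, and `direction` are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "generator": maps a latent code w to an "image" via a fixed
# linear map (StyleEx would use StyleGAN's synthesis network here).
G = rng.normal(size=(16, 8))          # latent dim 8 -> image dim 16
def generate(w):
    return G @ w

# Stand-in "classifier": a linear probe with a sigmoid output.
c = rng.normal(size=16)
def classify(x):
    return 1.0 / (1.0 + np.exp(-(c @ x)))   # probability of "pathology"

# A classifier-relevant latent direction: for this linear toy it is the
# gradient of the logit with respect to w, i.e. G^T c. (StyleEx instead
# searches style-space directions; EigenFind accelerates that search.)
direction = G.T @ c
direction /= np.linalg.norm(direction)

# Walk the latent code along the direction until the prediction flips;
# the decoded result is the counterfactual "image".
w = rng.normal(size=8)
p0 = classify(generate(w))
step = -1.0 if p0 >= 0.5 else 1.0     # move toward the opposite class
for _ in range(100):
    if (classify(generate(w)) >= 0.5) != (p0 >= 0.5):
        break
    w = w + step * 0.1 * direction

counterfactual = generate(w)
p1 = classify(counterfactual)
```

Because the toy logit is exactly linear in `w`, each step changes it by a fixed amount, so the walk is guaranteed to cross the decision boundary; the real method's value lies in the fact that StyleGAN keeps the traversal on the manifold of realistic X-rays.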
Related papers
- Explaining Chest X-ray Pathology Models using Textual Concepts [9.67960010121851]
We propose Conceptual Counterfactual Explanations for Chest X-ray (CoCoX)
We leverage the joint embedding space of an existing vision-language model (VLM) to explain black-box classifier outcomes without the need for annotated datasets.
We demonstrate that the explanations generated by our method are semantically meaningful and faithful to underlying pathologies.
arXiv Detail & Related papers (2024-06-30T01:31:54Z)
- Position-Guided Prompt Learning for Anomaly Detection in Chest X-Rays [46.78926066405227]
Anomaly detection in chest X-rays is a critical task.
Recently, CLIP-based methods, pre-trained on a large number of medical images, have shown impressive performance on zero/few-shot downstream tasks.
We propose a position-guided prompt learning method to adapt the task data to the frozen CLIP-based model.
arXiv Detail & Related papers (2024-05-20T12:11:41Z)
- I-AI: A Controllable & Interpretable AI System for Decoding Radiologists' Intense Focus for Accurate CXR Diagnoses [9.260958560874812]
Interpretable Artificial Intelligence (I-AI) is a novel and unified controllable interpretable pipeline.
Our I-AI addresses three key questions: where a radiologist looks, how long they focus on specific areas, and what findings they diagnose.
arXiv Detail & Related papers (2023-09-24T04:48:44Z)
- Chest X-ray Image Classification: A Causal Perspective [49.87607548975686]
We propose a causal approach to address the CXR classification problem, which constructs a structural causal model (SCM) and uses the backdoor adjustment to select effective visual information for CXR classification.
Experimental results demonstrate that our proposed method outperforms existing approaches on the open-source NIH ChestX-ray14 dataset in terms of classification performance.
arXiv Detail & Related papers (2023-05-20T03:17:44Z)
- Xplainer: From X-Ray Observations to Explainable Zero-Shot Diagnosis [36.45569352490318]
We introduce Xplainer, a framework for explainable zero-shot diagnosis in the clinical setting.
Xplainer adapts the classification-by-description approach of contrastive vision-language models to the multi-label medical diagnosis task.
Our results suggest that Xplainer provides a more detailed understanding of the decision-making process.
arXiv Detail & Related papers (2023-03-23T16:07:31Z)
- The Manifold Hypothesis for Gradient-Based Explanations [55.01671263121624]
Gradient-based explanation algorithms can provide perceptually-aligned explanations.
We show that the more a feature attribution is aligned with the tangent space of the data, the more perceptually-aligned it tends to be.
We suggest that explanation algorithms should actively strive to align their explanations with the data manifold.
arXiv Detail & Related papers (2022-06-15T08:49:24Z)
- Cross-Modal Contrastive Learning for Abnormality Classification and Localization in Chest X-rays with Radiomics using a Feedback Loop [63.81818077092879]
We propose an end-to-end semi-supervised cross-modal contrastive learning framework for medical images.
We first apply an image encoder to classify the chest X-rays and to generate the image features.
The radiomic features are then passed through another dedicated encoder to act as the positive sample for the image features generated from the same chest X-ray.
arXiv Detail & Related papers (2021-04-11T09:16:29Z)
- Explaining COVID-19 and Thoracic Pathology Model Predictions by Identifying Informative Input Features [47.45835732009979]
Neural networks have demonstrated remarkable performance in classification and regression tasks on chest X-rays.
Feature attribution methods identify the importance of input features for the output prediction.
We evaluate our methods using both human-centric (ground-truth-based) interpretability metrics, and human-independent feature importance metrics on NIH Chest X-ray8 and BrixIA datasets.
arXiv Detail & Related papers (2021-04-01T11:42:39Z)
- Constructing and Evaluating an Explainable Model for COVID-19 Diagnosis from Chest X-rays [15.664919899567288]
We focus on constructing models to assist a clinician in the diagnosis of COVID-19 patients in situations where it is easier and cheaper to obtain X-ray data than to obtain high-quality images like those from CT scans.
Deep neural networks have repeatedly been shown to be capable of constructing highly predictive models for disease detection directly from image data.
arXiv Detail & Related papers (2020-12-19T21:33:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.