Evaluating Visual Explanations of Attention Maps for Transformer-based Medical Imaging
- URL: http://arxiv.org/abs/2503.09535v1
- Date: Wed, 12 Mar 2025 16:52:52 GMT
- Title: Evaluating Visual Explanations of Attention Maps for Transformer-based Medical Imaging
- Authors: Minjae Chung, Jong Bum Won, Ganghyun Kim, Yujin Kim, Utku Ozbulak,
- Abstract summary: We compare visual explanations of attention maps to other commonly used methods for medical imaging problems. We find that attention maps show promise under certain conditions and generally surpass GradCAM in explainability. Our findings indicate that the efficacy of attention maps as a method of interpretability is context-dependent and may be limited, as they do not consistently provide the comprehensive insights required for robust medical decision-making.
- Score: 2.6505619784178047
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Although Vision Transformers (ViTs) have recently demonstrated superior performance in medical imaging problems, they face explainability issues similar to those of previous architectures such as convolutional neural networks. Recent research efforts suggest that attention maps, which are part of the decision-making process of ViTs, can potentially address the explainability issue by identifying regions influencing predictions, especially in models pretrained with self-supervised learning. In this work, we compare the visual explanations of attention maps to other commonly used methods for medical imaging problems. To do so, we employ four distinct medical imaging datasets that involve the identification of (1) colonic polyps, (2) breast tumors, (3) esophageal inflammation, and (4) bone fractures and hardware implants. Through large-scale experiments on the aforementioned datasets using various supervised and self-supervised pretrained ViTs, we find that although attention maps show promise under certain conditions and generally surpass GradCAM in explainability, they are outperformed by transformer-specific interpretability methods. Our findings indicate that the efficacy of attention maps as a method of interpretability is context-dependent and may be limited, as they do not consistently provide the comprehensive insights required for robust medical decision-making.
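As a concrete illustration of what is being compared, the sketch below derives two explanations from ViT attention: the raw last-layer CLS attention map and attention rollout, a transformer-specific interpretability method of the kind the abstract reports outperforming plain attention maps. This is a minimal, hypothetical sketch rather than the authors' pipeline; the attention tensor shapes, head-averaging, and a CLS token at index 0 are assumptions.

```python
# Minimal sketch (not the paper's exact pipeline): given per-layer ViT attention
# matrices, derive (1) a raw last-layer CLS attention map and (2) attention rollout,
# which propagates attention through all layers.
# Assumes attentions[l] has shape (num_heads, num_tokens, num_tokens), with the
# CLS token at index 0 and the remaining tokens forming a square patch grid.
import torch

def cls_attention_map(attentions):
    """Raw explanation: last-layer attention from the CLS token, averaged over heads."""
    last = attentions[-1].mean(dim=0)          # (tokens, tokens)
    cls_to_patches = last[0, 1:]               # attention of CLS to every patch
    side = int(cls_to_patches.numel() ** 0.5)
    return cls_to_patches.reshape(side, side)

def attention_rollout(attentions):
    """Attention rollout: multiply head-averaged attention across layers,
    adding the identity to account for residual connections."""
    num_tokens = attentions[0].shape[-1]
    rollout = torch.eye(num_tokens)
    for attn in attentions:
        a = attn.mean(dim=0)                   # average heads
        a = a + torch.eye(num_tokens)          # residual connection
        a = a / a.sum(dim=-1, keepdim=True)    # re-normalize rows
        rollout = a @ rollout
    cls_to_patches = rollout[0, 1:]
    side = int(cls_to_patches.numel() ** 0.5)
    return cls_to_patches.reshape(side, side)

# Example with dummy attention tensors (12 layers, 6 heads, 197 tokens = 1 CLS + 14x14 patches):
attentions = [torch.rand(6, 197, 197).softmax(dim=-1) for _ in range(12)]
print(cls_attention_map(attentions).shape, attention_rollout(attentions).shape)  # (14, 14) each
```

GradCAM-style baselines instead weight feature maps by class gradients rather than reading attention weights directly, which is one reason the two families of explanations can disagree on the same image.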
Related papers
- Hierarchical Vision Transformer with Prototypes for Interpretable Medical Image Classification [0.0]
We present HierViT, a Vision Transformer that is inherently interpretable and adapts its reasoning to that of humans. It is evaluated on two medical benchmark datasets, LIDC-IDRI for lung assessment and derm7pt for skin lesion classification.
arXiv Detail & Related papers (2025-02-13T06:24:07Z)
- GEM: Context-Aware Gaze EstiMation with Visual Search Behavior Matching for Chest Radiograph [32.1234295417225]
We propose a context-aware Gaze EstiMation (GEM) network that utilizes eye gaze data collected from radiologists to simulate their visual search behavior patterns.
It consists of a context-awareness module, visual behavior graph construction, and visual behavior matching.
Experiments on four publicly available datasets demonstrate the superiority of GEM over existing methods.
arXiv Detail & Related papers (2024-08-10T09:46:25Z)
- An Early Investigation into the Utility of Multimodal Large Language Models in Medical Imaging [0.3029213689620348]
We explore the potential of the Gemini (gemini-1.0-pro-vision-latest) and GPT-4V models for medical image analysis.
Both Gemini AI and GPT-4V are first used to classify real versus synthetic images, followed by an interpretation and analysis of the input images.
Our early investigation presented in this work provides insights into the potential of MLLMs to assist with the classification and interpretation of retinal fundoscopy and lung X-ray images.
arXiv Detail & Related papers (2024-06-02T08:29:23Z)
- Mining Gaze for Contrastive Learning toward Computer-Assisted Diagnosis [61.089776864520594]
We propose eye-tracking as an alternative to text reports for medical images.
By tracking the gaze of radiologists as they read and diagnose medical images, we can understand their visual attention and clinical reasoning.
We introduce the Medical contrastive Gaze Image Pre-training (McGIP) as a plug-and-play module for contrastive learning frameworks.
arXiv Detail & Related papers (2023-12-11T02:27:45Z)
- A Recent Survey of Vision Transformers for Medical Image Segmentation [2.4895533667182703]
Vision Transformers (ViTs) have emerged as a promising technique for addressing the challenges in medical image segmentation.
Their multi-scale attention mechanism enables effective modeling of long-range dependencies between distant structures.
Recently, researchers have come up with various ViT-based approaches that incorporate CNNs in their architectures, known as Hybrid Vision Transformers (HVTs).
arXiv Detail & Related papers (2023-12-01T14:54:44Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model.
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
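As a rough sketch of the concept-bottleneck pattern described in the entry above (assumptions only, not that paper's implementation): image features are scored against embeddings of clinical concept names, and a linear head classifies from those interpretable concept scores. The concept list, feature dimensions, and cosine-similarity scoring below are illustrative.

```python
# Generic concept-bottleneck sketch: score image features against concept embeddings,
# then classify from the interpretable concept scores. Dimensions and the concept
# list are illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

concepts = ["spiculated margin", "calcification", "irregular shape"]  # hypothetical clinical concepts

class ConceptBottleneckClassifier(nn.Module):
    def __init__(self, concept_embeddings, num_classes=2):
        super().__init__()
        # Rows are frozen text embeddings of the concepts, e.g., from a vision-language model.
        self.register_buffer("concept_embeddings", F.normalize(concept_embeddings, dim=-1))
        self.head = nn.Linear(concept_embeddings.shape[0], num_classes)

    def forward(self, image_features):                               # (batch, embed_dim)
        image_features = F.normalize(image_features, dim=-1)
        concept_scores = image_features @ self.concept_embeddings.T  # (batch, num_concepts)
        return self.head(concept_scores), concept_scores             # scores double as the explanation

# Usage with dummy 512-d features standing in for vision-language embeddings:
model = ConceptBottleneckClassifier(torch.randn(len(concepts), 512))
logits, scores = model(torch.randn(4, 512))
print(logits.shape, scores.shape)  # torch.Size([4, 2]) torch.Size([4, 3])
```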
- Towards Evaluating Explanations of Vision Transformers for Medical Imaging [7.812073412066698]
Vision Transformer (ViT) is a promising alternative to convolutional neural networks for image classification.
This paper investigates the performance of various interpretation methods on a ViT applied to classify chest X-ray images.
arXiv Detail & Related papers (2023-04-12T19:37:28Z)
- Data-Efficient Vision Transformers for Multi-Label Disease Classification on Chest Radiographs [55.78588835407174]
Vision Transformers (ViTs) have not been applied to this task despite their high classification performance on generic images.
ViTs do not rely on convolutions but on patch-based self-attention, and, in contrast to CNNs, no prior knowledge of local connectivity is present.
Our results show that while the performance between ViTs and CNNs is on par with a small benefit for ViTs, DeiTs outperform the former if a reasonably large data set is available for training.
arXiv Detail & Related papers (2022-08-17T09:07:45Z)
- An Interpretable Multiple-Instance Approach for the Detection of Referable Diabetic Retinopathy from Fundus Images [72.94446225783697]
We propose a machine learning system for the detection of referable Diabetic Retinopathy in fundus images.
By extracting local information from image patches and combining it efficiently through an attention mechanism, our system is able to achieve high classification accuracy.
We evaluate our approach on publicly available retinal image datasets, in which it exhibits near state-of-the-art performance.
arXiv Detail & Related papers (2021-03-02T13:14:15Z)
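The multiple-instance entry above combines local patch information through an attention mechanism; below is a minimal sketch of that style of attention-based patch pooling (standard attention MIL, not necessarily the paper's exact architecture), where the learned patch weights double as the interpretability signal.

```python
# Minimal attention-based multiple-instance pooling sketch (illustrative, not that
# paper's implementation): patch embeddings are aggregated with learned attention
# weights, which indicate the patches that drove the image-level prediction.
import torch
import torch.nn as nn

class AttentionMILHead(nn.Module):
    def __init__(self, feat_dim=512, attn_dim=128, num_classes=2):
        super().__init__()
        self.attention = nn.Sequential(
            nn.Linear(feat_dim, attn_dim),
            nn.Tanh(),
            nn.Linear(attn_dim, 1),
        )
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, patch_features):               # (num_patches, feat_dim)
        scores = self.attention(patch_features)      # (num_patches, 1)
        weights = torch.softmax(scores, dim=0)       # attention over patches
        bag = (weights * patch_features).sum(dim=0)  # weighted bag embedding
        return self.classifier(bag), weights.squeeze(-1)

# Usage with dummy patch features for one fundus image split into 64 patches:
head = AttentionMILHead()
logits, patch_weights = head(torch.randn(64, 512))
print(logits.shape, patch_weights.shape)  # torch.Size([2]) torch.Size([64])
```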
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Proactive Pseudo-Intervention: Causally Informed Contrastive Learning for Interpretable Vision Models [103.64435911083432]
We present a novel contrastive learning strategy called Proactive Pseudo-Intervention (PPI).
PPI leverages proactive interventions to guard against image features with no causal relevance.
We also devise a novel causally informed salience mapping module to identify key image pixels to intervene, and show it greatly facilitates model interpretability.
arXiv Detail & Related papers (2020-12-06T20:30:26Z)