Interactive Medical Image Analysis with Concept-based Similarity Reasoning
- URL: http://arxiv.org/abs/2503.06873v2
- Date: Tue, 11 Mar 2025 09:06:03 GMT
- Title: Interactive Medical Image Analysis with Concept-based Similarity Reasoning
- Authors: Ta Duc Huy, Sen Kim Tran, Phan Nguyen, Nguyen Hoang Tran, Tran Bao Sam, Anton van den Hengel, Zhibin Liao, Johan W. Verjans, Minh-Son To, Vu Minh Hieu Phan
- Abstract summary: The Concept-based Similarity Reasoning network (CSR) provides patch-level prototypes with intrinsic concept interpretation. CSR improves upon prior state-of-the-art interpretable methods by up to 4.5% across three biomedical datasets.
- Score: 32.38056136570339
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The ability to interpret and intervene in model decisions is important for the adoption of computer-aided diagnosis methods in clinical workflows. Recent concept-based methods link the model predictions with interpretable concepts and modify their activation scores to interact with the model. However, these concepts are defined at the image level, which prevents the model from pinpointing the exact patches where the concepts are activated. Alternatively, prototype-based methods learn representations from training image patches and compare these with test image patches, using the similarity scores for the final class prediction. However, interpreting the underlying concepts of these patches can be challenging and often necessitates post-hoc guesswork. To address this issue, this paper introduces the novel Concept-based Similarity Reasoning network (CSR), which offers (i) patch-level prototypes with intrinsic concept interpretation, and (ii) spatial interactivity. First, the proposed CSR provides localized explanations by grounding the prototypes of each concept on image regions. Second, our model introduces a novel spatial-level interaction, allowing doctors to engage directly with specific image areas, making it an intuitive and transparent tool for medical imaging. CSR improves upon prior state-of-the-art interpretable methods by up to 4.5% across three biomedical datasets. Our code is released at https://github.com/tadeephuy/InteractCSR.
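For a concrete picture of the two ingredients highlighted above, concept-grounded patch prototypes and spatial interaction, the following is a minimal PyTorch-style sketch. It is not the released implementation: the class name, prototype count, feature dimensions, cosine-similarity scoring, and max-pooling choices are assumptions made for brevity; the actual code is in the repository linked above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptSimilarityHead(nn.Module):
    """Illustrative head: each concept owns a few learnable prototype vectors.

    Patch features from a backbone are compared against every prototype,
    giving a per-concept spatial similarity map; the maps are pooled into
    per-concept scores and linearly combined into class logits. A user mask
    over patches mimics spatial-level interaction.
    """

    def __init__(self, dim=256, n_concepts=8, protos_per_concept=4, n_classes=3):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_concepts, protos_per_concept, dim))
        self.classifier = nn.Linear(n_concepts, n_classes, bias=False)

    def forward(self, patch_feats, spatial_mask=None):
        # patch_feats: (B, P, D) patch embeddings; spatial_mask: optional (B, P) in [0, 1].
        x = F.normalize(patch_feats, dim=-1)                    # (B, P, D)
        protos = F.normalize(self.prototypes, dim=-1)           # (C, K, D)
        sim = torch.einsum("bpd,ckd->bcpk", x, protos)          # cosine similarity per patch/prototype
        sim_map = sim.max(dim=-1).values                        # (B, C, P): best prototype per concept
        if spatial_mask is not None:
            # Spatial interaction: down-/up-weight patches selected by the user.
            sim_map = sim_map * spatial_mask.unsqueeze(1)
        concept_scores = sim_map.max(dim=-1).values             # (B, C) pooled concept activations
        logits = self.classifier(concept_scores)                # (B, n_classes)
        return logits, concept_scores, sim_map                  # sim_map gives patch-level explanations


if __name__ == "__main__":
    head = ConceptSimilarityHead()
    feats = torch.randn(2, 14 * 14, 256)                        # e.g. a 14x14 patch grid
    logits, scores, maps = head(feats)
    print(logits.shape, scores.shape, maps.shape)               # (2, 3) (2, 8) (2, 8, 196)
```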
Related papers
- COMIX: Compositional Explanations using Prototypes [46.15031477955461]
We propose a method to align machine representations with human understanding. The proposed method, named COMIX, classifies an image by decomposing it into regions based on learned concepts. We show that our method provides faithful explanations and that its efficiency is competitive with other inherently interpretable architectures.
arXiv Detail & Related papers (2025-01-10T15:40:31Z)
- Label-free Concept Based Multiple Instance Learning for Gigapixel Histopathology [3.3353153750791718]
Multiple Instance Learning (MIL) methods allow for gigapixel Whole-Slide Image (WSI) analysis with only slide-level annotations. We propose a novel inherently interpretable WSI-classification approach that uses human-understandable pathology concepts to generate explanations. (A generic attention-MIL pooling sketch is given after this list.)
arXiv Detail & Related papers (2025-01-06T11:03:04Z)
- Visual-TCAV: Concept-based Attribution and Saliency Maps for Post-hoc Explainability in Image Classification [3.9626211140865464]
Convolutional Neural Networks (CNNs) have seen significant performance improvements in recent years.
However, due to their size and complexity, they function as black-boxes, leading to transparency concerns.
This paper introduces a novel post-hoc explainability framework, Visual-TCAV, which aims to bridge the gap between saliency-based and concept-based explanation methods.
arXiv Detail & Related papers (2024-11-08T16:52:52Z)
- Autoregressive Sequence Modeling for 3D Medical Image Representation [48.706230961589924]
We introduce a pioneering method for learning 3D medical image representations through an autoregressive sequence pre-training framework.
Our approach orders various 3D medical images based on spatial, contrast, and semantic correlations, treating them as interconnected visual tokens within a token sequence.
arXiv Detail & Related papers (2024-09-13T10:19:10Z)
- Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models [57.86303579812877]
Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions.
Existing approaches often require numerous human interventions per image to achieve strong performance.
We introduce a trainable concept realignment intervention module, which leverages concept relations to realign concept assignments post-intervention.
arXiv Detail & Related papers (2024-05-02T17:59:01Z)
- MICA: Towards Explainable Skin Lesion Diagnosis via Multi-Level Image-Concept Alignment [4.861768967055006]
We propose a multi-modal explainable disease diagnosis framework that meticulously aligns medical images and clinical-related concepts semantically at multiple strata.
Our method, while preserving model interpretability, attains high performance and label efficiency for concept detection and disease diagnosis.
arXiv Detail & Related papers (2024-01-16T17:45:01Z)
- Cross-Modal Conceptualization in Bottleneck Models [21.2577097041883]
Concept Bottleneck Models (CBMs) assume that training examples (e.g., x-ray images) are annotated with high-level concepts.
In our approach, we adopt a more moderate assumption and instead use text descriptions, accompanying the images in training, to guide the induction of concepts.
Our cross-modal approach treats concepts as discrete latent variables and promotes concepts that (1) are predictive of the label, and (2) can be predicted reliably from both the image and text.
arXiv Detail & Related papers (2023-10-23T11:00:19Z)
- Robust and Interpretable Medical Image Classifiers via Concept Bottleneck Models [49.95603725998561]
We propose a new paradigm to build robust and interpretable medical image classifiers with natural language concepts.
Specifically, we first query clinical concepts from GPT-4, then transform latent image features into explicit concepts with a vision-language model. (A simplified concept-bottleneck sketch is given after this list.)
arXiv Detail & Related papers (2023-10-04T21:57:09Z)
- Improving Human-Object Interaction Detection via Virtual Image Learning [68.56682347374422]
Human-Object Interaction (HOI) detection aims to understand the interactions between humans and objects.
In this paper, we propose to alleviate the impact of unbalanced data distribution via Virtual Image Learning (VIL).
A novel label-to-image approach, Multiple Steps Image Creation (MUSIC), is proposed to create a high-quality dataset that has a consistent distribution with real images.
arXiv Detail & Related papers (2023-08-04T10:28:48Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
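The Label-free Concept Based Multiple Instance Learning entry above relies on slide-level (bag-level) supervision, which is typically handled with attention-based MIL pooling. The sketch below shows only that generic pattern (gated attention in the style of Ilse et al., 2018), not the cited paper's concept-based method; the module names and dimensions are assumptions.

```python
import torch
import torch.nn as nn


class AttentionMILPooling(nn.Module):
    """Generic gated-attention MIL pooling: N patch features of one slide
    are weighted, summed into a single bag embedding, and classified with
    only a slide-level label. The patch attention weights double as a
    coarse explanation of which regions drove the prediction."""

    def __init__(self, dim=512, hidden=128, n_classes=2):
        super().__init__()
        self.attn_V = nn.Linear(dim, hidden)
        self.attn_U = nn.Linear(dim, hidden)
        self.attn_w = nn.Linear(hidden, 1)
        self.classifier = nn.Linear(dim, n_classes)

    def forward(self, patch_feats):
        # patch_feats: (N, dim) features for the N patches of one slide (the "bag").
        gate = torch.tanh(self.attn_V(patch_feats)) * torch.sigmoid(self.attn_U(patch_feats))
        attn = torch.softmax(self.attn_w(gate), dim=0)          # (N, 1) attention over patches
        slide_feat = (attn * patch_feats).sum(dim=0)            # (dim,) bag embedding
        return self.classifier(slide_feat), attn.squeeze(-1)    # slide logits + patch weights
```

A full WSI pipeline would additionally tile the gigapixel slide and extract patch features with a frozen encoder before applying such a head.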
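Similarly, the Robust and Interpretable Medical Image Classifiers entry describes the common concept-bottleneck recipe: score image features against text embeddings of clinical concepts, then classify from the concept scores alone. The sketch below shows that general pattern with placeholder encoders and concepts; the cited paper obtains its concepts from GPT-4 and uses a specific vision-language model, so the names and shapes here are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ConceptBottleneckClassifier(nn.Module):
    """Concept-bottleneck pattern: only image-to-concept similarity scores
    reach the final linear classifier, so predictions can be explained and
    edited at the concept level."""

    def __init__(self, concept_text_embeds, n_classes):
        super().__init__()
        # (C, D) frozen text embeddings, one per human-readable clinical concept.
        self.register_buffer("concepts", F.normalize(concept_text_embeds, dim=-1))
        self.head = nn.Linear(concept_text_embeds.shape[0], n_classes)

    def forward(self, image_embeds):
        # image_embeds: (B, D) from a frozen vision-language image encoder.
        x = F.normalize(image_embeds, dim=-1)
        concept_scores = x @ self.concepts.t()                  # (B, C) interpretable bottleneck
        return self.head(concept_scores), concept_scores
```

Interventions then amount to editing entries of concept_scores before the final linear layer, which is what makes such models human-interactable.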
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and accepts no responsibility for any consequences of its use.