Concept Bottleneck with Visual Concept Filtering for Explainable Medical
Image Classification
- URL: http://arxiv.org/abs/2308.11920v1
- Date: Wed, 23 Aug 2023 05:04:01 GMT
- Title: Concept Bottleneck with Visual Concept Filtering for Explainable Medical Image Classification
- Authors: Injae Kim, Jongha Kim, Joonmyung Choi, Hyunwoo J. Kim
- Abstract summary: Concept Bottleneck Models (CBMs) enable interpretable image classification by utilizing human-understandable concepts as intermediate targets.
We propose a visual activation score that measures whether the concept contains visual cues or not.
Computed visual activation scores are then used to filter out the less visible concepts, thus resulting in a final concept set with visually meaningful concepts.
- Score: 16.849592713393896
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Interpretability is a crucial factor in building reliable models for various
medical applications. Concept Bottleneck Models (CBMs) enable interpretable
image classification by utilizing human-understandable concepts as intermediate
targets. Unlike conventional methods that require extensive human labor to
construct the concept set, recent works leveraging Large Language Models (LLMs)
for generating concepts made automatic concept generation possible. However,
those methods do not consider whether a concept is visually relevant or not,
which is an important factor in computing meaningful concept scores. Therefore,
we propose a visual activation score that measures whether the concept contains
visual cues or not, which can be easily computed with unlabeled image data.
Computed visual activation scores are then used to filter out the less visible
concepts, thus resulting in a final concept set with visually meaningful
concepts. Our experimental results show that adopting the proposed visual
activation score for concept filtering consistently boosts performance compared
to the baseline. Moreover, qualitative analyses also validate that visually
relevant concepts are successfully selected with the visual activation score.
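The filtering idea can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes concept and image embeddings come from a vision-language model such as CLIP (simulated here with random vectors), and it takes the visual activation score to be a concept's strongest cosine similarity over an unlabeled image pool; the paper's exact scoring function may differ.

```python
import numpy as np

def visual_activation_scores(concept_embs, image_embs):
    """Score each concept by its strongest cosine similarity to any
    unlabeled image embedding (a stand-in for the paper's visual
    activation score; the exact definition is an assumption here)."""
    c = concept_embs / np.linalg.norm(concept_embs, axis=1, keepdims=True)
    v = image_embs / np.linalg.norm(image_embs, axis=1, keepdims=True)
    sims = c @ v.T                # (n_concepts, n_images)
    return sims.max(axis=1)       # best-matching image per concept

def filter_concepts(concepts, scores, keep_ratio=0.5):
    """Keep the top fraction of concepts by visual activation score."""
    k = max(1, int(len(concepts) * keep_ratio))
    keep = np.argsort(scores)[::-1][:k]
    return [concepts[i] for i in sorted(keep)]

# Toy example: random vectors stand in for real CLIP features.
rng = np.random.default_rng(0)
concepts = ["irregular border", "dark pigmentation", "benign", "asymmetry"]
concept_embs = rng.normal(size=(len(concepts), 64))
image_embs = rng.normal(size=(100, 64))   # unlabeled image pool

scores = visual_activation_scores(concept_embs, image_embs)
kept = filter_concepts(concepts, scores, keep_ratio=0.5)
print(kept)  # the two concepts with the highest visual activation scores
```

The filtered set `kept` would then replace the full LLM-generated concept set as the bottleneck layer's targets; only the thresholding step requires unlabeled images, matching the abstract's claim that the score is cheap to compute.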
Related papers
- Concept Complement Bottleneck Model for Interpretable Medical Image Diagnosis [8.252227380729188]
We propose a concept complement bottleneck model for interpretable medical image diagnosis.
We propose to use concept adapters for specific concepts to mine the concept differences and score concepts in their own attention channels.
Our model outperforms the state-of-the-art competitors in concept detection and disease diagnosis tasks.
arXiv (2024-10-20)
- CusConcept: Customized Visual Concept Decomposition with Diffusion Models [13.95568624067449]
We propose a two-stage framework, CusConcept, to extract customized visual concept embedding vectors.
In the first stage, CusConcept employs a vocabularies-guided concept decomposition mechanism.
In the second stage, joint concept refinement is performed to enhance the fidelity and quality of generated images.
arXiv (2024-10-01)
- Towards Compositionality in Concept Learning [20.960438848942445]
We show that existing unsupervised concept extraction methods find concepts which are not compositional.
We propose Compositional Concept Extraction (CCE) for finding concepts which obey these properties.
CCE finds more compositional concept representations than baselines and yields better accuracy on four downstream classification tasks.
arXiv (2024-06-26)
- Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models [57.86303579812877]
Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions.
Existing approaches often require numerous human interventions per image to achieve strong performances.
We introduce a trainable concept realignment intervention module, which leverages concept relations to realign concept assignments post-intervention.
arXiv (2024-05-02)
- Incremental Residual Concept Bottleneck Models [29.388549499546556]
Concept Bottleneck Models (CBMs) map the black-box visual representations extracted by deep neural networks onto a set of interpretable concepts.
We propose the Incremental Residual Concept Bottleneck Model (Res-CBM) to address the challenge of concept completeness.
Our approach can be applied to any user-defined concept bank, as a post-hoc processing method to enhance the performance of any CBMs.
arXiv (2024-04-13)
- Separable Multi-Concept Erasure from Diffusion Models [52.51972530398691]
We propose a Separable Multi-concept Eraser (SepME) to eliminate unsafe concepts from large-scale diffusion models.
SepME separates optimizable model weights, making each weight increment correspond to a specific concept erasure.
Extensive experiments indicate the efficacy of our approach in eliminating concepts, preserving model performance, and offering flexibility in the erasure or recovery of various concepts.
arXiv (2024-02-03)
- Advancing Ante-Hoc Explainable Models through Generative Adversarial Networks [24.45212348373868]
This paper presents a novel concept learning framework for enhancing model interpretability and performance in visual classification tasks.
Our approach appends an unsupervised explanation generator to the primary classifier network and makes use of adversarial training.
This work presents a significant step towards building inherently interpretable deep vision models with task-aligned concept representations.
arXiv Detail & Related papers (2024-01-09T16:16:16Z) - ConceptBed: Evaluating Concept Learning Abilities of Text-to-Image
Diffusion Models [79.10890337599166]
We introduce ConceptBed, a large-scale dataset that consists of 284 unique visual concepts and 33K composite text prompts.
We evaluate visual concepts that are either objects, attributes, or styles, and also evaluate four dimensions of compositionality: counting, attributes, relations, and actions.
Our results point to a trade-off between learning the concepts and preserving the compositionality which existing approaches struggle to overcome.
arXiv Detail & Related papers (2023-06-07T18:00:38Z) - Concept Gradient: Concept-based Interpretation Without Linear Assumption [77.96338722483226]
Concept Activation Vector (CAV) relies on learning a linear relation between some latent representation of a given model and concepts.
We proposed Concept Gradient (CG), extending concept-based interpretation beyond linear concept functions.
We demonstrated CG outperforms CAV in both toy examples and real world datasets.
arXiv Detail & Related papers (2022-08-31T17:06:46Z) - Visual Concepts Tokenization [65.61987357146997]
We propose an unsupervised transformer-based Visual Concepts Tokenization framework, dubbed VCT, to perceive an image into a set of disentangled visual concept tokens.
To obtain these concept tokens, we only use cross-attention to extract visual information from the image tokens layer by layer without self-attention between concept tokens.
We further propose a Concept Disentangling Loss to facilitate that different concept tokens represent independent visual concepts.
arXiv Detail & Related papers (2022-05-20T11:25:31Z) - Interpretable Visual Reasoning via Induced Symbolic Space [75.95241948390472]
We study the problem of concept induction in visual reasoning, i.e., identifying concepts and their hierarchical relationships from question-answer pairs associated with images.
We first design a new framework named object-centric compositional attention model (OCCAM) to perform the visual reasoning task with object-level visual features.
We then come up with a method to induce concepts of objects and relations using clues from the attention patterns between objects' visual features and question words.
arXiv (2020-11-23)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.