Locality-aware Concept Bottleneck Model
- URL: http://arxiv.org/abs/2508.14562v1
- Date: Wed, 20 Aug 2025 09:33:48 GMT
- Title: Locality-aware Concept Bottleneck Model
- Authors: Sujin Jeon, Hyundo Lee, Eungseo Kim, Sanghack Lee, Byoung-Tak Zhang, Inwoo Hwang
- Abstract summary: Concept bottleneck models (CBMs) are inherently interpretable models that make predictions based on human-understandable visual cues. We propose a framework, coined Locality-aware Concept Bottleneck Model (LCBM), which adopts prototype learning to ensure accurate spatial localization.
- Score: 25.42941510314879
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Concept bottleneck models (CBMs) are inherently interpretable models that make predictions based on human-understandable visual cues, referred to as concepts. As obtaining dense concept annotations with human labeling is demanding and costly, recent approaches utilize foundation models to determine which concepts exist in the images. However, such label-free CBMs often fail to localize concepts in relevant regions, attending to visually unrelated regions when predicting concept presence. To this end, we propose a framework, coined Locality-aware Concept Bottleneck Model (LCBM), which utilizes rich information from foundation models and adopts prototype learning to ensure accurate spatial localization of the concepts. Specifically, we assign one prototype to each concept, which is encouraged to represent a prototypical image feature of that concept. These prototypes are learned by encouraging them to encode similar local regions, leveraging foundation models to ensure the relevance of each prototype to its associated concept. We then use the prototypes to facilitate learning to identify the proper local region from which each concept should be predicted. Experimental results demonstrate that LCBM effectively identifies present concepts in the images and exhibits improved localization while maintaining comparable classification performance.
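The prototype mechanism described above lends itself to a compact sketch. The following is a minimal, hypothetical illustration (not the authors' released code): one learnable prototype per concept scores every local patch feature, concept presence is read off the best-matching patch (whose index doubles as the localization), and an interpretable linear layer maps concept scores to class logits. All names and shapes are assumptions.

```python
# Minimal, hypothetical sketch of LCBM-style prototype-based concept scoring.
# Names, shapes, and the scoring rule are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeConceptHead(nn.Module):
    def __init__(self, n_concepts: int, feat_dim: int, n_classes: int):
        super().__init__()
        # One learnable prototype per concept.
        self.prototypes = nn.Parameter(torch.randn(n_concepts, feat_dim))
        # Interpretable linear map from concept scores to class logits.
        self.classifier = nn.Linear(n_concepts, n_classes)

    def forward(self, patch_feats: torch.Tensor):
        # patch_feats: (B, P, D) local features from a frozen backbone.
        f = F.normalize(patch_feats, dim=-1)           # (B, P, D)
        p = F.normalize(self.prototypes, dim=-1)       # (C, D)
        sim = torch.einsum("bpd,cd->bcp", f, p)        # (B, C, P) patch-concept similarity
        # Concept presence is read off the best-matching patch; the argmax
        # index localizes where the concept was found.
        concept_scores, peak_patch = sim.max(dim=-1)   # both (B, C)
        logits = self.classifier(concept_scores)
        return logits, concept_scores, peak_patch
```

In the full method, an additional loss would pull each prototype toward local regions that a foundation model deems relevant to its concept; that supervision is omitted from this sketch.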
Related papers
- A Geometric Unification of Concept Learning with Concept Cones [58.70836885177496]
Two traditions of interpretability have evolved side by side but have seldom spoken to each other: Concept Bottleneck Models (CBMs) and Sparse Autoencoders (SAEs). We show that both paradigms instantiate the same geometric structure: CBMs provide human-defined reference geometries, while SAEs can be evaluated by how well their learned cones approximate or contain those of CBMs (a toy containment check is sketched after this entry).
arXiv Detail & Related papers (2025-12-08T09:51:46Z)
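The cone view suggests a simple numerical test. Below is a hypothetical sketch, not from the paper: it checks whether a CBM concept direction lies (approximately) in the conic hull of a set of SAE decoder directions by solving a non-negative least-squares problem.

```python
# Hypothetical check, not from the paper: is a CBM concept direction inside
# the conic hull of SAE decoder directions? A near-zero residual means the
# direction is (nearly) a non-negative combination of the SAE directions,
# i.e., inside their cone.
import numpy as np
from scipy.optimize import nnls

def cone_residual(cbm_dir: np.ndarray, sae_dirs: np.ndarray) -> float:
    # cbm_dir: (D,) concept direction; sae_dirs: (K, D) cone generators.
    coeffs, resid = nnls(sae_dirs.T, cbm_dir)  # min ||A x - b|| s.t. x >= 0
    return resid / np.linalg.norm(cbm_dir)
```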
- FaCT: Faithful Concept Traces for Explaining Neural Network Decisions [56.796533084868884]
Deep networks have shown remarkable performance across a wide range of tasks, yet obtaining a global, concept-level understanding of how they function remains a key challenge. We put emphasis on the faithfulness of concept-based explanations and propose a new model with model-inherent mechanistic concept explanations. Our concepts are shared across classes and, from any layer, their contribution to the logit and their input visualization can be faithfully traced (a minimal decomposition sketch follows this entry).
arXiv Detail & Related papers (2025-10-29T13:35:46Z)
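When a class logit is linear in concept activations, per-concept contributions can be traced exactly, which is the flavor of faithfulness this line of work targets. The sketch below is an illustrative simplification, not FaCT's actual architecture; all names are assumptions.

```python
# Illustrative simplification, not FaCT's architecture: when the class logit
# is linear in concept activations, each concept's additive contribution is
# exactly weight * activation, so the trace is faithful by construction.
import torch

def logit_contributions(concept_acts, class_weights, class_bias, target_class):
    # concept_acts: (B, C); class_weights: (n_classes, C); class_bias: (n_classes,)
    contrib = concept_acts * class_weights[target_class]    # (B, C) per-concept terms
    logit = contrib.sum(dim=-1) + class_bias[target_class]  # terms sum back to the logit
    return contrib, logit

# Toy usage with random values.
w, b, a = torch.randn(10, 5), torch.zeros(10), torch.rand(2, 5)
per_concept, logit = logit_contributions(a, w, b, target_class=3)
```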
- CE-FAM: Concept-Based Explanation via Fusion of Activation Maps [29.496537151017616]
We propose a novel concept-based explanation method, Concept-based Explanation via Fusion of Activation Maps (CE-FAM). Our method provides a general framework for identifying concept regions and their contributions while leveraging VLM knowledge to handle arbitrary concepts without requiring an annotated dataset (a rough fusion sketch follows this entry).
arXiv Detail & Related papers (2025-09-28T12:40:53Z)
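A rough sketch of the fusion idea, under stated assumptions: each channel's activation map is weighted by a channel-to-concept relevance score (e.g., derived from a VLM) and summed into one concept heatmap. The weighting scheme here is an assumption for illustration, not CE-FAM's exact rule.

```python
# Assumed fusion rule for illustration: weight each channel map by its
# concept-relevance score and sum into a single normalized heatmap.
import torch

def fused_concept_map(act_maps: torch.Tensor, channel_scores: torch.Tensor):
    # act_maps: (K, H, W) channel activation maps; channel_scores: (K,)
    w = torch.relu(channel_scores)               # keep positively aligned channels
    heat = torch.einsum("khw,k->hw", act_maps, w)
    heat = heat - heat.min()
    return heat / heat.max().clamp(min=1e-8)     # normalize to [0, 1]
```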
- Show and Tell: Visually Explainable Deep Neural Nets via Spatially-Aware Concept Bottleneck Models [5.985204759362746]
We present a unified framework for transforming any vision neural network into a spatially and conceptually interpretable model. We name this method "Spatially-Aware and Label-Free Concept Bottleneck Model" (SALF-CBM).
arXiv Detail & Related papers (2025-02-27T14:27:55Z)
- Concept-Based Explainable Artificial Intelligence: Metrics and Benchmarks [0.0]
Concept-based explanation methods aim to improve the interpretability of machine learning models. We propose three metrics: the concept global importance metric, the concept existence metric, and the concept location metric. We demonstrate that, in many cases, even the most important concepts determined by post-hoc CBMs are not present in the input images (a toy location-metric sketch follows this entry).
arXiv Detail & Related papers (2025-01-31T16:32:36Z)
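A location-style metric can be made concrete with a toy definition (the paper's exact formulation may differ): the fraction of a concept's top-k most salient pixels that fall inside the concept's annotated region.

```python
# Toy location-style metric, an assumed simplification of the paper's version.
import numpy as np

def location_score(saliency: np.ndarray, gt_mask: np.ndarray, k: int = 100) -> float:
    # saliency: (H, W) concept saliency map; gt_mask: (H, W) boolean region.
    flat = saliency.ravel()
    topk = np.argpartition(flat, -k)[-k:]        # indices of the k most salient pixels
    return float(gt_mask.ravel()[topk].mean())   # hit rate inside the region
```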
- OmniPrism: Learning Disentangled Visual Concept for Image Generation [57.21097864811521]
Creative visual concept generation often draws inspiration from specific concepts in a reference image to produce relevant outcomes. We propose OmniPrism, a visual concept disentangling approach for creative image generation. Our method learns disentangled concept representations guided by natural language and trains a diffusion model to incorporate these concepts.
arXiv Detail & Related papers (2024-12-16T18:59:52Z)
- Concept Replacer: Replacing Sensitive Concepts in Diffusion Models via Precision Localization [48.20360860166279]
Large-scale diffusion models produce high-quality images but often generate unwanted content, such as sexually explicit or violent material. We propose a novel approach for targeted concept replacement in diffusion models, enabling specific concepts to be removed without affecting non-target areas. Our method introduces a dedicated concept localizer that precisely identifies the target concept during the denoising process, trained with few-shot learning to require minimal labeled data. Within the identified region, we introduce a training-free Dual Prompts Cross-Attention (DPCA) module to substitute the target concept, ensuring minimal disruption to surrounding content.
arXiv Detail & Related papers (2024-12-02T08:05:39Z)
- TIDE: Training Locally Interpretable Domain Generalization Models Enables Test-time Correction [14.396966854171273]
We consider the problem of single-source domain generalization. Existing methods typically rely on extensive augmentations to synthetically cover diverse domains during training. We instead propose an approach that compels models to leverage local concepts during prediction.
arXiv Detail & Related papers (2024-11-25T08:46:37Z)
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task and uses concepts already known to the model (a rough discover-then-name sketch follows this entry).
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
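The "name" half of discover-then-name admits a small sketch, hypothetical and not the authors' code: directions discovered on frozen features (e.g., by a sparse autoencoder) are each labeled with the vocabulary word whose text embedding is most similar. The matching rule is an assumption for illustration.

```python
# Hypothetical naming step: label each learned direction with its nearest
# vocabulary word in a shared embedding space.
import torch
import torch.nn.functional as F

def name_directions(decoder_dirs, vocab_embeds, vocab):
    # decoder_dirs: (K, D) learned dictionary atoms; vocab_embeds: (V, D)
    d = F.normalize(decoder_dirs, dim=-1)
    t = F.normalize(vocab_embeds, dim=-1)
    best = (d @ t.T).argmax(dim=-1)              # nearest vocabulary word per atom
    return [vocab[i] for i in best.tolist()]
```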
- Improving Intervention Efficacy via Concept Realignment in Concept Bottleneck Models [57.86303579812877]
Concept Bottleneck Models (CBMs) ground image classification on human-understandable concepts to allow for interpretable model decisions.
Existing approaches often require numerous human interventions per image to achieve strong performance.
We introduce a trainable concept realignment intervention module, which leverages concept relations to realign concept assignments post-intervention (a toy realignment sketch follows this entry).
arXiv Detail & Related papers (2024-05-02T17:59:01Z)
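The realignment idea can be sketched with an assumed update rule, purely for illustration: after a human corrects one concept, correlated concepts are nudged via a learned relation matrix.

```python
# Toy realignment under an assumed linear update rule, not the paper's module.
import torch

def realign(concepts: torch.Tensor, relation: torch.Tensor,
            idx: int, corrected: float) -> torch.Tensor:
    # concepts: (C,) predicted scores; relation: (C, C) learned concept relations.
    delta = corrected - concepts[idx]
    out = concepts + relation[:, idx] * delta    # propagate the correction
    out[idx] = corrected                         # the intervened concept is pinned
    return out
```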
- On the Concept Trustworthiness in Concept Bottleneck Models [39.928868605678744]
Concept Bottleneck Models (CBMs) break down the reasoning process into the input-to-concept mapping and the concept-to-label prediction.
Despite the transparency of the concept-to-label prediction, the mapping from the input to the intermediate concept remains a black box.
A pioneering metric, referred to as the concept trustworthiness score, is proposed to gauge whether concepts are derived from relevant regions.
An enhanced CBM is introduced, enabling concept predictions to be made specifically from distinct parts of the feature map (a toy trustworthiness check follows this entry).
arXiv Detail & Related papers (2024-03-21T12:24:53Z)
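A toy version of a trustworthiness-style check (simplified from the paper's metric; details here are assumptions): does the feature-map location that most strongly drives a concept fall inside the ground-truth part region that concept refers to?

```python
# Toy trustworthiness-style check, an assumed simplification.
import numpy as np

def trustworthy(concept_map: np.ndarray, part_mask: np.ndarray) -> bool:
    # concept_map: (H, W) per-location evidence; part_mask: (H, W) boolean region.
    peak = np.unravel_index(np.argmax(concept_map), concept_map.shape)
    return bool(part_mask[peak])
```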
- ConcEPT: Concept-Enhanced Pre-Training for Language Models [57.778895980999124]
ConcEPT aims to infuse conceptual knowledge into pre-trained language models.
It exploits external entity concept prediction, predicting the concepts of entities mentioned in the pre-training contexts.
Experimental results show that ConcEPT gains improved conceptual knowledge through concept-enhanced pre-training.
arXiv Detail & Related papers (2024-01-11T05:05:01Z)
- Do Concept Bottleneck Models Respect Localities? [14.77558378567965]
Concept-based explainability methods use human-understandable intermediaries to produce explanations for machine learning models. We assess whether concept predictors leverage "relevant" features to make predictions, a property we call locality. We find that many concept-based models used in practice fail to respect localities because concept predictors cannot always clearly distinguish distinct concepts (a simple locality probe is sketched after this entry).
arXiv Detail & Related papers (2024-01-02T16:05:23Z)
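A locality property like this can be probed directly. The sketch below is assumption-laden and not from the paper: perturb pixels outside a concept's relevant region and measure how far the concept prediction moves; a predictor that respects locality should be nearly invariant. The model is assumed to map an image to per-concept scores of shape (1, C).

```python
# Simple locality probe; masking scheme and noise scale are assumptions.
import torch

@torch.no_grad()
def locality_gap(model, image: torch.Tensor, region_mask: torch.Tensor,
                 concept_idx: int, noise_scale: float = 0.5) -> float:
    # image: (1, 3, H, W); region_mask: (1, 1, H, W), 1 inside the relevant region.
    noise = noise_scale * torch.randn_like(image)
    perturbed = image + noise * (1 - region_mask)  # touch only irrelevant pixels
    before = model(image)[:, concept_idx]
    after = model(perturbed)[:, concept_idx]
    return (before - after).abs().item()
```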