Explain Any Concept: Segment Anything Meets Concept-Based Explanation
- URL: http://arxiv.org/abs/2305.10289v1
- Date: Wed, 17 May 2023 15:26:51 GMT
- Title: Explain Any Concept: Segment Anything Meets Concept-Based Explanation
- Authors: Ao Sun, Pingchuan Ma, Yuanyuan Yuan, Shuai Wang
- Abstract summary: Segment Anything Model (SAM) has been demonstrated as a powerful framework for performing precise and comprehensive instance segmentation.
We offer an effective and flexible concept-based explanation method, namely Explain Any Concept (EAC).
We thus propose a lightweight per-input equivalent (PIE) scheme, enabling efficient explanation with a surrogate model.
- Score: 11.433807960637685
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Explainable AI (XAI) is an essential topic for improving human understanding of
deep neural networks (DNNs) given their black-box internals. For computer
vision tasks, mainstream pixel-based XAI methods explain DNN decisions by
identifying important pixels, and emerging concept-based XAI methods explore forming
explanations with concepts (e.g., a head in an image). However, pixels are
generally hard to interpret and sensitive to the imprecision of XAI methods,
whereas "concepts" in prior works require human annotation or are limited to
pre-defined concept sets. On the other hand, driven by large-scale
pre-training, Segment Anything Model (SAM) has been demonstrated as a powerful
and promptable framework for performing precise and comprehensive instance
segmentation, enabling automatic preparation of concept sets from a given
image. This paper for the first time explores using SAM to augment
concept-based XAI. We offer an effective and flexible concept-based explanation
method, namely Explain Any Concept (EAC), which explains DNN decisions with any
concept. While SAM is highly effective and offers "out-of-the-box" instance
segmentation, it is costly when integrated into de facto XAI pipelines. We
thus propose a lightweight per-input equivalent (PIE) scheme, enabling
efficient explanation with a surrogate model. Our evaluation over two popular
datasets (ImageNet and COCO) illustrates the highly encouraging performance of
EAC over commonly-used XAI methods.
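The pipeline the abstract describes can be summarized in a short sketch: SAM's automatic mask generator proposes concept regions, and a per-concept Shapley value for the prediction is then estimated by querying the target model on subsets of concepts. The sketch below is illustrative only; it uses the public segment_anything package and a plain Monte-Carlo Shapley estimator, and it omits the PIE surrogate the paper introduces to avoid the many forward passes. Function names and the mean-color baseline are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): SAM-derived concepts plus
# Monte-Carlo Shapley attribution over those concepts for one prediction.
import numpy as np
import torch
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator

def sam_concepts(image_rgb, checkpoint="sam_vit_b.pth"):
    """Segment an image into candidate concept masks with SAM's automatic mask generator."""
    sam = sam_model_registry["vit_b"](checkpoint=checkpoint)
    masks = SamAutomaticMaskGenerator(sam).generate(image_rgb)  # expects an HxWx3 uint8 array
    return [m["segmentation"] for m in masks]                   # boolean HxW masks

def shapley_concept_attribution(model, image_rgb, masks, target, n_samples=64):
    """Monte-Carlo estimate of each concept's Shapley value for the target class score.
    'Absent' concepts are replaced by the image's mean colour (the baseline is an assumption)."""
    baseline = image_rgb.mean(axis=(0, 1), keepdims=True)
    n = len(masks)
    values = np.zeros(n)

    def predict(keep):
        # Forward pass with only the kept concepts visible; proper preprocessing
        # for the target model (resizing, normalisation) is omitted for brevity.
        if keep:
            visible = np.any([masks[i] for i in keep], axis=0)[..., None]
            x = np.where(visible, image_rgb, baseline)
        else:
            x = np.broadcast_to(baseline, image_rgb.shape)
        t = torch.from_numpy(np.ascontiguousarray(x)).permute(2, 0, 1).float().unsqueeze(0) / 255.0
        with torch.no_grad():
            return torch.softmax(model(t), dim=1)[0, target].item()

    for _ in range(n_samples):          # average marginal contributions over random orderings
        order = np.random.permutation(n)
        kept, prev = [], predict([])
        for i in order:
            kept.append(i)
            cur = predict(kept)
            values[i] += cur - prev     # marginal contribution of concept i
            prev = cur
    return values / n_samples
```

In the actual EAC pipeline, most of these target-model queries would be replaced by the lightweight per-input surrogate trained under the PIE scheme.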
Related papers
- Explainable Concept Generation through Vision-Language Preference Learning [7.736445799116692]
Concept-based explanations have become a popular choice for explaining deep neural networks post-hoc.
We devise a reinforcement learning-based preference optimization algorithm that fine-tunes the vision-language generative model.
In addition to showing the efficacy and reliability of our method, we show how our method can be used as a diagnostic tool for analyzing neural networks.
arXiv Detail & Related papers (2024-08-24T02:26:42Z)
- Discover-then-Name: Task-Agnostic Concept Bottlenecks via Automated Concept Discovery [52.498055901649025]
Concept Bottleneck Models (CBMs) have been proposed to address the 'black-box' problem of deep neural networks.
We propose a novel CBM approach -- called Discover-then-Name-CBM (DN-CBM) -- that inverts the typical paradigm.
Our concept extraction strategy is efficient, since it is agnostic to the downstream task, and uses concepts already known to the model.
arXiv Detail & Related papers (2024-07-19T17:50:11Z)
- Improving the Explain-Any-Concept by Introducing Nonlinearity to the Trainable Surrogate Model [4.6040036610482655]
The Explain Any Concept (EAC) model is a flexible method for explaining DNN decisions.
EAC relies on a surrogate model with a single trainable linear layer to simulate the target model.
We show that introducing an additional nonlinear layer into this surrogate improves the performance of EAC (a minimal sketch follows this entry).
arXiv Detail & Related papers (2024-05-20T07:25:09Z)
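A hedged sketch of the surrogate idea discussed in the entry above: a small PyTorch model maps a binary "which concepts are kept" vector to the target model's class scores, with a purely linear variant (as in EAC) and a variant with one added nonlinearity (as in the follow-up work). The hidden size, the ReLU choice, and the MSE fitting loop are illustrative assumptions, not either paper's exact setup.

```python
# Hedged sketch of a per-input surrogate over concept-presence vectors.
import torch
import torch.nn as nn

class LinearSurrogate(nn.Module):
    """EAC-style surrogate: a single trainable linear layer over concept masks."""
    def __init__(self, n_concepts, n_classes):
        super().__init__()
        self.fc = nn.Linear(n_concepts, n_classes)

    def forward(self, concept_mask):           # concept_mask: (batch, n_concepts) in {0, 1}
        return self.fc(concept_mask)

class NonlinearSurrogate(nn.Module):
    """Variant with one extra hidden nonlinearity, as proposed in the follow-up work."""
    def __init__(self, n_concepts, n_classes, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_concepts, hidden),
            nn.ReLU(),
            nn.Linear(hidden, n_classes),
        )

    def forward(self, concept_mask):
        return self.net(concept_mask)

def fit_surrogate(surrogate, subset_masks, target_outputs, epochs=200, lr=1e-2):
    """Per-input fitting: regress the surrogate onto the target model's outputs
    for sampled concept subsets, then reuse it for cheap Shapley estimation."""
    opt = torch.optim.Adam(surrogate.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = nn.functional.mse_loss(surrogate(subset_masks), target_outputs)
        loss.backward()
        opt.step()
    return surrogate
```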
- CoProNN: Concept-based Prototypical Nearest Neighbors for Explaining Vision Models [1.0855602842179624]
We present a novel approach that enables domain experts to quickly and intuitively create concept-based explanations for computer vision tasks via natural language.
The modular design of CoProNN is simple to implement; it is straightforward to adapt to novel tasks and allows the classification and text-to-image models to be replaced.
We show that our strategy competes very well with other concept-based XAI approaches on coarse-grained image classification tasks and may even outperform them on more demanding fine-grained tasks.
arXiv Detail & Related papers (2024-04-23T08:32:38Z)
- Understanding Multimodal Deep Neural Networks: A Concept Selection View [29.08342307127578]
Concept-based models map the black-box visual representations extracted by deep neural networks onto a set of human-understandable concepts.
We propose a two-stage Concept Selection Model (CSM) to mine core concepts without introducing any human priors.
Our approach achieves comparable performance to end-to-end black-box models.
arXiv Detail & Related papers (2024-04-13T11:06:49Z)
- Visual Concept-driven Image Generation with Text-to-Image Diffusion Model [65.96212844602866]
Text-to-image (TTI) models have demonstrated impressive results in generating high-resolution images of complex scenes.
Recent approaches have extended these methods with personalization techniques that allow them to integrate user-illustrated concepts.
However, the ability to generate images with multiple interacting concepts, such as human subjects, as well as concepts that may be entangled in one or across multiple image illustrations, remains elusive.
We propose a concept-driven TTI personalization framework that addresses these core challenges.
arXiv Detail & Related papers (2024-02-18T07:28:37Z)
- CEIR: Concept-based Explainable Image Representation Learning [0.4198865250277024]
We introduce Concept-based Explainable Image Representation (CEIR) to derive high-quality representations without label dependency.
Our method exhibits state-of-the-art unsupervised clustering performance on benchmarks such as CIFAR10, CIFAR100, and STL10.
CEIR can seamlessly extract related concepts from open-world images without fine-tuning.
arXiv Detail & Related papers (2023-12-17T15:37:41Z)
- Shap-CAM: Visual Explanations for Convolutional Neural Networks based on Shapley Value [86.69600830581912]
We develop a novel visual explanation method called Shap-CAM based on class activation mapping.
We demonstrate that Shap-CAM achieves better visual performance and fairness for interpreting the decision-making process.
arXiv Detail & Related papers (2022-08-07T00:59:23Z)
- Visual Concepts Tokenization [65.61987357146997]
We propose an unsupervised transformer-based Visual Concepts Tokenization framework, dubbed VCT, to perceive an image into a set of disentangled visual concept tokens.
To obtain these concept tokens, we use only cross-attention to extract visual information from the image tokens layer by layer, without self-attention between concept tokens.
We further propose a Concept Disentangling Loss to encourage different concept tokens to represent independent visual concepts (a minimal sketch follows this entry).
arXiv Detail & Related papers (2022-05-20T11:25:31Z)
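A minimal sketch of the cross-attention-only tokenizer described in the VCT entry above: learnable concept tokens repeatedly query image tokens via cross-attention, with no self-attention among the concept tokens. The decorrelation penalty shown is a generic stand-in for the paper's Concept Disentangling Loss; dimensions and layer counts are arbitrary assumptions.

```python
# Illustrative sketch, not the VCT authors' implementation.
import torch
import torch.nn as nn

class ConceptTokenizer(nn.Module):
    """Learnable concept tokens query image tokens with cross-attention only;
    concept tokens never attend to each other."""
    def __init__(self, n_concepts=10, dim=256, n_layers=4, n_heads=4):
        super().__init__()
        self.concepts = nn.Parameter(torch.randn(n_concepts, dim) * 0.02)
        self.cross_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, n_heads, batch_first=True) for _ in range(n_layers)]
        )
        self.mlps = nn.ModuleList(
            [nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, dim))
             for _ in range(n_layers)]
        )

    def forward(self, image_tokens):                         # image_tokens: (batch, n_patches, dim)
        b = image_tokens.size(0)
        tokens = self.concepts.unsqueeze(0).expand(b, -1, -1)
        for attn, mlp in zip(self.cross_attn, self.mlps):
            upd, _ = attn(tokens, image_tokens, image_tokens)  # queries are the concept tokens
            tokens = tokens + upd
            tokens = tokens + mlp(tokens)
        return tokens                                        # (batch, n_concepts, dim)

def decorrelation_loss(tokens):
    """Push different concept tokens apart (stand-in for the disentangling objective)."""
    t = nn.functional.normalize(tokens, dim=-1)
    sim = t @ t.transpose(1, 2)                              # pairwise cosine similarities
    off_diag = sim - torch.diag_embed(torch.diagonal(sim, dim1=1, dim2=2))
    return off_diag.abs().mean()
```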
- Human-Centered Concept Explanations for Neural Networks [47.71169918421306]
We introduce concept explanations, including the class of Concept Activation Vectors (CAVs).
We then discuss approaches to automatically extract concepts, and approaches to address some of their caveats.
Finally, we discuss some case studies that showcase the utility of such concept-based explanations in synthetic settings and real-world applications.
arXiv Detail & Related papers (2022-02-25T01:27:31Z)
- CX-ToM: Counterfactual Explanations with Theory-of-Mind for Enhancing Human Trust in Image Recognition Models [84.32751938563426]
We propose a new explainable AI (XAI) framework for explaining decisions made by a deep convolutional neural network (CNN).
In contrast to current XAI methods that generate explanations as a single-shot response, we pose explanation as an iterative communication process.
Our framework generates a sequence of explanations in a dialog by mediating the differences between the minds of the machine and the human user.
arXiv Detail & Related papers (2021-09-03T09:46:20Z)