Compositional Explanations for Image Classifiers
- URL: http://arxiv.org/abs/2103.03622v1
- Date: Fri, 5 Mar 2021 11:54:14 GMT
- Title: Compositional Explanations for Image Classifiers
- Authors: Hana Chockler, Daniel Kroening, Youcheng Sun
- Abstract summary: We present a novel, black-box algorithm for computing explanations that uses a principled approach based on causal theory.
We implement the method in the tool CET (Compositional Explanation Tool).
- Score: 18.24535957515688
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Existing algorithms for explaining the output of image classifiers perform
poorly on inputs where the object of interest is partially occluded. We present
a novel, black-box algorithm for computing explanations that uses a principled
approach based on causal theory. We implement the method in the tool CET
(Compositional Explanation Tool). Owing to the compositionality in its
algorithm, CET computes explanations that are much more accurate than those
generated by the existing explanation tools on images with occlusions and
delivers a level of performance comparable to the state of the art when
explaining images without occlusions.
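The abstract describes a black-box, occlusion-aware approach. As a rough illustration of the general masking paradigm such tools operate in, here is a minimal Python sketch; the grid partition, the `classify` callback, and the single-part occlusion ranking are hypothetical simplifications for illustration, not the compositional causal algorithm implemented in CET.

```python
# Minimal sketch of the black-box masking paradigm that occlusion-aware explanation
# tools build on: partition the image into parts, occlude them, and rank parts by
# how much their occlusion disturbs the classifier's decision. The grid partition,
# the classify callback, and the ranking below are illustrative placeholders only,
# not the compositional causal algorithm implemented in CET.
import numpy as np

def grid_partition(height, width, cells=8):
    """Label each pixel of a height x width image with one of cells*cells grid parts."""
    parts = np.zeros((height, width), dtype=int)
    ys = np.linspace(0, height, cells + 1, dtype=int)
    xs = np.linspace(0, width, cells + 1, dtype=int)
    label = 0
    for i in range(cells):
        for j in range(cells):
            parts[ys[i]:ys[i + 1], xs[j]:xs[j + 1]] = label
            label += 1
    return parts

def rank_parts(image, classify, baseline=0.0, cells=8):
    """
    image    : (H, W, C) float array
    classify : black-box function mapping an image to (label, confidence)
    Returns the grid labelling and the parts ranked by how much occluding each one,
    individually, hurts the classifier's confidence in its original prediction.
    """
    height, width, _ = image.shape
    parts = grid_partition(height, width, cells)
    original_label, original_conf = classify(image)
    drop = {}
    for p in np.unique(parts):
        occluded = image.copy()
        occluded[parts == p] = baseline            # mask out a single part
        label, conf = classify(occluded)
        # Occlusions that flip the label are ranked above any confidence drop.
        drop[p] = np.inf if label != original_label else original_conf - conf
    ranked = sorted(drop, key=drop.get, reverse=True)
    return parts, ranked
```

In practice, `classify` would be a thin wrapper around the model under explanation (for example, a softmax forward pass returning the top class and its probability), which is what keeps the procedure black-box.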
Related papers
- P-TAME: Explain Any Image Classifier with Trained Perturbations [14.31574090533474]
P-TAME (Perturbation-based Trainable Attention Mechanism for Explanations) is a model-agnostic method for explaining Deep Neural Networks (DNNs).
It generates high-resolution explanations in a single forward pass during inference.
We apply P-TAME to explain the decisions of VGG-16, ResNet-50, and ViT-B-16, three distinct and widely used image classifiers.
arXiv Detail & Related papers (2025-01-29T18:06:08Z)
- COMIX: Compositional Explanations using Prototypes [46.15031477955461]
We propose a method to align machine representations with human understanding.
The proposed method, named COMIX, classifies an image by decomposing it into regions based on learned concepts.
We show that our method provides faithful explanations and that its efficiency is competitive with other inherently interpretable architectures.
arXiv Detail & Related papers (2025-01-10T15:40:31Z)
- Causal Explanations for Image Classifiers [17.736724129275043]
We present a novel black-box approach to computing explanations grounded in the theory of actual causality.
We present an algorithm for computing approximate explanations based on these definitions.
We demonstrate that rex is the most efficient tool and produces the smallest explanations.
arXiv Detail & Related papers (2024-11-13T18:52:42Z)
- Finetuning CLIP to Reason about Pairwise Differences [52.028073305958074]
We propose an approach to train vision-language models such as CLIP in a contrastive manner to reason about differences in embedding space.
We first demonstrate that our approach yields significantly improved capabilities in ranking images by a certain attribute.
We also illustrate that the resulting embeddings satisfy more geometric properties in embedding space.
arXiv Detail & Related papers (2024-09-15T13:02:14Z)
- Taming CLIP for Fine-grained and Structured Visual Understanding of Museum Exhibits [59.66134971408414]
We aim to adapt CLIP for fine-grained and structured understanding of museum exhibits.
Our dataset is the first of its kind in the public domain.
The proposed method (MUZE) learns to map CLIP's image embeddings to the tabular structure by means of a transformer-based parsing network (parseNet).
arXiv Detail & Related papers (2024-09-03T08:13:06Z)
- Interpretable Network Visualizations: A Human-in-the-Loop Approach for Post-hoc Explainability of CNN-based Image Classification [5.087579454836169]
State-of-the-art explainability methods generate saliency maps to show where a specific class is identified.
We introduce a post-hoc method that explains the entire feature extraction process of a Convolutional Neural Network.
We also show an approach to generate global explanations by aggregating labels across multiple images.
arXiv Detail & Related papers (2024-05-06T09:21:35Z)
- Multiple Different Black Box Explanations for Image Classifiers [14.182742896993974]
We describe an algorithm and a tool, MultiReX, for computing multiple explanations of the output of a black-box image classifier for a given image.
Our algorithm uses a principled approach based on causal theory.
We show that MultiReX finds multiple explanations on 96% of the images in the ImageNet-mini benchmark, whereas previous work finds multiple explanations only on 11%.
arXiv Detail & Related papers (2023-09-25T17:28:28Z)
- No Token Left Behind: Explainability-Aided Image Classification and Generation [79.4957965474334]
We present a novel explainability-based approach, which adds a loss term to ensure that CLIP focuses on all relevant semantic parts of the input.
Our method yields an improvement in the recognition rate, without additional training or fine-tuning.
arXiv Detail & Related papers (2022-04-11T07:16:39Z)
- Compositional Sketch Search [91.84489055347585]
We present an algorithm for searching image collections using free-hand sketches.
We exploit drawings as a concise and intuitive representation for specifying entire scene compositions.
arXiv Detail & Related papers (2021-06-15T09:38:09Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach performs significantly better with respect to mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Efficient and Parallel Separable Dictionary Learning [2.6905021039717987]
We describe a highly parallelizable algorithm that learns such dictionaries.
We highlight the performance of the proposed method to sparsely represent image and hyperspectral data, and for image denoising.
arXiv Detail & Related papers (2020-07-07T21:46:32Z)
- Explainable Image Classification with Evidence Counterfactual [0.0]
We introduce SEDC as a model-agnostic instance-level explanation method for image classification.
For a given image, SEDC searches for a small set of segments that, when removed, alter the classification (a generic sketch of this style of search appears after this list).
We compare SEDC(-T) with popular feature importance methods such as LRP, LIME and SHAP, and we describe how the mentioned importance ranking issues are addressed.
arXiv Detail & Related papers (2020-04-16T08:02:48Z)
- Explain and Improve: LRP-Inference Fine-Tuning for Image Captioning Models [82.3793660091354]
This paper analyzes the predictions of image captioning models with attention mechanisms beyond visualizing the attention itself.
We develop variants of layer-wise relevance propagation (LRP) and gradient-based explanation methods, tailored to image captioning models with attention mechanisms.
arXiv Detail & Related papers (2020-01-04T05:15:11Z)
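As referenced in the SEDC entry above, here is a rough, hypothetical sketch of that style of counterfactual search: greedily grow a set of segments to occlude until the classifier's label flips. The `segments` labelling, the `classify` callback, and the greedy strategy are assumptions for illustration, not the published SEDC implementation.

```python
# Hypothetical greedy sketch of an SEDC-style counterfactual search: grow a set of
# segments to occlude until the classifier's predicted label changes. Illustration
# of the idea only, not the published SEDC algorithm.
import numpy as np

def counterfactual_segments(image, segments, classify, baseline=0.0, max_size=10):
    """
    image    : (H, W, C) float array
    segments : (H, W) integer array labelling each pixel with a segment id
    classify : black-box function mapping an image to (label, confidence)
    Returns a small set of segment ids whose joint removal flips the label,
    or None if no such set is found within max_size greedy steps.
    """
    original_label, _ = classify(image)
    selected = set()
    candidates = set(np.unique(segments))
    for _ in range(max_size):
        best, best_conf = None, np.inf
        for seg in candidates - selected:
            trial = image.copy()
            for s in selected | {seg}:
                trial[segments == s] = baseline   # occlude the chosen segments
            label, conf = classify(trial)
            if label != original_label:
                return selected | {seg}           # removal flips the class
            if conf < best_conf:                  # otherwise keep the segment that
                best, best_conf = seg, conf       # hurts confidence the most
        if best is None:
            break
        selected.add(best)
    return None
```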
This list is automatically generated from the titles and abstracts of the papers on this site.