Combining Counterfactuals With Shapley Values To Explain Image Models
- URL: http://arxiv.org/abs/2206.07087v1
- Date: Tue, 14 Jun 2022 18:23:58 GMT
- Title: Combining Counterfactuals With Shapley Values To Explain Image Models
- Authors: Aditya Lahiri, Kamran Alipour, Ehsan Adeli, Babak Salimi
- Abstract summary: We develop a pipeline to generate counterfactuals and estimate Shapley values.
We obtain contrastive and interpretable explanations with strong axiomatic guarantees.
- Score: 13.671174461441304
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the widespread use of sophisticated machine learning models in sensitive
applications, understanding their decision-making has become an essential task.
Models trained on tabular data have witnessed significant progress in
explanations of their underlying decision making processes by virtue of having
a small number of discrete features. However, applying these methods to
high-dimensional inputs such as images is not a trivial task. Images are
composed of pixels at an atomic level and do not carry any interpretability by
themselves. In this work, we seek to use annotated high-level interpretable
features of images to provide explanations. We leverage the Shapley value
framework from Game Theory, which has garnered wide acceptance in general XAI
problems. By developing a pipeline to generate counterfactuals and subsequently
using it to estimate Shapley values, we obtain contrastive and interpretable
explanations with strong axiomatic guarantees.
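As a rough illustration of the pipeline sketched in the abstract, the snippet below estimates Shapley values over a handful of annotated high-level features by sampling feature coalitions, synthesizing a counterfactual image for each coalition, and scoring it with the model. The `generate_counterfactual` and `model` callables are hypothetical placeholders, and the permutation-sampling estimator is a generic Monte Carlo scheme, not the authors' exact procedure.

```python
import random
from typing import Callable, Dict, List

def shapley_from_counterfactuals(
    image,
    features: Dict[str, int],           # observed high-level features, e.g. {"smiling": 1, "glasses": 0}
    baseline: Dict[str, int],           # reference values a feature takes when "absent" from a coalition
    model: Callable,                    # hypothetical black-box classifier: image -> scalar score
    generate_counterfactual: Callable,  # hypothetical generator: (image, feature assignment) -> counterfactual image
    n_samples: int = 200,
) -> Dict[str, float]:
    """Monte Carlo estimate of Shapley values phi_i over interpretable features.

    phi_i = E over random feature orderings of [v(S + {i}) - v(S)], where v(S)
    scores a counterfactual whose features in S keep their observed values and
    whose remaining features are set to the baseline.
    """
    names: List[str] = list(features)
    phi = {name: 0.0 for name in names}

    def value(coalition: set) -> float:
        assignment = {f: (features[f] if f in coalition else baseline[f]) for f in names}
        return float(model(generate_counterfactual(image, assignment)))

    for _ in range(n_samples):
        order = random.sample(names, len(names))  # one random permutation of the features
        coalition: set = set()
        prev = value(coalition)
        for f in order:
            coalition.add(f)                      # add feature f to the coalition
            curr = value(coalition)
            phi[f] += (curr - prev) / n_samples   # marginal contribution of f under this ordering
            prev = curr
    return phi
```

With a small feature set the exact Shapley sum over all coalitions is also feasible; the sampling version above simply keeps the number of counterfactual generations bounded.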
Related papers
- Enhancing Counterfactual Image Generation Using Mahalanobis Distance with Distribution Preferences in Feature Space [7.00851481261778]
In the realm of Artificial Intelligence (AI), the importance of Explainable Artificial Intelligence (XAI) is increasingly recognized.
One notable single-instance XAI approach is counterfactual explanation, which aids users in comprehending a model's decisions.
This paper introduces a novel method for computing feature importance within the feature space of a black-box model.
arXiv Detail & Related papers (2024-05-31T08:26:53Z) - Pink: Unveiling the Power of Referential Comprehension for Multi-modal LLMs [49.88461345825586]
This paper proposes a new framework to enhance the fine-grained image understanding abilities of MLLMs.
We present a new method for constructing the instruction tuning dataset at a low cost by leveraging annotations in existing datasets.
We show that our model exhibits a 5.2% accuracy improvement over Qwen-VL and surpasses the accuracy of Kosmos-2 by 24.7%.
arXiv Detail & Related papers (2023-10-01T05:53:15Z) - Interpreting Vision and Language Generative Models with Semantic Visual Priors [3.3772986620114374]
We develop a framework based on SHAP that allows for generating meaningful explanations leveraging the meaning representation of the output sequence as a whole.
We demonstrate that our method generates semantically more expressive explanations than traditional methods at a lower compute cost.
arXiv Detail & Related papers (2023-04-28T17:10:08Z) - Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z) - STEEX: Steering Counterfactual Explanations with Semantics [28.771471624014065]
Deep learning models are increasingly used in safety-critical applications.
For simple images, such as low-resolution face portraits, visual counterfactual explanations have recently been proposed.
We propose a new generative counterfactual explanation framework that produces plausible and sparse modifications.
arXiv Detail & Related papers (2021-11-17T13:20:29Z) - PixelPyramids: Exact Inference Models from Lossless Image Pyramids [58.949070311990916]
Pixel-Pyramids is a block-autoregressive approach with scale-specific representations to encode the joint distribution of image pixels.
It yields state-of-the-art results for density estimation on various image datasets, especially for high-resolution data.
For CelebA-HQ 1024 x 1024, we observe that the density estimates are improved to 44% of the baseline despite sampling speeds superior even to easily parallelizable flow-based models.
arXiv Detail & Related papers (2021-10-17T10:47:29Z) - Rational Shapley Values [0.0]
Most popular tools for post-hoc explainable artificial intelligence (XAI) are either insensitive to context or difficult to summarize.
I introduce rational Shapley values, a novel XAI method that synthesizes and extends these seemingly incompatible approaches.
I leverage tools from decision theory and causal modeling to formalize and implement a pragmatic approach that resolves a number of known challenges in XAI.
arXiv Detail & Related papers (2021-06-18T15:45:21Z) - Fast Hierarchical Games for Image Explanations [78.16853337149871]
We present a model-agnostic explanation method for image classification based on a hierarchical extension of Shapley coefficients.
Unlike other Shapley-based explanation methods, h-Shap is scalable and can be computed without the need for approximation.
We compare our hierarchical approach with popular Shapley-based and non-Shapley-based methods on a synthetic dataset, a medical imaging scenario, and a general computer vision problem.
arXiv Detail & Related papers (2021-04-13T13:11:02Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z) - Explainers in the Wild: Making Surrogate Explainers Robust to Distortions through Perception [77.34726150561087]
We propose a methodology to evaluate the effect of distortions in explanations by embedding perceptual distances.
We generate explanations for images in the ImageNet-C dataset and demonstrate how using perceptual distances in the surrogate explainer creates more coherent explanations for the distorted and reference images.
arXiv Detail & Related papers (2021-02-22T12:38:53Z) - Human-interpretable model explainability on high-dimensional data [8.574682463936007]
We introduce a framework for human-interpretable explainability on high-dimensional data, consisting of two modules.
First, we apply a semantically meaningful latent representation, both to reduce the raw dimensionality of the data, and to ensure its human interpretability.
Second, we adapt the Shapley paradigm for model-agnostic explainability to operate on these latent features. This leads to interpretable model explanations that are both theoretically controlled and computationally tractable.
arXiv Detail & Related papers (2020-10-14T20:06:28Z)
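The last entry above (Human-interpretable model explainability on high-dimensional data) describes a two-module recipe: map the raw input to a semantically meaningful low-dimensional latent code, then attribute the model's output over those latent features with Shapley values. A minimal sketch of that general recipe follows, assuming hypothetical `encoder`, `decoder`, and `model` callables and a generic permutation-sampling estimator rather than the paper's implementation.

```python
import numpy as np
from typing import Callable, Optional

def latent_shapley(
    x: np.ndarray,
    encoder: Callable[[np.ndarray], np.ndarray],  # hypothetical: raw input -> low-dimensional latent code
    decoder: Callable[[np.ndarray], np.ndarray],  # hypothetical: latent code -> reconstructed input
    model: Callable[[np.ndarray], float],         # black-box model scored on reconstructed inputs
    baseline_z: np.ndarray,                       # reference latent code, e.g. the dataset mean in latent space
    n_samples: int = 500,
    rng: Optional[np.random.Generator] = None,
) -> np.ndarray:
    """Monte Carlo Shapley attribution over latent dimensions instead of raw pixels."""
    rng = rng or np.random.default_rng(0)
    z = encoder(x)
    phi = np.zeros_like(z)

    for _ in range(n_samples):
        order = rng.permutation(z.shape[0])
        z_mix = baseline_z.copy()          # start with every latent feature "absent"
        prev = model(decoder(z_mix))
        for j in order:
            z_mix[j] = z[j]                # add latent feature j to the coalition
            curr = model(decoder(z_mix))
            phi[j] += (curr - prev) / n_samples
            prev = curr
    return phi
```

Working in the latent space keeps the number of Shapley players small and ties each attribution to a human-interpretable factor rather than an individual pixel.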