CoLa-DCE -- Concept-guided Latent Diffusion Counterfactual Explanations
- URL: http://arxiv.org/abs/2406.01649v1
- Date: Mon, 3 Jun 2024 14:27:46 GMT
- Authors: Franz Motzkus, Christian Hellert, Ute Schmid
- Abstract summary: We introduce Concept-guided Latent Diffusion Counterfactual Explanations (CoLa-DCE).
CoLa-DCE generates concept-guided counterfactuals for any classifier with a high degree of control regarding concept selection and spatial conditioning.
We demonstrate the advantages of our approach in minimality and comprehensibility across multiple image classification models and datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advancements in generative AI have introduced novel prospects and practical implementations. Diffusion models in particular excel at generating diverse and, at the same time, realistic features, positioning them well for generating counterfactual explanations for computer vision models. By answering "what if" questions about what needs to change to make an image classifier change its prediction, counterfactual explanations align well with human understanding and consequently help make model behavior more comprehensible. Current methods succeed in generating authentic counterfactuals but lack transparency, as feature changes are not directly perceivable. To address this limitation, we introduce Concept-guided Latent Diffusion Counterfactual Explanations (CoLa-DCE). CoLa-DCE generates concept-guided counterfactuals for any classifier with a high degree of control regarding concept selection and spatial conditioning. The counterfactuals achieve finer granularity through minimal feature changes. The reference feature visualization ensures better comprehensibility, while the feature localization provides increased transparency of "where" changed "what". We demonstrate the advantages of our approach in minimality and comprehensibility across multiple image classification models and datasets and provide insights into how our CoLa-DCE explanations help comprehend model errors like misclassification cases.
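The "what if" objective behind counterfactual explanations can be made concrete with a toy sketch: given a classifier and an input, search for the smallest perturbation that flips the prediction. Note this is not the CoLa-DCE method itself (which guides a latent diffusion model with concept and spatial conditioning); it is a minimal gradient-descent illustration of the underlying counterfactual objective, using a hypothetical two-feature logistic classifier.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def counterfactual(x, w, b, target=1.0, lam=0.1, lr=0.5, steps=200):
    """Find a small perturbation delta so that the logistic classifier
    sigmoid(w.(x + delta) + b) predicts `target`, while an L2 penalty
    (weighted by lam) keeps the counterfactual close to the input."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        p = sigmoid(w @ (x + delta) + b)
        # gradient of the cross-entropy toward the target class,
        # plus the proximity (minimality) penalty on delta
        grad = (p - target) * w + lam * delta
        delta -= lr * grad
    return x + delta

# hypothetical toy classifier: predicts class 1 iff x1 + x2 > 1
w = np.array([1.0, 1.0])
b = -1.0
x = np.array([0.2, 0.3])            # originally classified as 0
x_cf = counterfactual(x, w, b)      # minimally perturbed to class 1
```

The trade-off weight `lam` mirrors the minimality goal emphasized in the abstract: a larger value yields counterfactuals closer to the original input but makes the prediction harder to flip.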
Related papers
- Counterfactual Concept Bottleneck Models [12.912611528244858]
Current deep learning models are not designed to simultaneously address three fundamental questions.
We introduce CounterFactual Concept Bottleneck Models (CF-CBMs).
CF-CBMs achieve classification accuracy comparable to black-box models.
We show that training the counterfactual generator jointly with the CBM leads to two key improvements.
arXiv Detail & Related papers (2024-02-02T13:42:12Z) - Bridging Generative and Discriminative Models for Unified Visual Perception with Diffusion Priors [56.82596340418697]
We propose a simple yet effective framework comprising a pre-trained Stable Diffusion (SD) model containing rich generative priors, a unified head (U-head) capable of integrating hierarchical representations, and an adapted expert providing discriminative priors.
Comprehensive investigations unveil potential characteristics of Vermouth, such as varying granularity of perception concealed in latent variables at distinct time steps and various U-net stages.
The promising results demonstrate the potential of diffusion models as formidable learners, establishing their significance in furnishing informative and robust visual representations.
arXiv Detail & Related papers (2024-01-29T10:36:57Z) - Auxiliary Losses for Learning Generalizable Concept-based Models [5.4066453042367435]
Concept Bottleneck Models (CBMs) have gained popularity since their introduction.
CBMs essentially limit the latent space of a model to human-understandable high-level concepts.
We propose cooperative-Concept Bottleneck Model (coop-CBM) to overcome the performance trade-off.
arXiv Detail & Related papers (2023-11-18T15:50:07Z) - Latent Diffusion Counterfactual Explanations [28.574246724214962]
We introduce Latent Diffusion Counterfactual Explanations (LDCE).
LDCE harnesses the capabilities of recent class- or text-conditional foundation latent diffusion models to expedite counterfactual generation.
We show how LDCE can provide insights into model errors, enhancing our understanding of black-box model behavior.
arXiv Detail & Related papers (2023-10-10T14:42:34Z) - Motif-guided Time Series Counterfactual Explanations [1.1510009152620664]
We propose a novel model that generates intuitive post-hoc counterfactual explanations.
We validated our model using five real-world time-series datasets from the UCR repository.
arXiv Detail & Related papers (2022-11-08T17:56:50Z) - Diffusion Visual Counterfactual Explanations [51.077318228247925]
Visual Counterfactual Explanations (VCEs) are an important tool to understand the decisions of an image classifier.
Current approaches for the generation of VCEs are restricted to adversarially robust models and often contain non-realistic artefacts.
In this paper, we overcome this by generating Diffusion Visual Counterfactual Explanations (DVCEs) for arbitrary ImageNet classifiers.
arXiv Detail & Related papers (2022-10-21T09:35:47Z) - Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z) - Designing Counterfactual Generators using Deep Model Inversion [31.1607056675927]
We develop a deep inversion approach to generate counterfactual explanations for a given query image.
We find that, in addition to producing visually meaningful explanations, the counterfactuals from DISC are effective at learning decision boundaries and are robust to unknown test-time corruptions.
arXiv Detail & Related papers (2021-09-29T08:40:50Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z) - Explainers in the Wild: Making Surrogate Explainers Robust to Distortions through Perception [77.34726150561087]
We propose a methodology to evaluate the effect of distortions in explanations by embedding perceptual distances.
We generate explanations for images in the ImageNet-C dataset and demonstrate how using perceptual distances in the surrogate explainer creates more coherent explanations for the distorted and reference images.
arXiv Detail & Related papers (2021-02-22T12:38:53Z) - Generative Counterfactuals for Neural Networks via Attribute-Informed Perturbation [51.29486247405601]
We design a framework to generate counterfactuals for raw data instances with the proposed Attribute-Informed Perturbation (AIP).
By utilizing generative models conditioned with different attributes, counterfactuals with desired labels can be obtained effectively and efficiently.
Experimental results on real-world texts and images demonstrate the effectiveness, sample quality as well as efficiency of our designed framework.
arXiv Detail & Related papers (2021-01-18T08:37:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.