OCTET: Object-aware Counterfactual Explanations
- URL: http://arxiv.org/abs/2211.12380v2
- Date: Fri, 24 Mar 2023 16:01:24 GMT
- Title: OCTET: Object-aware Counterfactual Explanations
- Authors: Mehdi Zemni, Mickaël Chen, Éloi Zablocki, Hédi Ben-Younes, Patrick Pérez, Matthieu Cord
- Abstract summary: We propose an object-centric framework for counterfactual explanation generation.
Our method, inspired by recent generative modeling works, encodes the query image into a latent space that is structured to ease object-level manipulations.
We conduct a set of experiments on counterfactual explanation benchmarks for driving scenes, and we show that our method can be adapted beyond classification.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Nowadays, deep vision models are being widely deployed in safety-critical
applications, e.g., autonomous driving, and explainability of such models is
becoming a pressing concern. Among explanation methods, counterfactual
explanations aim to find minimal and interpretable changes to the input image
that would also change the output of the model to be explained. Such
explanations point end-users at the main factors that impact the decision of
the model. However, previous methods struggle to explain decision models
trained on images with many objects, e.g., urban scenes, which are more
difficult to work with but also arguably more critical to explain. In this
work, we propose to tackle this issue with an object-centric framework for
counterfactual explanation generation. Our method, inspired by recent
generative modeling works, encodes the query image into a latent space that is
structured to ease object-level manipulations. In doing so, it provides
the end-user with control over which search directions (e.g., spatial
displacement of objects, style modification, etc.) are to be explored during
the counterfactual generation. We conduct a set of experiments on
counterfactual explanation benchmarks for driving scenes, and we show that our
method can be adapted beyond classification, e.g., to explain semantic
segmentation models. To complete our analysis, we design and run a user study
that measures the usefulness of counterfactual explanations in understanding a
decision model. Code is available at https://github.com/valeoai/OCTET.
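The abstract describes a general recipe: encode the image into a structured latent space, then search only along user-selected directions for a minimal change that flips the decision. The sketch below is a minimal PyTorch-style illustration of that recipe under assumed interfaces (a hypothetical object-aware encoder/decoder pair returning named latent groups, plus the classifier to be explained); it is a conceptual sketch, not the released OCTET implementation.

import torch

def counterfactual_search(encoder, decoder, classifier, image,
                          target_class, editable=("position", "style"),
                          steps=200, lr=0.05, dist_weight=1.0):
    """Sketch of latent-space counterfactual search (illustrative only).

    encoder/decoder stand for a hypothetical object-aware autoencoder that
    maps an image to a dict of named latent groups, e.g.
    {"position": ..., "style": ..., "background": ...};
    classifier is the decision model to be explained.
    """
    with torch.no_grad():
        z0 = encoder(image)                      # dict of latent tensors
    # Only the user-selected groups are optimized; the rest stay frozen.
    z = {k: v.clone().requires_grad_(k in editable) for k, v in z0.items()}
    opt = torch.optim.Adam([z[k] for k in editable], lr=lr)

    for _ in range(steps):
        opt.zero_grad()
        x_cf = decoder(z)                        # candidate counterfactual image
        logits = classifier(x_cf)                # assumes a batch of size 1
        # Push the decision toward the target class...
        flip_loss = torch.nn.functional.cross_entropy(
            logits, torch.tensor([target_class], device=logits.device))
        # ...while keeping the edited latents close to the originals so the
        # change stays minimal and interpretable.
        dist_loss = sum(((z[k] - z0[k]) ** 2).mean() for k in editable)
        (flip_loss + dist_weight * dist_loss).backward()
        opt.step()

    return decoder(z).detach()

Restricting editable to, say, ("position",) would yield counterfactuals that only move objects, which is the kind of user control over search directions the abstract refers to.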
Related papers
- CNN-based explanation ensembling for dataset, representation and explanations evaluation [1.1060425537315088]
We explore the potential of ensembling explanations generated by deep classification models using a convolutional model.
Through experimentation and analysis, we investigate the implications of combining explanations to uncover more coherent and reliable patterns of the model's behavior.
arXiv Detail & Related papers (2024-04-16T08:39:29Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - Learning with Explanation Constraints [91.23736536228485]
We provide a learning theoretic framework to analyze how explanations can improve the learning of our models.
We demonstrate the benefits of our approach over a large array of synthetic and real-world experiments.
arXiv Detail & Related papers (2023-03-25T15:06:47Z) - Learning to Scaffold: Optimizing Model Explanations for Teaching [74.25464914078826]
We train models on three natural language processing and computer vision tasks.
We find that students trained with explanations extracted by our framework are able to simulate the teacher significantly more effectively than students trained with explanations produced by previous methods.
arXiv Detail & Related papers (2022-04-22T16:43:39Z) - STEEX: Steering Counterfactual Explanations with Semantics [28.771471624014065]
Deep learning models are increasingly used in safety-critical applications.
For simple images, such as low-resolution face portraits, visual counterfactual explanations have recently been proposed.
We propose a new generative counterfactual explanation framework that produces plausible and sparse modifications.
arXiv Detail & Related papers (2021-11-17T13:20:29Z) - LIMEcraft: Handcrafted superpixel selection and inspection for Visual
eXplanations [3.0036519884678894]
LIMEcraft allows a user to interactively select semantically consistent areas and thoroughly examine the prediction for the image instance.
Our method improves model safety by inspecting model fairness for image pieces that may indicate model bias.
arXiv Detail & Related papers (2021-11-15T21:40:34Z) - Counterfactual Explanations for Models of Code [11.678590247866534]
Machine learning (ML) models play an increasingly prevalent role in many software engineering tasks.
It can be difficult for developers to understand why the model came to a certain conclusion and how to act upon the model's prediction.
This paper explores counterfactual explanations for models of source code.
arXiv Detail & Related papers (2021-11-10T14:44:19Z) - Beyond Trivial Counterfactual Explanations with Diverse Valuable
Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space, constrained by a diversity-enforcing loss (a toy illustration of such a loss is sketched after this list).
Our model improves the success rate of producing high-quality, valuable explanations compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z) - Contrastive Explanations for Model Interpretability [77.92370750072831]
We propose a methodology to produce contrastive explanations for classification models.
Our method is based on projecting model representation to a latent space.
Our findings shed light on the ability of label-contrastive explanations to provide more accurate and finer-grained interpretability of a model's decisions.
arXiv Detail & Related papers (2021-03-02T00:36:45Z) - Explainers in the Wild: Making Surrogate Explainers Robust to
Distortions through Perception [77.34726150561087]
We propose a methodology to evaluate the effect of distortions in explanations by embedding perceptual distances.
We generate explanations for images in the ImageNet-C dataset and demonstrate how using perceptual distances in the surrogate explainer creates more coherent explanations for the distorted and reference images.
arXiv Detail & Related papers (2021-02-22T12:38:53Z) - Right for the Right Concept: Revising Neuro-Symbolic Concepts by
Interacting with their Explanations [24.327862278556445]
We propose a Neuro-Symbolic scene representation, which allows one to revise the model on the semantic level.
The results of our experiments on CLEVR-Hans demonstrate that our semantic explanations can identify confounders.
arXiv Detail & Related papers (2020-11-25T16:23:26Z)
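The "Beyond Trivial Counterfactual Explanations" entry above mentions constraining several latent perturbations with a diversity-enforcing loss. The snippet below is a toy, hypothetical illustration of such a term (penalizing pairwise similarity between candidate perturbations so they do not collapse onto one trivial edit); it is not the loss used in that paper.

import torch

def diversity_loss(perturbations):
    """Toy diversity-enforcing term over K latent perturbations.

    perturbations has shape (K, D): one row per candidate counterfactual
    direction in a (disentangled) latent space.
    """
    # Cosine similarity between every pair of perturbation vectors.
    normed = torch.nn.functional.normalize(perturbations, dim=1)
    sim = normed @ normed.t()                          # (K, K)
    # Zero out self-similarity on the diagonal and penalize the rest.
    off_diag = sim - torch.eye(sim.size(0), device=sim.device)
    return off_diag.abs().mean()

# Usage sketch: add the term to a counterfactual objective so that the
# K perturbations spread out instead of duplicating one another.
deltas = torch.randn(4, 16, requires_grad=True)        # K=4 candidate edits
loss = diversity_loss(deltas)
loss.backward()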