This is not the Texture you are looking for! Introducing Novel
Counterfactual Explanations for Non-Experts using Generative Adversarial
Learning
- URL: http://arxiv.org/abs/2012.11905v1
- Date: Tue, 22 Dec 2020 10:08:05 GMT
- Title: This is not the Texture you are looking for! Introducing Novel
Counterfactual Explanations for Non-Experts using Generative Adversarial
Learning
- Authors: Silvan Mertes, Tobias Huber, Katharina Weitz, Alexander Heimerl,
Elisabeth André
- Abstract summary: Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach performs significantly better regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
- Score: 59.17685450892182
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: With the ongoing rise of machine learning, the need for methods for
explaining decisions made by artificial intelligence systems is becoming an
increasingly important topic. Especially for image classification tasks, many
state-of-the-art tools for explaining such classifiers rely on visual
highlighting of important areas of the input data. In contrast, counterfactual
explanation systems try to enable counterfactual reasoning by modifying the
input image in such a way that the classifier would have made a different
prediction. By doing so, the users of counterfactual explanation systems are
equipped with a completely different kind of explanatory information. However,
methods for generating realistic counterfactual explanations for image
classifiers are still rare. In this work, we present a novel approach to
generating such counterfactual image explanations based on adversarial
image-to-image translation techniques. Additionally, we conduct a user study to
evaluate our approach in a use case inspired by a healthcare scenario. Our
results show that our approach leads to significantly better results regarding
mental models, explanation satisfaction, trust, emotions, and self-efficacy
than two state-of-the-art systems that work with saliency maps, namely LIME and
LRP.
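As a rough illustration of this recipe, here is a minimal PyTorch sketch under assumed module shapes, not the authors' exact architecture: a generator translates the input image, a discriminator enforces realism on the translated output, and a frozen classifier supplies the counterfactual target term. Loss weights and layer sizes are illustrative assumptions.

```python
# Illustrative sketch of counterfactual generation via adversarial
# image-to-image translation (not the paper's exact model).
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Tiny encoder-decoder mapping an image to its counterfactual."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )
    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Judges whether a translated image looks like a real target-class image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),
        )
    def forward(self, x):
        return self.net(x)

def generator_loss(G, D, classifier, x, target_class):
    """Adversarial realism term + counterfactual term asking the frozen
    classifier to predict the counter class + closeness to the input.
    Loss weights are omitted for brevity (an assumption of this sketch)."""
    x_cf = G(x)
    adv = nn.functional.binary_cross_entropy_with_logits(
        D(x_cf), torch.ones(x.size(0), 1))           # fool the discriminator
    cf = nn.functional.cross_entropy(
        classifier(x_cf),                              # classifier stays frozen
        torch.full((x.size(0),), target_class))
    sim = nn.functional.l1_loss(x_cf, x)              # stay close to the input
    return adv + cf + sim
```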
Related papers
- Relevant Irrelevance: Generating Alterfactual Explanations for Image Classifiers [11.200613814162185]
In this paper, we demonstrate the feasibility of alterfactual explanations for black box image classifiers.
We show for the first time that it is possible to apply this idea to black box models based on neural networks.
arXiv Detail & Related papers (2024-05-08T11:03:22Z)
- CNN-based explanation ensembling for dataset, representation and explanations evaluation [1.1060425537315088]
We explore the potential of ensembling explanations generated by deep classification models using a convolutional model.
Through experimentation and analysis, we aim to investigate the implications of combining explanations to uncover more coherent and reliable patterns of the model's behavior.
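As an illustrative sketch of the ensembling idea (not the paper's architecture), attribution maps from several explainers can be stacked as channels and fused by a small CNN; all shapes here are assumptions:

```python
# Illustrative sketch: fuse several explanation maps with a small CNN.
import torch
import torch.nn as nn

class ExplanationEnsembler(nn.Module):
    """Stacks K attribution maps as channels and learns a fused map."""
    def __init__(self, num_explainers: int):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Conv2d(num_explainers, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 1, 1),   # 1x1 conv collapses to a single map
        )
    def forward(self, maps):      # maps: (B, K, H, W)
        return self.fuse(maps)

# e.g. maps from LIME, LRP, and Grad-CAM stacked along the channel axis:
maps = torch.rand(4, 3, 32, 32)          # dummy attribution maps
fused = ExplanationEnsembler(3)(maps)    # -> (4, 1, 32, 32)
```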
arXiv Detail & Related papers (2024-04-16T08:39:29Z)
- TExplain: Explaining Learned Visual Features via Pre-trained (Frozen) Language Models [14.019349267520541]
We propose a novel method that leverages the capabilities of language models to interpret the learned features of pre-trained image classifiers.
Our approach generates a vast number of sentences to explain the features learned by the classifier for a given image.
Our method, for the first time, utilizes the most frequent words in these sentences, corresponding to a visual representation, to provide insights into the decision-making process.
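A small sketch of the frequent-word step; `sentences` stands in for the language model's outputs, which the paper generates in large numbers, and the stopword list is an assumption:

```python
# Sketch: given many generated sentences describing a visual feature,
# surface the most common content words.
from collections import Counter

STOPWORDS = {"a", "an", "the", "of", "is", "in", "on", "and", "with"}

def frequent_words(sentences, k=10):
    words = (w.lower().strip(".,") for s in sentences for w in s.split())
    counts = Counter(w for w in words if w not in STOPWORDS)
    return counts.most_common(k)

sentences = [
    "a striped cat sitting on the mat",
    "the cat has striped fur",
    "a small striped animal",
]
print(frequent_words(sentences, k=3))  # e.g. [('striped', 3), ('cat', 2), ...]
```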
arXiv Detail & Related papers (2023-09-01T20:59:46Z)
- Learning to Scaffold: Optimizing Model Explanations for Teaching [74.25464914078826]
We train models on three natural language processing and computer vision tasks.
We find that students trained with explanations extracted with our framework are able to simulate the teacher significantly more effectively than ones produced with previous methods.
arXiv Detail & Related papers (2022-04-22T16:43:39Z)
- Making Heads or Tails: Towards Semantically Consistent Visual Counterfactuals [31.375504774744268]
A visual counterfactual explanation replaces image regions in a query image with regions from a distractor image such that the system's decision on the transformed image changes to the distractor class.
We present a novel framework for computing visual counterfactual explanations based on two key ideas.
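A minimal sketch of the region-replacement idea, assuming a greedy single-patch search without the paper's semantic-consistency constraints; function and parameter names are illustrative:

```python
# Sketch of region-replacement counterfactual search (greedy, single patch).
import torch

def single_patch_counterfactual(classifier, query, distractor,
                                distractor_class, patch=8):
    """Copy each aligned patch from the distractor into the query and
    return the first edit that flips the decision to distractor_class."""
    _, H, W = query.shape                     # query, distractor: (C, H, W)
    for top in range(0, H - patch + 1, patch):
        for left in range(0, W - patch + 1, patch):
            edited = query.clone()
            edited[:, top:top+patch, left:left+patch] = \
                distractor[:, top:top+patch, left:left+patch]
            pred = classifier(edited.unsqueeze(0)).argmax(dim=1).item()
            if pred == distractor_class:      # decision changed: counterfactual
                return edited, (top, left)
    return None, None
```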
arXiv Detail & Related papers (2022-03-24T07:26:11Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
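One illustrative form such a diversity-enforcing term could take (assumed here, not necessarily the paper's exact loss) penalizes pairwise cosine similarity between the latent perturbations:

```python
# Sketch of a diversity-enforcing term over K latent perturbations.
import torch

def diversity_loss(perturbations):
    """Penalize pairwise similarity between perturbations so that each one
    yields a distinct counterfactual. perturbations: (K, D)."""
    z = torch.nn.functional.normalize(perturbations, dim=1)
    sim = z @ z.t()                          # pairwise cosine similarities
    off_diag = sim - torch.eye(z.size(0))    # zero out self-similarity
    return off_diag.clamp(min=0).sum() / (z.size(0) * (z.size(0) - 1))

z = torch.randn(4, 16, requires_grad=True)   # K=4 perturbations in latent space
loss = diversity_loss(z)                     # low when perturbations differ
loss.backward()
```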
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- Explainers in the Wild: Making Surrogate Explainers Robust to Distortions through Perception [77.34726150561087]
We propose a methodology to evaluate the effect of distortions in explanations by embedding perceptual distances.
We generate explanations for images in the ImageNet-C dataset and demonstrate how using perceptual distances in the surrogate explainer creates more coherent explanations for the distorted and reference images.
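A sketch of how such perceptual weighting could enter a LIME-style surrogate fit; the `distances` values stand in for a perceptual metric such as LPIPS, and all names are illustrative:

```python
# Sketch: fit a linear surrogate whose sample weights come from a
# perceptual distance rather than a pixel-space distance.
import numpy as np
from sklearn.linear_model import Ridge

def fit_surrogate(masks, perturbed_preds, distances, width=0.25):
    """masks: (N, F) binary on/off superpixel masks;
    perturbed_preds: (N,) classifier scores on the perturbed images;
    distances: (N,) perceptual distances to the original image."""
    weights = np.exp(-(distances ** 2) / (width ** 2))   # perceptual kernel
    surrogate = Ridge(alpha=1.0)
    surrogate.fit(masks, perturbed_preds, sample_weight=weights)
    return surrogate.coef_        # per-superpixel importance scores
```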
arXiv Detail & Related papers (2021-02-22T12:38:53Z)
- Deep Co-Attention Network for Multi-View Subspace Learning [73.3450258002607]
We propose a deep co-attention network for multi-view subspace learning.
It aims to extract both the common information and the complementary information in an adversarial setting.
In particular, it uses a novel cross reconstruction loss and leverages the label information to guide the construction of the latent representation.
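A minimal sketch of a cross reconstruction loss for two views, with illustrative shapes: each view is reconstructed from the other view's latent code, so the encoders are pushed to capture the shared information:

```python
# Sketch of a cross reconstruction loss for two views.
import torch
import torch.nn as nn

enc_a, enc_b = nn.Linear(128, 32), nn.Linear(64, 32)   # per-view encoders
dec_a, dec_b = nn.Linear(32, 128), nn.Linear(32, 64)   # per-view decoders

def cross_reconstruction_loss(x_a, x_b):
    z_a, z_b = enc_a(x_a), enc_b(x_b)
    # Reconstruct each view from the OTHER view's latent code, which only
    # works if both codes carry the common (shared) information.
    return (nn.functional.mse_loss(dec_a(z_b), x_a)
            + nn.functional.mse_loss(dec_b(z_a), x_b))

loss = cross_reconstruction_loss(torch.randn(8, 128), torch.randn(8, 64))
```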
arXiv Detail & Related papers (2021-02-15T18:46:44Z)
- Combining Similarity and Adversarial Learning to Generate Visual Explanation: Application to Medical Image Classification [0.0]
We leverage a learning framework that combines similarity and adversarial learning to produce our visual explanations.
Using metrics from the literature, our method outperforms state-of-the-art approaches.
We validate our approach on a large chest X-ray database.
arXiv Detail & Related papers (2020-12-14T08:34:12Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model that improves the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.