On Generating Plausible Counterfactual and Semi-Factual Explanations for
Deep Learning
- URL: http://arxiv.org/abs/2009.06399v1
- Date: Thu, 10 Sep 2020 14:48:12 GMT
- Title: On Generating Plausible Counterfactual and Semi-Factual Explanations for
Deep Learning
- Authors: Eoin M. Kenny and Mark T. Keane
- Abstract summary: PlausIble Exceptionality-based Contrastive Explanations (PIECE) modifies all exceptional features in a test image to be normal from the perspective of the counterfactual class.
Two controlled experiments compare PIECE to others in the literature, showing that PIECE not only generates the most plausible counterfactuals on several measures, but also the best semifactuals.
- Score: 15.965337956587373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is a growing concern that the recent progress made in AI, especially
regarding the predictive competence of deep learning models, will be undermined
by a failure to properly explain their operation and outputs. In response to
this disquiet, counterfactual explanations have become massively popular in
eXplainable AI (XAI) due to their proposed computational, psychological, and
legal benefits. In contrast, however, semifactuals, which humans commonly use
in a similar way to explain their reasoning, have surprisingly received no
attention. Most counterfactual methods address tabular rather than image data,
partly because the non-discrete nature of the latter makes good counterfactuals
difficult to define. Additionally, generating plausible-looking explanations
that lie on the data manifold is another issue that hampers progress. This
paper advances a novel method for generating plausible counterfactuals (and
semifactuals) for black-box CNN classifiers used in computer vision. The present
method, called PlausIble Exceptionality-based Contrastive Explanations (PIECE),
modifies all exceptional features in a test image to be normal from the
perspective of the counterfactual class (hence concretely defining a
counterfactual). Two controlled experiments compare this method to others in
the literature, showing that PIECE not only generates the most plausible
counterfactuals on several measures, but also the best semifactuals.
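
To make the abstract's core operation concrete, here is a minimal sketch of the idea of flagging "exceptional" latent features of a test image and setting them to values that are normal for the counterfactual class. The function name, the Gaussian z-score test, and the threshold are illustrative assumptions rather than the authors' code; the full PIECE method models the latent feature distributions statistically and then maps the edited features back to image space with a generative model.

```python
import numpy as np

def piece_style_feature_edit(x_feats, cf_class, class_means, class_stds, z_thresh=2.0):
    """Illustrative sketch: edit a CNN's latent feature vector so that features
    which are improbable ('exceptional') under the counterfactual class are
    replaced by that class's expected values."""
    mu = class_means[cf_class]           # expected feature values for the counterfactual class
    sigma = class_stds[cf_class] + 1e-8  # guard against division by zero
    z = np.abs(x_feats - mu) / sigma     # how unusual each feature is for that class
    exceptional = z > z_thresh           # low-probability features from the class's perspective
    x_cf = x_feats.copy()
    x_cf[exceptional] = mu[exceptional]  # make exceptional features 'normal' for the class
    return x_cf, exceptional
```

In this reading, modifying every exceptional feature yields a counterfactual, while stopping after only some of them have been changed (before the classifier's decision flips) corresponds to a semifactual; producing a plausible image from the edited features is left to the generative component described in the paper.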
Related papers
- Explaining Predictive Uncertainty by Exposing Second-Order Effects [13.83164409095901]
We present a new method for explaining predictive uncertainty based on second-order effects.
Our method is generally applicable, allowing for turning common attribution techniques into powerful second-order uncertainty explainers.
arXiv Detail & Related papers (2024-01-30T21:02:21Z)
- Deep Backtracking Counterfactuals for Causally Compliant Explanations [57.94160431716524]
We introduce a practical method called deep backtracking counterfactuals (DeepBC) for computing backtracking counterfactuals in structural causal models.
As a special case, our formulation reduces to methods in the field of counterfactual explanations.
arXiv Detail & Related papers (2023-10-11T17:11:10Z)
- Disagreement amongst counterfactual explanations: How transparency can be deceptive [0.0]
Counterfactual explanations are increasingly used as an Explainable Artificial Intelligence technique.
Not every algorithm creates uniform explanations for the same instance.
Ethical issues arise when malicious agents use this diversity to fairwash an unfair machine learning model.
arXiv Detail & Related papers (2023-04-25T09:15:37Z)
- Counterfactual Explanations for Misclassified Images: How Human and Machine Explanations Differ [11.508304497344637]
Counterfactual explanations have emerged as a popular solution for the eXplainable AI (XAI) problem of elucidating predictions of black-box deep-learning systems.
While over 100 counterfactual methods exist, claiming to generate plausible explanations akin to those preferred by people, few have actually been tested on users.
This issue is addressed here using a novel methodology that gathers ground truth human-generated counterfactual explanations for misclassified images.
arXiv Detail & Related papers (2022-12-16T22:05:38Z)
- Visual Abductive Reasoning [85.17040703205608]
Abductive reasoning seeks the likeliest possible explanation for partial observations.
We propose a new task and dataset, Visual Abductive Reasoning (VAR), for examining the abductive reasoning ability of machine intelligence in everyday visual situations.
arXiv Detail & Related papers (2022-03-26T10:17:03Z)
- Fooling Partial Dependence via Data Poisoning [3.0036519884678894]
We present techniques for attacking Partial Dependence (plots, profiles, PDP).
We showcase that PD can be manipulated in an adversarial manner, which is alarming, especially in financial or medical applications.
The fooling is performed by poisoning the data so as to bend and shift explanations in the desired direction using genetic and gradient algorithms.
arXiv Detail & Related papers (2021-05-26T20:58:04Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Causal Expectation-Maximisation [70.45873402967297]
We show that causal inference is NP-hard even in models characterised by polytree-shaped graphs.
We introduce the causal EM algorithm to reconstruct the uncertainty about the latent variables from data about categorical manifest variables.
We argue that there appears to be an unnoticed limitation to the trending idea that counterfactual bounds can often be computed without knowledge of the structural equations.
arXiv Detail & Related papers (2020-11-04T10:25:13Z)
- SCOUT: Self-aware Discriminant Counterfactual Explanations [78.79534272979305]
The problem of counterfactual visual explanations is considered.
A new family of discriminant explanations is introduced.
The resulting counterfactual explanations are optimization free and thus much faster than previous methods.
arXiv Detail & Related papers (2020-04-16T17:05:49Z)