Good Counterfactuals and Where to Find Them: A Case-Based Technique for
Generating Counterfactuals for Explainable AI (XAI)
- URL: http://arxiv.org/abs/2005.13997v1
- Date: Tue, 26 May 2020 14:05:10 GMT
- Authors: Mark T. Keane, Barry Smyth
- Abstract summary: We show that many commonly-used datasets appear to have few good counterfactuals for explanation purposes.
We propose a new case-based approach for generating counterfactuals using novel ideas about the counterfactual potential and explanatory coverage of a case-base.
- Score: 18.45278329799526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, a groundswell of research has identified the use of counterfactual
explanations as a potentially significant solution to the Explainable AI (XAI)
problem. It is argued that (a) technically, these counterfactual cases can be
generated by permuting problem-features until a class change is found, (b)
psychologically, they are much more causally informative than factual
explanations, (c) legally, they are GDPR-compliant. However, there are issues
with finding good counterfactuals using current techniques (e.g., counterfactuals
that are sparse and plausible). We show that many commonly-used datasets appear
to have few good counterfactuals for explanation purposes. So, we propose a new
case-based approach for generating counterfactuals using novel ideas about the
counterfactual potential and explanatory coverage of a case-base. The new
technique reuses patterns of good counterfactuals, present in a case-base, to
generate analogous counterfactuals that can explain new problems and their
solutions. Several experiments show how this technique can improve the
counterfactual potential and explanatory coverage of case-bases that were
previously found wanting.
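To make the abstract's contrast concrete, the sketch below illustrates (i) the blind feature-perturbation style of generation mentioned in point (a) and (ii) a case-based alternative in the spirit of the paper: mining a case-base for native "good" counterfactual pairs (here, pairs of real cases with different labels that differ in at most two features) and reusing a pair's difference pattern to build an analogous counterfactual for a new query. This is a minimal illustration assuming a tabular, NumPy-encoded case-base; the function names, the two-feature sparsity threshold as coded here, and the Hamming-distance retrieval are assumptions for exposition, not the authors' reference implementation.

```python
# Minimal sketch (not the authors' code): good-counterfactual mining and
# case-based reuse over a tabular case-base X (n_cases x n_features) with
# class labels; `predict` is any trained classifier's predict-one function.
import numpy as np

def diff_features(x, y):
    """Indices of the features on which two cases differ."""
    return np.flatnonzero(x != y)

def good_counterfactual_pairs(X, labels, max_diff=2):
    """Mine native 'good' counterfactual pairs: pairs of real cases with
    different labels that differ in at most `max_diff` features (the
    threshold here is an assumption based on the abstract's sparsity aim)."""
    pairs = []
    for i in range(len(X)):
        for j in range(i + 1, len(X)):
            if labels[i] != labels[j]:
                d = diff_features(X[i], X[j])
                if len(d) <= max_diff:
                    pairs.append((i, j, d))
    return pairs

def counterfactual_potential(X, pairs):
    """Fraction of cases covered by at least one good pair -- a simple
    stand-in for the paper's notion of counterfactual potential."""
    covered = {k for i, j, _ in pairs for k in (i, j)}
    return len(covered) / len(X)

def perturb_until_flip(query, X, predict, rng, budget=1000):
    """Blind-perturbation baseline (point (a) of the abstract): overwrite
    one feature at a time with a value seen in the case-base until the
    predicted class changes, or the budget runs out."""
    q_class = predict(query)
    candidate = query.copy()
    for _ in range(budget):
        f = rng.integers(len(query))
        candidate[f] = X[rng.integers(len(X)), f]
        if predict(candidate) != q_class:
            return candidate
    return None

def explain_by_reuse(query, X, labels, pairs, predict):
    """Case-based generation: pick the good pair whose same-class end is
    nearest to the query (Hamming distance, for simplicity) and transplant
    that pair's difference pattern onto the query."""
    q_class = predict(query)
    best = None
    for i, j, d in pairs:
        for a, b in ((i, j), (j, i)):        # try both orientations
            if labels[a] == q_class:
                dist = int(np.sum(query != X[a]))
                if best is None or dist < best[0]:
                    best = (dist, b, d)
    if best is None:
        return None
    _, b, d = best
    candidate = query.copy()
    candidate[d] = X[b, d]                   # copy the counterfactual's values
    return candidate if predict(candidate) != q_class else None
```

Unlike the blind baseline, the reused candidate differs from the query in at most two features by construction, which is what makes the resulting explanation sparse; it must still be validated against the classifier, since the transplanted difference pattern is analogous to, not guaranteed to reproduce, the original class change.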
Related papers
- AR-Pro: Counterfactual Explanations for Anomaly Repair with Formal Properties [12.71326587869053]
Anomaly detection is widely used for identifying critical errors and suspicious behaviors, but current methods lack interpretability.
We leverage common properties of existing methods to introduce counterfactual explanations for anomaly detection.
A key advantage of this approach is that it enables a domain-independent formal specification of explainability desiderata.
arXiv Detail & Related papers (2024-10-31T17:43:53Z) - Counterfactual and Semifactual Explanations in Abstract Argumentation: Formal Foundations, Complexity and Computation [19.799266797193344]
Argumentation-based systems often lack explainability while supporting decision-making processes.
Counterfactual and semifactual explanations are interpretability techniques.
We show that counterfactual and semifactual queries can be encoded in weak-constrained Argumentation Framework.
arXiv Detail & Related papers (2024-05-07T07:27:27Z) - Longitudinal Counterfactuals: Constraints and Opportunities [59.11233767208572]
We propose using longitudinal data to assess and improve plausibility in counterfactuals.
We develop a metric that compares longitudinal differences to counterfactual differences, allowing us to evaluate how similar a counterfactual is to prior observed changes.
arXiv Detail & Related papers (2024-02-29T20:17:08Z) - Deep Backtracking Counterfactuals for Causally Compliant Explanations [57.94160431716524]
We introduce a practical method called deep backtracking counterfactuals (DeepBC) for computing backtracking counterfactuals in structural causal models.
As a special case, our formulation reduces to methods in the field of counterfactual explanations.
arXiv Detail & Related papers (2023-10-11T17:11:10Z) - Rethinking Complex Queries on Knowledge Graphs with Neural Link Predictors [58.340159346749964]
We propose a new neural-symbolic method to support end-to-end learning using complex queries with provable reasoning capability.
We develop a new dataset containing ten new types of queries with features that have never been considered.
Our method outperforms previous methods significantly in the new dataset and also surpasses previous methods in the existing dataset at the same time.
arXiv Detail & Related papers (2023-04-14T11:35:35Z) - Neural Causal Models for Counterfactual Identification and Estimation [62.30444687707919]
We study the evaluation of counterfactual statements through neural models.
First, we show that neural causal models (NCMs) are expressive enough.
Second, we develop an algorithm for simultaneously identifying and estimating counterfactual distributions.
arXiv Detail & Related papers (2022-09-30T18:29:09Z) - Principled Knowledge Extrapolation with GANs [92.62635018136476]
We study counterfactual synthesis from a new perspective of knowledge extrapolation.
We show that an adversarial game with a closed-form discriminator can be used to address the knowledge extrapolation problem.
Our method enjoys both elegant theoretical guarantees and superior performance in many scenarios.
arXiv Detail & Related papers (2022-05-21T08:39:42Z) - DISSECT: Disentangled Simultaneous Explanations via Concept Traversals [33.65478845353047]
DISSECT is a novel approach to explaining deep learning model inferences.
By training a generative model from a classifier's signal, DISSECT offers a way to discover a classifier's inherent "notion" of distinct concepts.
We show that DISSECT produces CTs that disentangle several concepts and are coupled to its reasoning due to joint training.
arXiv Detail & Related papers (2021-05-31T17:11:56Z) - A Few Good Counterfactuals: Generating Interpretable, Plausible and
Diverse Counterfactual Explanations [14.283774141604997]
Good, native counterfactuals have been shown to rarely occur in most datasets.
Most popular methods generate synthetic counterfactuals using blind perturbations.
We describe a method that adapts native counterfactuals in the original dataset to generate sparse, diverse synthetic counterfactuals.
arXiv Detail & Related papers (2021-01-22T11:30:26Z) - On Generating Plausible Counterfactual and Semi-Factual Explanations for
Deep Learning [15.965337956587373]
PlausIble Exceptionality-based Contrastive Explanations (PIECE), modifies all exceptional features in a test image to be normal from the perspective of the counterfactual class.
Two controlled experiments compare PIECE to others in the literature, showing that PIECE not only generates the most plausible counterfactuals on several measures, but also the best semifactuals.
arXiv Detail & Related papers (2020-09-10T14:48:12Z) - Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works in the direction to attain Explainable Reinforcement Learning (XRL)
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.