Achieving Diversity in Counterfactual Explanations: a Review and Discussion
- URL: http://arxiv.org/abs/2305.05840v1
- Date: Wed, 10 May 2023 02:09:19 GMT
- Title: Achieving Diversity in Counterfactual Explanations: a Review and Discussion
- Authors: Thibault Laugel, Adulam Jeyasothy, Marie-Jeanne Lesot, Christophe
Marsala, Marcin Detyniecki
- Abstract summary: In the field of Explainable Artificial Intelligence (XAI), counterfactual examples explain to a user the predictions of a trained decision model.
This paper proposes a review of the numerous, sometimes conflicting, definitions that have been proposed for this notion of diversity.
- Score: 3.6066164404432883
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the field of Explainable Artificial Intelligence (XAI), counterfactual
examples explain to a user the predictions of a trained decision model by
indicating the modifications to be made to the instance so as to change its
associated prediction. These counterfactual examples are generally defined as
solutions to an optimization problem whose cost function combines several
criteria that quantify desiderata for a good explanation meeting user needs. A
large variety of such appropriate properties can be considered, as the user
needs are generally unknown and differ from one user to another; their
selection and formalization are difficult. To circumvent this issue, several
approaches propose to generate, rather than a single one, a set of diverse
counterfactual examples to explain a prediction. This paper proposes a review
of the numerous, sometimes conflicting, definitions that have been proposed for
this notion of diversity. It discusses their underlying principles as well as
the hypotheses on the user needs they rely on and proposes to categorize them
along several dimensions (explicit vs implicit, universe in which they are
defined, level at which they apply), leading to the identification of further
research challenges on this topic.
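To make the optimization view concrete, the sketch below finds a counterfactual for a toy linear classifier by greedily nudging one feature at a time until the prediction flips. The model, weights, threshold, and search procedure are all illustrative assumptions, not the method of any particular paper in this list.

```python
# Toy linear decision model: class 1 iff the weighted sum exceeds a threshold.
# Weights and threshold are arbitrary assumptions for illustration.
W = [0.5, 1.0, 2.0]
THRESHOLD = 1.0

def margin(x):
    # Signed distance of x from the decision boundary (positive => class 1).
    return sum(w * xi for w, xi in zip(W, x)) - THRESHOLD

def predict(x):
    return int(margin(x) > 0)

def counterfactual(x, step=0.1, max_iter=100):
    """Greedy search: perturb one feature per step until the class flips."""
    target = 1 - predict(x)
    sign = 1 if target == 1 else -1  # direction the margin must move
    z = list(x)
    for _ in range(max_iter):
        if predict(z) == target:
            return z
        # Among all single-feature nudges, keep the one that moves the
        # margin furthest toward the target class.
        candidates = []
        for i in range(len(z)):
            for d in (step, -step):
                c = list(z)
                c[i] += d
                candidates.append(c)
        z = max(candidates, key=lambda c: sign * margin(c))
    return None  # no counterfactual found within the step budget

x = [0.1, 0.1, 0.1]
cf = counterfactual(x)  # a nearby instance with the opposite prediction
```

In this sketch the cost function is implicit (each step has unit L1 cost), whereas the methods surveyed in the paper combine several explicit criteria such as proximity, sparsity, and plausibility.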
Related papers
- Generating Counterfactual Explanations Using Cardinality Constraints [0.0]
We propose to explicitly add a cardinality constraint to counterfactual generation limiting how many features can be different from the original example.
This provides more interpretable and easily understandable counterfactuals.
arXiv Detail & Related papers (2024-04-11T06:33:19Z)
- A multi-criteria approach for selecting an explanation from the set of counterfactuals produced by an ensemble of explainers [4.239829789304117]
We propose to use a multi-stage ensemble approach that will select single counterfactual based on the multiple-criteria analysis.
The proposed approach generates fully actionable counterfactuals with attractive compromise values of the considered quality measures.
arXiv Detail & Related papers (2024-03-20T19:25:11Z)
- Explaining $\mathcal{ELH}$ Concept Descriptions through Counterfactual Reasoning [3.5323691899538128]
An intrinsically transparent way to do classification is by using concepts in description logics.
One solution is to employ counterfactuals to answer the question: "How must feature values be changed to obtain a different classification?"
arXiv Detail & Related papers (2023-01-12T16:06:06Z)
- An Additive Instance-Wise Approach to Multi-class Model Interpretation [53.87578024052922]
Interpretable machine learning offers insights into what factors drive a certain prediction of a black-box system.
Existing methods mainly focus on selecting explanatory input features, which follow either locally additive or instance-wise approaches.
This work exploits the strengths of both methods and proposes a global framework for learning local explanations simultaneously for multiple target classes.
arXiv Detail & Related papers (2022-07-07T06:50:27Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should regard several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- A Variational Inference Approach to Inverse Problems with Gamma Hyperpriors [60.489902135153415]
This paper introduces a variational iterative alternating scheme for hierarchical inverse problems with gamma hyperpriors.
The proposed variational inference approach yields accurate reconstruction, provides meaningful uncertainty quantification, and is easy to implement.
arXiv Detail & Related papers (2021-11-26T06:33:29Z)
- Uncertainty Quantification of Surrogate Explanations: an Ordinal Consensus Approach [1.3750624267664155]
We produce estimates of the uncertainty of a given explanation by measuring the consensus amongst a set of diverse bootstrapped surrogate explainers.
We empirically illustrate the properties of this approach through experiments on state-of-the-art Convolutional Neural Network ensembles.
arXiv Detail & Related papers (2021-11-17T13:55:58Z)
- Toward Scalable and Unified Example-based Explanation and Outlier Detection [128.23117182137418]
We argue for a broader adoption of prototype-based student networks capable of providing an example-based explanation for their prediction.
We show that our prototype-based networks beyond similarity kernels deliver meaningful explanations and promising outlier detection results without compromising classification accuracy.
arXiv Detail & Related papers (2020-11-11T05:58:17Z)
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations via robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
- Multi-Objective Counterfactual Explanations [0.7349727826230864]
We propose the Multi-Objective Counterfactuals (MOC) method, which translates the counterfactual search into a multi-objective optimization problem.
Our approach not only returns a diverse set of counterfactuals with different trade-offs between the proposed objectives, but also maintains diversity in feature space.
arXiv Detail & Related papers (2020-04-23T13:56:39Z)
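The diversity theme running through these papers can be illustrated with a small sketch: given a pool of candidate counterfactuals, a greedy max-min (farthest-point) selection returns a subset whose members are mutually distant in feature space. The pool, distance function, and selection rule below are illustrative assumptions, not the exact criterion of any listed paper.

```python
# Hypothetical sketch of one explicit diversity criterion: greedily select
# k candidates that maximize the minimum pairwise distance among them.

def l2(a, b):
    # Euclidean distance between two feature vectors.
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def diverse_subset(candidates, k):
    """Greedy max-min (farthest-point) selection of k diverse candidates."""
    chosen = [candidates[0]]  # seed with the first (e.g. closest) candidate
    while len(chosen) < k:
        # Pick the candidate farthest from everything chosen so far.
        best = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: min(l2(c, s) for s in chosen),
        )
        chosen.append(best)
    return chosen

pool = [[0, 0], [0, 1], [1, 0], [5, 5], [5, 6]]
picked = diverse_subset(pool, 3)  # spread-out subset of the candidate pool
```

This corresponds to an "explicit" diversity definition in the paper's categorization, enforced at the level of the returned set rather than inside the cost function of each individual counterfactual.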
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.