Calculating and Visualizing Counterfactual Feature Importance Values
- URL: http://arxiv.org/abs/2306.06506v1
- Date: Sat, 10 Jun 2023 18:54:15 GMT
- Title: Calculating and Visualizing Counterfactual Feature Importance Values
- Authors: Bjorge Meulemeester, Raphael Mazzine Barbosa De Oliveira, David
Martens
- Abstract summary: Counterfactual explanations have emerged as one potential solution for explaining individual decision results.
Two major drawbacks directly impact their usability: (1) the isonomic view of feature changes, in which it is not possible to observe how much each modified feature influences the prediction, and (2) the lack of graphical resources to visualize the counterfactual explanation.
We introduce Counterfactual Feature (change) Importance (CFI) values as a solution: a way of assigning an importance value to each feature change in a given counterfactual explanation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the success of complex machine learning algorithms, mostly justified
by an outstanding performance in prediction tasks, their inherent opaque nature
still represents a challenge to their responsible application. Counterfactual
explanations have emerged as one potential solution for explaining individual decision
results. However, two major drawbacks directly impact their usability: (1) the
isonomic view of feature changes, in which it is not possible to observe
how much each modified feature influences the prediction, and (2) the
lack of graphical resources to visualize the counterfactual explanation. We
introduce Counterfactual Feature (change) Importance (CFI) values as a
solution: a way of assigning an importance value to each feature change in a
given counterfactual explanation. To calculate these values, we propose two
potential CFI methods. One is simple, fast, and has a greedy nature. The other,
coined CounterShapley, provides a way to calculate Shapley values between the
factual-counterfactual pair. Using these importance values, we additionally
introduce three chart types to visualize the counterfactual explanations: (a)
the Greedy chart, which shows a greedy sequential path for prediction score
increase up to the predicted class change, (b) the CounterShapley chart,
depicting each feature's CounterShapley value in a simple, one-dimensional
chart, and finally (c) the
Constellation chart, which shows all possible combinations of feature changes,
and their impact on the model's prediction score. For each of our proposed CFI
methods and visualization schemes, we show how they can provide more
information on counterfactual explanations. Finally, an open-source
implementation is offered, compatible with any counterfactual explanation
generator algorithm. Code repository at:
https://github.com/ADMAntwerp/CounterPlots
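For intuition, here is a minimal sketch of the greedy CFI idea in Python. This is a reading of the abstract, not the authors' implementation; the function name `greedy_cfi` and the `predict_proba` callable are assumptions. Starting from the factual instance, it repeatedly applies whichever remaining feature change from the counterfactual increases the prediction score the most, and records each score increment as that change's importance.

```python
import numpy as np

def greedy_cfi(predict_proba, factual, counterfactual):
    """Greedy CFI sketch: score each feature change by the prediction
    increase it yields at its greedily chosen step.

    `predict_proba` is assumed to map a feature vector to the model's
    score for the desired class."""
    factual = np.asarray(factual, dtype=float)
    counterfactual = np.asarray(counterfactual, dtype=float)
    remaining = [i for i in range(len(factual))
                 if factual[i] != counterfactual[i]]

    current = factual.copy()
    score = predict_proba(current)
    importances = {}
    while remaining:
        # Apply each remaining change in isolation; keep the best one.
        best_i, best_score = None, -np.inf
        for i in remaining:
            trial = current.copy()
            trial[i] = counterfactual[i]
            trial_score = predict_proba(trial)
            if trial_score > best_score:
                best_i, best_score = i, trial_score
        importances[best_i] = best_score - score  # score increment
        current[best_i] = counterfactual[best_i]
        score = best_score
        remaining.remove(best_i)
    return importances

if __name__ == "__main__":
    # Toy logistic scorer over two features (purely illustrative).
    predict = lambda x: 1.0 / (1.0 + np.exp(-(0.8 * x[0] + 0.3 * x[1] - 1.0)))
    print(greedy_cfi(predict, factual=[0.0, 0.0], counterfactual=[1.0, 2.0]))
```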
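The CounterShapley idea can be sketched the same way: treat the changed features as players in a cooperative game whose value function is the model's score after applying a subset of the changes to the factual instance, then compute Shapley values exactly by enumerating coalitions. The naming and the exact-enumeration strategy are assumptions; the paper may use a different formulation.

```python
from itertools import combinations
from math import factorial

import numpy as np

def countershapley(predict_proba, factual, counterfactual):
    """Exact Shapley values between a factual-counterfactual pair.

    Players are the changed features; a coalition's value is the model
    score after applying only that subset of changes to the factual
    instance. Exponential in the number of changed features."""
    factual = np.asarray(factual, dtype=float)
    counterfactual = np.asarray(counterfactual, dtype=float)
    players = [i for i in range(len(factual))
               if factual[i] != counterfactual[i]]
    n = len(players)

    def value(coalition):
        x = factual.copy()
        for i in coalition:
            x[i] = counterfactual[i]
        return predict_proba(x)

    shapley = {}
    for i in players:
        others = [j for j in players if j != i]
        phi = 0.0
        for k in range(n):
            for subset in combinations(others, k):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (value(subset + (i,)) - value(subset))
        shapley[i] = phi
    return shapley
```

A useful sanity check on such a sketch: the returned values sum to the score difference between the counterfactual and the factual instance, so each feature change receives an additive share of the total score change.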
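Finally, a hypothetical illustration of what a Greedy chart conveys, using made-up feature names and scores (matplotlib; every value below is invented for illustration): the cumulative prediction score after each greedily chosen feature change, rising until it crosses the decision threshold.

```python
import matplotlib.pyplot as plt

# Hypothetical greedy path: cumulative prediction scores after each
# greedily applied feature change (all names and numbers are made up).
steps = ["factual", "income", "age", "debt"]
scores = [0.31, 0.47, 0.58, 0.72]

plt.figure(figsize=(5, 3))
plt.plot(range(len(scores)), scores, marker="o")
plt.axhline(0.5, linestyle="--", color="gray", label="decision threshold")
plt.xticks(range(len(steps)), steps)
plt.ylabel("prediction score")
plt.title("Greedy chart (illustrative sketch)")
plt.legend()
plt.tight_layout()
plt.show()
```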
Related papers
- Less is More: One-shot Subgraph Reasoning on Large-scale Knowledge Graphs (arXiv, 2024-03-15)
We propose one-shot-subgraph link prediction to achieve efficient and adaptive prediction.
The design principle is that, instead of acting directly on the whole KG, the prediction procedure is decoupled into two steps.
We achieve improved efficiency and leading performance on five large-scale benchmarks.
- Adapting to Change: Robust Counterfactual Explanations in Dynamic Data Landscapes (arXiv, 2023-08-04)
We introduce a novel semi-supervised Graph Counterfactual Explainer (GCE) methodology, the Dynamic GRAph Counterfactual Explainer (DyGRACE).
It leverages initial knowledge about the data distribution to search for valid counterfactuals while avoiding using information from potentially outdated decision functions in subsequent time steps.
DyGRACE is quite effective and can act as a drift detector, identifying distributional drift based on differences in reconstruction errors between iterations.
- VCNet: A self-explaining model for realistic counterfactual generation (arXiv, 2022-12-21)
Counterfactual explanation is a class of methods for making local explanations of machine learning decisions.
We present VCNet (Variational Counter Net), a model architecture that combines a predictor and a counterfactual generator.
We show that VCNet is able both to generate predictions and to generate counterfactual explanations without having to solve another minimisation problem.
- Counterfactual Explanations for Support Vector Machine Models (arXiv, 2022-12-14)
We show how to find counterfactual explanations with the purpose of increasing model interpretability.
We also build a support vector machine model to predict whether law students will pass the Bar exam using protected features.
- CLEAR: Generative Counterfactual Explanations on Graphs (arXiv, 2022-10-16)
We study the problem of counterfactual explanation generation on graphs.
A few studies have explored counterfactual explanations on graphs, but many challenges of this problem are still not well-addressed.
We propose a novel framework CLEAR which aims to generate counterfactual explanations on graphs for graph-level prediction models.
- Reinforced Causal Explainer for Graph Neural Networks (arXiv, 2022-04-23)
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, the Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations and generalizes better to unseen graphs.
- GraphCoCo: Graph Complementary Contrastive Learning (arXiv, 2022-03-24)
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
- Towards Unifying Feature Attribution and Counterfactual Explanations: Different Means to the Same End (arXiv, 2020-11-10)
We present a method to generate feature attribution explanations from a set of counterfactual examples.
We show how counterfactual examples can be used to evaluate the goodness of an attribution-based explanation in terms of its necessity and sufficiency.
- Shapley Flow: A Graph-based Approach to Interpreting Model Predictions (arXiv, 2020-10-27)
Shapley Flow is a novel approach to interpreting machine learning models.
It considers the entire causal graph and assigns credit to edges instead of treating nodes as the fundamental unit of credit assignment.
- Gravitational Models Explain Shifts on Human Visual Attention (arXiv, 2020-09-15)
Visual attention refers to the human brain's ability to select relevant sensory information for preferential processing.
Various methods to estimate saliency have been proposed in the last three decades.
We propose a gravitational model (GRAV) to describe the attentional shifts.
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.