CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks
- URL: http://arxiv.org/abs/2102.03322v1
- Date: Fri, 5 Feb 2021 17:58:14 GMT
- Title: CF-GNNExplainer: Counterfactual Explanations for Graph Neural Networks
- Authors: Ana Lucic, Maartje ter Hoeve, Gabriele Tolomei, Maarten de Rijke,
Fabrizio Silvestri
- Abstract summary: Graph neural networks (GNNs) have shown increasing promise in real-world applications.
We propose CF-GNNExplainer: the first method for generating counterfactual explanations for GNNs.
- Score: 40.47070962945751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have shown increasing promise in real-world
applications, which has caused an increased interest in understanding their
predictions. However, existing methods for explaining predictions from GNNs do
not provide an opportunity for recourse: given a prediction for a particular
instance, we want to understand how the prediction can be changed. We propose
CF-GNNExplainer: the first method for generating counterfactual explanations
for GNNs, i.e., the minimal perturbations to the input graph data such that the
prediction changes. Using only edge deletions, we find that we are able to
generate counterfactual examples for the majority of instances across three
widely used datasets for GNN explanations, while removing fewer than 3 edges on
average, with at least 94% accuracy. This indicates that CF-GNNExplainer
primarily removes edges that are crucial for the original predictions,
resulting in minimal counterfactual examples.
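The search described in the abstract lends itself to a short sketch: learn a soft deletion mask over the adjacency matrix, push the model's prediction away from its original class, and penalise the number of edges removed. The code below is an illustrative sketch of that idea, not the authors' implementation; `gnn(x, adj)` is a hypothetical model mapping node features and a dense adjacency matrix to per-node class logits, and the loss weight `lam`, step budget, and learning rate are assumptions.

```python
import torch
import torch.nn.functional as F

def find_counterfactual(gnn, x, adj, node_idx, steps=500, lam=0.1, lr=0.1):
    """Search for a small set of edge deletions that flips the GNN's
    prediction for `node_idx`. Illustrative sketch only."""
    with torch.no_grad():
        orig_class = gnn(x, adj)[node_idx].argmax()

    # Real-valued mask over adjacency entries; the sigmoid keeps the soft
    # mask in (0, 1), where values near 0 act as edge deletions. Initialised
    # positive so the perturbed graph starts close to the original.
    mask = torch.full_like(adj, 3.0, requires_grad=True)
    opt = torch.optim.Adam([mask], lr=lr)

    for _ in range(steps):
        soft = torch.sigmoid(mask)
        logits = gnn(x, adj * soft)[node_idx]
        # Maximise the loss of the original class (i.e. move the prediction
        # away from it) while penalising the amount of soft deletion.
        flip = -F.cross_entropy(logits.unsqueeze(0), orig_class.unsqueeze(0))
        sparsity = (adj * (1.0 - soft)).sum()
        loss = flip + lam * sparsity
        opt.zero_grad()
        loss.backward()
        opt.step()

        # Binarise the mask and test whether the prediction has flipped.
        with torch.no_grad():
            cf_adj = adj * (torch.sigmoid(mask) >= 0.5).float()
            if gnn(x, cf_adj)[node_idx].argmax() != orig_class:
                deleted = ((adj > 0) & (cf_adj == 0)).nonzero()
                return cf_adj, deleted  # counterfactual graph + deleted edges
    return None, None  # no counterfactual found within the step budget
```

Because the soft mask multiplies the original adjacency matrix, entries that are zero stay zero: the perturbation can only delete edges, never add them, matching the edge-deletion restriction in the abstract.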
Related papers
- Explainable Graph Neural Networks Under Fire [69.15708723429307]
Graph neural networks (GNNs) usually lack interpretability due to their complex computational behavior and the abstract nature of graphs.
Most GNN explanation methods work in a post-hoc manner and provide explanations in the form of a small subset of important edges and/or nodes.
In this paper we demonstrate that these explanations unfortunately cannot be trusted, as common GNN explanation methods turn out to be highly susceptible to adversarial perturbations.
arXiv Detail & Related papers (2024-06-10T16:09:16Z)
- Incorporating Retrieval-based Causal Learning with Information Bottlenecks for Interpretable Graph Neural Networks [12.892400744247565]
We develop a novel interpretable causal GNN framework that incorporates retrieval-based causal learning with Graph Information Bottleneck (GIB) theory.
We achieve 32.71% higher precision on real-world explanation scenarios with diverse explanation types.
arXiv Detail & Related papers (2024-02-07T09:57:39Z)
- Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity [7.094238868711952]
Graph neural networks (GNNs) have various practical applications, such as drug discovery, recommendation engines, and chip design.
Counterfactual reasoning is used to make minimal changes to the input graph of a GNN in order to alter its prediction.
arXiv Detail & Related papers (2023-06-07T23:40:18Z)
- On Consistency in Graph Neural Network Interpretation [34.25952902469481]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed, but most of them formalize this task as a search for the minimal subgraph.
We propose a simple yet effective countermeasure by aligning embeddings.
arXiv Detail & Related papers (2022-05-27T02:58:07Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations, and has a better generalization ability to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows [67.23405590815602]
Graph Neural Networks (GNNs) are widely adopted in digital pathology.
Existing explainers discover an explanatory subgraph relevant to the prediction.
An explanatory subgraph should be not only necessary for prediction, but also sufficient to uncover the most predictive regions.
We propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
arXiv Detail & Related papers (2021-12-18T10:19:01Z)
- Explaining GNN over Evolving Graphs using Information Flow [12.33508497537769]
Graph neural networks (GNNs) are the current state of the art for many graph applications, and yet remain opaque to humans.
We propose an axiomatic attribution method to uniquely decompose the change in a prediction to paths on computation graphs.
We formulate a novel convex optimization problem to optimally select the paths that explain the prediction evolution.
arXiv Detail & Related papers (2021-11-19T04:29:38Z)
- Robust Counterfactual Explanations on Graph Neural Networks [42.91881080506145]
Massive deployment of Graph Neural Networks (GNNs) in high-stakes applications generates a strong demand for explanations that are robust to noise.
Most existing methods generate explanations by identifying a subgraph of an input graph that has a strong correlation with the prediction.
We propose a novel method to generate robust counterfactual explanations on GNNs by explicitly modelling the common decision logic of GNNs on similar input graphs.
arXiv Detail & Related papers (2021-07-08T19:50:00Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to existing work, PGExplainer has better generalization ability and can easily be applied in an inductive setting.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs that identifies unnecessary edges.
We show that a large proportion of edges can be dropped without deteriorating the model's performance; a generic sketch of this edge-masking idea follows the list.
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
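The differentiable edge-masking entry above pursues, in effect, the complementary objective to the counterfactual sketch earlier: instead of deleting edges until the prediction flips, one learns a mask that drops as many edges as possible while keeping the prediction intact. The sketch below illustrates that generic idea under the same assumed `gnn(x, adj)` interface; it is not the method of any particular paper in this list, and the penalty weight `lam` is an illustrative choice.

```python
import torch
import torch.nn.functional as F

def prune_unnecessary_edges(gnn, x, adj, node_idx, steps=500, lam=0.05, lr=0.1):
    """Learn a soft edge mask that removes as many edges as possible
    while preserving the GNN's original prediction for `node_idx`."""
    with torch.no_grad():
        orig_class = gnn(x, adj)[node_idx].argmax()

    # Positive initialisation keeps the masked graph close to the original.
    mask = torch.full_like(adj, 3.0, requires_grad=True)
    opt = torch.optim.Adam([mask], lr=lr)

    for _ in range(steps):
        soft = torch.sigmoid(mask)
        logits = gnn(x, adj * soft)[node_idx]
        # Keep the original prediction (standard cross-entropy) while
        # rewarding mask values that shrink towards zero.
        keep = F.cross_entropy(logits.unsqueeze(0), orig_class.unsqueeze(0))
        density = (adj * soft).sum()
        loss = keep + lam * density
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Edges whose mask falls below 0.5 are treated as unnecessary.
    with torch.no_grad():
        kept_adj = adj * (torch.sigmoid(mask) >= 0.5).float()
    return kept_adj  # sparsified graph; ideally the prediction is unchanged
```

Edges whose mask values fall below 0.5 after optimisation are the ones the model evidently did not need for this prediction, mirroring the observation that many edges can be dropped without hurting performance.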
This list is automatically generated from the titles and abstracts of the papers on this site.