Global Counterfactual Explainer for Graph Neural Networks
- URL: http://arxiv.org/abs/2210.11695v1
- Date: Fri, 21 Oct 2022 02:46:35 GMT
- Title: Global Counterfactual Explainer for Graph Neural Networks
- Authors: Mert Kosan, Zexi Huang, Sourav Medya, Sayan Ranu and Ambuj Singh
- Abstract summary: Graph neural networks (GNNs) find applications in various domains such as computational biology, natural language processing, and computer security.
There is an increasing need to explain GNN predictions since GNNs are black-box machine learning models.
Existing methods for counterfactual explanation of GNNs are limited to instance-specific local reasoning.
- Score: 8.243944711755617
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) find applications in various domains such as
computational biology, natural language processing, and computer security.
Owing to their popularity, there is an increasing need to explain GNN
predictions since GNNs are black-box machine learning models. One way to
address this is counterfactual reasoning where the objective is to change the
GNN prediction by minimal changes in the input graph. Existing methods for
counterfactual explanation of GNNs are limited to instance-specific local
reasoning. This approach has two major limitations: it cannot offer global
recourse policies, and it overloads human cognitive ability with too much
information. In this work, we study the global explainability of GNNs through
global counterfactual reasoning. Specifically, we want to find a small set of
representative counterfactual graphs that explain all input graphs. Towards
this goal, we propose GCFExplainer, a novel algorithm powered by
vertex-reinforced random walks on an edit map of graphs with a greedy summary.
Extensive experiments on real graph datasets show that the global explanation
from GCFExplainer provides important high-level insights into the model behavior
and achieves a 46.9% gain in recourse coverage and a 9.5% reduction in recourse
cost compared to the state-of-the-art local counterfactual explainers.
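To make the pipeline concrete, below is a minimal Python sketch of the three ingredients the abstract names: an edit map whose vertices are graphs one edge edit apart, a vertex-reinforced random walk that biases exploration toward frequently visited vertices, and a greedy summary that picks a few counterfactuals covering many inputs. This is an illustrative reading of the abstract, not the authors' code: predict stands in for the trained GNN classifier, the edge-set distance is a cheap proxy for graph edit distance, and all parameters are made up.

    # Hedged sketch of the GCFExplainer pipeline (illustrative, not the
    # authors' implementation). Assumes all graphs share one node set.
    import random
    from itertools import combinations

    import networkx as nx

    def edit_neighbors(g):
        """All graphs one edge edit (single addition or deletion) away from g."""
        out = []
        for u, v in combinations(g.nodes, 2):
            h = g.copy()
            if h.has_edge(u, v):
                h.remove_edge(u, v)
            else:
                h.add_edge(u, v)
            out.append(h)
        return out

    def edit_distance(g, h):
        """Cheap proxy for graph edit distance on a shared node set: the
        number of edges present in exactly one of the two graphs."""
        return len({frozenset(e) for e in g.edges()} ^ {frozenset(e) for e in h.edges()})

    def vertex_reinforced_walk(start, predict, desired=1, steps=200, seed=0):
        """Walk the edit map, weighting each neighbor by 1 + its visit count
        (the vertex reinforcement), and collect counterfactual candidates:
        graphs the classifier assigns to the desired class."""
        rng = random.Random(seed)
        visits, current, candidates = {}, start, []
        for _ in range(steps):
            nbrs = edit_neighbors(current)
            weights = [1 + visits.get(nx.weisfeiler_lehman_graph_hash(h), 0) for h in nbrs]
            current = rng.choices(nbrs, weights=weights, k=1)[0]
            key = nx.weisfeiler_lehman_graph_hash(current)
            visits[key] = visits.get(key, 0) + 1
            if predict(current) == desired:
                candidates.append(current)
        return candidates

    def greedy_summary(candidates, inputs, k=3, budget=2):
        """Greedily pick up to k counterfactuals, each time maximizing the
        number of still-uncovered inputs within the edit-distance budget
        (the recourse-coverage objective)."""
        covered, summary = set(), []
        for _ in range(k):
            gain = lambda c: sum(1 for i, g in enumerate(inputs)
                                 if i not in covered and edit_distance(g, c) <= budget)
            best = max(candidates, key=gain, default=None)
            if best is None or gain(best) == 0:
                break
            summary.append(best)
            covered |= {i for i, g in enumerate(inputs) if edit_distance(g, best) <= budget}
        return summary

With a toy classifier such as predict = lambda g: int(g.number_of_edges() > 5), the summary returned for a batch of small input graphs plays the role of the global explanation described above.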
Related papers
- Global Graph Counterfactual Explanation: A Subgraph Mapping Approach [54.42907350881448]
Graph Neural Networks (GNNs) have been widely deployed in various real-world applications.
Counterfactual explanation aims to find minimum perturbations on input graphs that change the GNN predictions.
We propose GlobalGCE, a novel global-level graph counterfactual explanation method.
arXiv Detail & Related papers (2024-10-25T21:39:05Z)
- Explainable Graph Neural Networks Under Fire [69.15708723429307]
Graph neural networks (GNNs) usually lack interpretability due to their complex computational behavior and the abstract nature of graphs.
Most GNN explanation methods work in a post-hoc manner and provide explanations in the form of a small subset of important edges and/or nodes.
In this paper, we demonstrate that these explanations unfortunately cannot be trusted, as common GNN explanation methods turn out to be highly susceptible to adversarial perturbations.
arXiv Detail & Related papers (2024-06-10T16:09:16Z)
- GNNInterpreter: A Probabilistic Generative Model-Level Explanation for Graph Neural Networks [25.94529851210956]
We propose GNNInterpreter, a model-agnostic, model-level explanation method for Graph Neural Networks (GNNs) that follow the message-passing scheme.
GNNInterpreter learns a probabilistic generative graph distribution that produces the most discriminative graph pattern the GNN tries to detect.
Compared to existing works, GNNInterpreter is more flexible and computationally efficient in generating explanation graphs with different types of node and edge features.
arXiv Detail & Related papers (2022-09-15T07:45:35Z)
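As a hedged illustration of the generative, model-level idea behind GNNInterpreter (not the authors' implementation, which optimizes a continuous relaxation rather than the REINFORCE-style estimator used here), one can learn Bernoulli edge probabilities so that sampled graphs maximize a stand-in class score:

    # Illustrative sketch: learn edge probabilities whose samples maximize a
    # class score. `class_score` stands in for the trained GNN's logit for
    # the target class; all hyperparameters are made up.
    import numpy as np

    def learn_edge_probs(n_nodes, class_score, steps=500, lr=0.05, samples=16, seed=0):
        rng = np.random.default_rng(seed)
        theta = np.zeros((n_nodes, n_nodes))   # one logit per potential edge
        iu = np.triu_indices(n_nodes, k=1)     # undirected graph: upper triangle
        for _ in range(steps):
            p = 1.0 / (1.0 + np.exp(-theta[iu]))   # current edge probabilities
            grad = np.zeros_like(p)
            for _ in range(samples):
                edges = rng.random(p.shape) < p     # sample one graph
                adj = np.zeros((n_nodes, n_nodes))
                adj[iu] = edges
                adj = adj + adj.T                   # symmetric adjacency matrix
                # Score-function (REINFORCE) estimate of d E[score] / d theta.
                grad += class_score(adj) * (edges - p)
            theta[iu] += lr * grad / samples
        return 1.0 / (1.0 + np.exp(-theta[iu])), iu

Reading off the highest-probability edges then gives the discriminative pattern; for example, with class_score = lambda adj: adj.sum(), the learned probabilities drift toward the complete graph.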
- MentorGNN: Deriving Curriculum for Pre-Training GNNs [61.97574489259085]
We propose an end-to-end model named MentorGNN that aims to supervise the pre-training process of GNNs across graphs.
We shed new light on the problem of domain adaptation on relational data (i.e., graphs) by deriving a natural and interpretable upper bound on the generalization error of the pre-trained GNNs.
arXiv Detail & Related papers (2022-08-21T15:12:08Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted performance on many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to existing work, PGExplainer has better generalization ability and can easily be applied in an inductive setting.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
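A hedged PyTorch sketch of the inductive idea: a single MLP, shared across all instances, maps each edge's endpoint embeddings (taken from the trained GNN) to an importance score, so unseen graphs can be explained without per-instance optimization. The class and variable names are illustrative, not from the paper's code.

    import torch
    import torch.nn as nn

    class EdgeMaskPredictor(nn.Module):
        """Shared explainer network: endpoint embeddings -> edge importance."""
        def __init__(self, emb_dim, hidden=64):
            super().__init__()
            self.mlp = nn.Sequential(
                nn.Linear(2 * emb_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1)
            )

        def forward(self, node_emb, edge_index):
            # Concatenate the embeddings of the two endpoints of every edge.
            src, dst = edge_index
            pair = torch.cat([node_emb[src], node_emb[dst]], dim=-1)
            return torch.sigmoid(self.mlp(pair)).squeeze(-1)  # scores in (0, 1)

    # Example shapes: 10 nodes with 32-dim embeddings, 4 edges.
    scores = EdgeMaskPredictor(emb_dim=32)(
        torch.randn(10, 32), torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
    )

Because the MLP is trained once across many instances, it can score the edges of graphs it never saw during explainer training, which is what the inductive claim in the summary refers to.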
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
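As a hedged, much-simplified stand-in for XGNN's strategy (the paper trains a graph generator with reinforcement learning; the greedy loop below only illustrates the generate-to-maximize-prediction idea), one can grow a graph edge by edge, keeping whichever addition most increases a stand-in class score:

    # Greedy illustration of model-level explanation by graph generation.
    # `class_score` stands in for the trained GNN's confidence in the target
    # class; XGNN itself would learn a generator policy instead.
    from itertools import combinations

    import networkx as nx

    def grow_explanation_graph(class_score, n_nodes=6, max_edges=8):
        """Grow a graph edge by edge, greedily maximizing the class score."""
        def score_with(g, e):
            h = g.copy()
            h.add_edge(*e)
            return class_score(h)

        g = nx.empty_graph(n_nodes)
        for _ in range(max_edges):
            candidates = [e for e in combinations(g.nodes, 2) if not g.has_edge(*e)]
            if not candidates:
                break
            best = max(candidates, key=lambda e: score_with(g, e))
            if score_with(g, best) <= class_score(g):
                break  # no single edge addition improves the score
            g.add_edge(*best)
        return g

With class_score = lambda g: g.degree(0), for example, the loop grows a star centered on node 0, mimicking how a generated graph exposes the structure the scored class responds to.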