On Consistency in Graph Neural Network Interpretation
- URL: http://arxiv.org/abs/2205.13733v1
- Date: Fri, 27 May 2022 02:58:07 GMT
- Title: On Consistency in Graph Neural Network Interpretation
- Authors: Tianxiang Zhao, Dongsheng Luo, Xiang Zhang, Suhang Wang
- Abstract summary: Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed, but most of them formalize this task as a search for the minimal subgraph that preserves the original prediction.
We propose a simple yet effective countermeasure by aligning embeddings.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Uncovering rationales behind predictions of graph neural networks (GNNs) has
received increasing attention over recent years. Instance-level GNN explanation
aims to discover critical input elements, like nodes or edges, that the target
GNN relies upon for making predictions. These identified sub-structures can
provide interpretations of GNN's behavior. Though various algorithms are
proposed, most of them formalize this task by searching the minimal subgraph
which can preserve the original predictions. An inductive bias is deep-rooted in
this framework: identical outputs are taken to imply identical rationales, yet the
same output cannot guarantee that two inputs are processed under the same
rationale. Consequently, these methods risk producing spurious explanations and
fail to provide consistent explanations. Applying them to explain weakly
performing GNNs further amplifies these issues. To
address the issues, we propose to obtain more faithful and consistent
explanations of GNNs. After closely examining the predictions of GNNs from the
causality perspective, we attribute spurious explanations to two typical
reasons: confounding effect of latent variables like distribution shift, and
causal factors distinct from the original input. Motivated by the observation
that both confounding effects and diverse causal rationales are encoded in
internal representations, we propose a simple yet effective countermeasure by
aligning embeddings. This new objective can be incorporated into existing GNN
explanation algorithms with minimal effort. We implement both a simplified version
based on absolute distance and a distribution-aware version based on anchors.
Experiments on $5$ datasets validate its effectiveness, and theoretical
analysis shows that it is in effect optimizing a more faithful explanation
objective by design, which further justifies the proposed approach.
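The "simplified version based on absolute distance" can be sketched as follows. This is an illustrative toy, not the paper's implementation: the function names, the use of plain Python lists in place of GNN hidden representations, and the weighting parameter `lam` are all assumptions made for the example.

```python
import math

def alignment_penalty(z_full, z_sub):
    """Simplified alignment term: Euclidean distance between the GNN's
    internal embedding of the full input graph (z_full) and of the candidate
    explanatory subgraph (z_sub). Plain lists of floats stand in for the
    model's hidden-layer representations."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(z_full, z_sub)))

def explanation_objective(pred_preserving_loss, z_full, z_sub, lam=1.0):
    # Total objective: the usual prediction-preservation loss plus the
    # alignment penalty, which discourages subgraphs that reproduce the
    # output through a different internal rationale.
    return pred_preserving_loss + lam * alignment_penalty(z_full, z_sub)
```

The key design point is that the penalty is computed on internal representations rather than outputs, since the abstract observes that both confounding effects and diverse causal rationales are encoded in the embeddings.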
Related papers
- Explainable Graph Neural Networks Under Fire [69.15708723429307]
Graph neural networks (GNNs) usually lack interpretability due to their complex computational behavior and the abstract nature of graphs.
Most GNN explanation methods work in a post-hoc manner and provide explanations in the form of a small subset of important edges and/or nodes.
In this paper we demonstrate that these explanations can unfortunately not be trusted, as common GNN explanation methods turn out to be highly susceptible to adversarial perturbations.
arXiv Detail & Related papers (2024-06-10T16:09:16Z)
- Incorporating Retrieval-based Causal Learning with Information Bottlenecks for Interpretable Graph Neural Networks [12.892400744247565]
We develop a novel interpretable causal GNN framework that incorporates retrieval-based causal learning with Graph Information Bottleneck (GIB) theory.
We achieve 32.71% higher precision on real-world explanation scenarios with diverse explanation types.
arXiv Detail & Related papers (2024-02-07T09:57:39Z)
- Empowering Counterfactual Reasoning over Graph Neural Networks through Inductivity [7.094238868711952]
Graph neural networks (GNNs) have various practical applications, such as drug discovery, recommendation engines, and chip design.
Counterfactual reasoning is used to make minimal changes to the input graph of a GNN in order to alter its prediction.
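The counterfactual procedure described above can be illustrated with a minimal brute-force sketch over single-edge deletions. Everything here is a stand-in assumption: `predict` is a toy classifier on the edge set (checking for a triangle), not an actual GNN, and the search considers only single-edge changes.

```python
def predict(edges):
    # Toy "GNN" stand-in: predicts class 1 iff the triangle 0-1-2 is present.
    e = set(map(frozenset, edges))
    return int({frozenset((0, 1)), frozenset((1, 2)), frozenset((0, 2))} <= e)

def minimal_counterfactual(edges):
    """Return the first single-edge deletion that flips the model's
    prediction, i.e. a minimal counterfactual change to the input graph;
    None if no single deletion suffices."""
    base = predict(edges)
    for i, edge in enumerate(edges):
        perturbed = edges[:i] + edges[i + 1:]
        if predict(perturbed) != base:
            return edge
    return None
```

Real counterfactual explainers search a far larger perturbation space with learned or heuristic guidance, but the objective is the same: the smallest input change that alters the prediction.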
arXiv Detail & Related papers (2023-06-07T23:40:18Z)
- Faithful and Consistent Graph Neural Network Explanations with Rationale Alignment [38.66324833510402]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed; most of them formalize this task as a search for the minimal subgraph that preserves the original predictions.
Several subgraphs can result in the same or similar outputs as the original graphs.
Applying them to explain weakly performing GNNs would further amplify these issues.
arXiv Detail & Related papers (2023-01-07T06:33:35Z)
- On Structural Explanation of Bias in Graph Neural Networks [40.323880315453906]
Graph Neural Networks (GNNs) have shown satisfying performance in various graph analytical problems.
GNNs could yield biased results against certain demographic subgroups.
We study a novel research problem of structural explanation of bias in GNNs.
arXiv Detail & Related papers (2022-06-24T06:49:21Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations, and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- Deconfounding to Explanation Evaluation in Graph Neural Networks [136.73451468551656]
We argue that a distribution shift exists between the full graph and the subgraph, causing the out-of-distribution problem.
We propose Deconfounded Subgraph Evaluation (DSE) which assesses the causal effect of an explanatory subgraph on the model prediction.
arXiv Detail & Related papers (2022-01-21T18:05:00Z)
- ProtGNN: Towards Self-Explaining Graph Neural Networks [12.789013658551454]
We propose Prototype Graph Neural Network (ProtGNN), which combines prototype learning with GNNs.
ProtGNN and ProtGNN+ can provide inherent interpretability while achieving accuracy on par with the non-interpretable counterparts.
arXiv Detail & Related papers (2021-12-02T01:16:29Z)
- Generalizing Graph Neural Networks on Out-Of-Distribution Graphs [51.33152272781324]
Most Graph Neural Networks (GNNs) are designed without considering distribution shifts between training and testing graphs.
In such a setting, GNNs tend to exploit subtle statistical correlations in the training set for prediction, even when those correlations are spurious.
We propose a general causal representation framework, called StableGNN, to eliminate the impact of spurious correlations.
arXiv Detail & Related papers (2021-11-20T18:57:18Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.