Self-Explainable Graph Neural Networks for Link Prediction
- URL: http://arxiv.org/abs/2305.12578v1
- Date: Sun, 21 May 2023 21:57:32 GMT
- Title: Self-Explainable Graph Neural Networks for Link Prediction
- Authors: Huaisheng Zhu, Dongsheng Luo, Xianfeng Tang, Junjie Xu, Hui Liu,
Suhang Wang
- Abstract summary: Graph Neural Networks (GNNs) have achieved state-of-the-art performance for link prediction.
GNNs suffer from poor interpretability, which limits their adoption in critical scenarios.
We propose a new framework that finds $K$ important neighbors of a node to learn pair-specific representations for links from this node to other nodes.
- Score: 30.41648521030615
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) have achieved state-of-the-art performance for
link prediction. However, GNNs suffer from poor interpretability, which limits
their adoption in critical scenarios that require knowing why certain links
are predicted. Despite various methods proposed for the explainability of GNNs,
most of them are post-hoc explainers developed for explaining node
classification. Directly adopting existing post-hoc explainers for explaining
link prediction is sub-optimal because: (i) post-hoc explainers usually adopt
another strategy or model to explain a target model, which could misinterpret
the target model; and (ii) GNN explainers for node classification identify
crucial subgraphs around each node for the explanation; while for link
prediction, one needs to explain the prediction for each pair of nodes based on
graph structure and node attributes. Therefore, in this paper, we study a novel
problem of self-explainable GNNs for link prediction, which can simultaneously
give accurate predictions and explanations. Concretely, we propose a new
framework that finds $K$ important neighbors of a node to learn pair-specific
representations for links from this node to other nodes. These $K$ different
neighbors represent important characteristics of the node and model various
factors behind its links. Thus, these $K$ neighbors can provide
explanations for the existence of links. Experiments on both synthetic and
real-world datasets verify the effectiveness of the proposed framework for link
prediction and explanation.
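The sketch below illustrates, at a high level, the kind of pair-specific scoring the abstract describes: pick the $K$ neighbors of a source node that are most relevant to a candidate target node, aggregate them into a pair-specific representation, and use that representation both to score the link and to serve as the explanation. Everything here (function names, the dot-product relevance, the toy graph) is an illustrative assumption, not the authors' implementation.

```python
# Minimal sketch (assumed, not the authors' architecture): explain and score a
# candidate link (u, v) via the K neighbors of u that are most relevant to v.
import torch
import torch.nn.functional as F


def pair_specific_link_score(h, adj_list, u, v, K=3):
    """h: [num_nodes, dim] node embeddings from any GNN encoder (assumed given).
    adj_list: dict mapping each node id to a list of its neighbor ids.
    Returns (link probability, list of the K neighbors used as the explanation)."""
    neighbors = torch.tensor(adj_list[u])
    # Relevance of each neighbor of u to the candidate target v (dot product).
    relevance = h[neighbors] @ h[v]
    k = min(K, neighbors.numel())
    top_rel, top_idx = relevance.topk(k)
    chosen = neighbors[top_idx]                      # the K explanatory neighbors
    weights = F.softmax(top_rel, dim=0)              # soft weights over them
    z_u = (weights.unsqueeze(1) * h[chosen]).sum(0)  # pair-specific representation of u
    score = torch.sigmoid(z_u @ h[v])                # probability of the link (u, v)
    return score, chosen.tolist()


# Toy usage with random embeddings and a small star graph.
h = torch.randn(5, 8)
adj_list = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
prob, explaining_neighbors = pair_specific_link_score(h, adj_list, u=0, v=1, K=2)
print(float(prob), explaining_neighbors)
```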
Related papers
- Explainable Graph Neural Networks Under Fire [69.15708723429307]
Graph neural networks (GNNs) usually lack interpretability due to their complex computational behavior and the abstract nature of graphs.
Most GNN explanation methods work in a post-hoc manner and provide explanations in the form of a small subset of important edges and/or nodes.
In this paper we demonstrate that these explanations can unfortunately not be trusted, as common GNN explanation methods turn out to be highly susceptible to adversarial perturbations.
arXiv Detail & Related papers (2024-06-10T16:09:16Z)
- GANExplainer: GAN-based Graph Neural Networks Explainer [5.641321839562139]
It is critical to explain why a graph neural network (GNN) makes particular predictions if they are to be trusted in many applications.
We propose GANExplainer, based on the Generative Adversarial Network (GAN) architecture.
GANExplainer improves explanation accuracy by up to 35% compared to its alternatives.
arXiv Detail & Related papers (2022-12-30T23:11:24Z)
- Towards Prototype-Based Self-Explainable Graph Neural Network [37.90997236795843]
We study a novel problem of learning prototype-based self-explainable GNNs that can simultaneously give accurate predictions and prototype-based explanations on predictions.
The learned prototypes are also used to simultaneously make predictions for a test instance and provide instance-level explanations.
arXiv Detail & Related papers (2022-10-05T00:47:42Z)
- On Consistency in Graph Neural Network Interpretation [34.25952902469481]
Instance-level GNN explanation aims to discover critical input elements, like nodes or edges, that the target GNN relies upon for making predictions.
Various algorithms have been proposed, but most of them formalize this task as searching for a minimal subgraph.
We propose a simple yet effective countermeasure by aligning embeddings.
arXiv Detail & Related papers (2022-05-27T02:58:07Z)
- Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- Towards Self-Explainable Graph Neural Network [24.18369781999988]
Graph Neural Networks (GNNs) generalize deep neural networks to graph-structured data.
GNNs lack explainability, which limits their adoption in scenarios that demand the transparency of models.
We propose a new framework that finds the $K$-nearest labeled nodes for each unlabeled node to give explainable node classification.
arXiv Detail & Related papers (2021-08-26T22:45:11Z)
- Jointly Attacking Graph Neural Network and its Explanations [50.231829335996814]
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
- Interpreting Graph Neural Networks for NLP With Differentiable Edge Masking [63.49779304362376]
Graph neural networks (GNNs) have become a popular approach to integrating structural inductive biases into NLP models.
We introduce a post-hoc method for interpreting the predictions of GNNs which identifies unnecessary edges.
We show that we can drop a large proportion of edges without deteriorating the performance of the model.
arXiv Detail & Related papers (2020-10-01T17:51:19Z)
- Bilinear Graph Neural Network with Neighbor Interactions [106.80781016591577]
Graph Neural Network (GNN) is a powerful model to learn representations and make predictions on graph data.
We propose a new graph convolution operator, which augments the weighted sum with pairwise interactions of the representations of neighbor nodes.
We term this framework Bilinear Graph Neural Network (BGNN), which improves GNN representation ability with bilinear interactions between neighbor nodes (see the sketch after this entry).
arXiv Detail & Related papers (2020-02-10T06:43:38Z)
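As a rough illustration of the bilinear-aggregation idea above (not the exact BGNN operator), the sketch below augments the usual averaged neighbor representation with element-wise products over pairs of neighbors; all names and the uniform weighting are assumptions.

```python
# Rough sketch of bilinear neighbor aggregation (assumed, not the exact BGNN
# operator): add element-wise products over pairs of neighbors to the usual
# averaged (weighted-sum) term.
import itertools
import torch


def bilinear_neighbor_aggregation(h_neighbors, W):
    """h_neighbors: [deg, in_dim] representations of one node's neighbors.
    W: [in_dim, out_dim] shared linear transform (assumed; no bias for brevity)."""
    projected = h_neighbors @ W                        # [deg, out_dim]
    linear_part = projected.mean(dim=0)                # standard aggregation term
    pairs = list(itertools.combinations(range(projected.size(0)), 2))
    if not pairs:                                      # a single neighbor has no pairs
        return linear_part
    # Pairwise interactions: element-wise product for every pair of neighbors.
    interactions = torch.stack([projected[i] * projected[j] for i, j in pairs])
    bilinear_part = interactions.mean(dim=0)
    return linear_part + bilinear_part                 # combine both signals


# Toy usage: 4 neighbors with 8-dim features projected to 16 dims.
out = bilinear_neighbor_aggregation(torch.randn(4, 8), torch.randn(8, 16))
print(out.shape)  # torch.Size([16])
```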