Parameterized Explainer for Graph Neural Network
- URL: http://arxiv.org/abs/2011.04573v1
- Date: Mon, 9 Nov 2020 17:15:03 GMT
- Title: Parameterized Explainer for Graph Neural Network
- Authors: Dongsheng Luo, Wei Cheng, Dongkuan Xu, Wenchao Yu, Bo Zong, Haifeng
Chen, Xiang Zhang
- Abstract summary: We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
- Score: 49.79917262156429
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite recent progress in Graph Neural Networks (GNNs), explaining
predictions made by GNNs remains a challenging open problem. The leading method
independently addresses the local explanations (i.e., the important subgraph
structure and node features) to interpret why a GNN model makes its prediction
for a single instance, e.g., a node or a graph. As a result, each generated
explanation is painstakingly customized to one instance. Such per-instance
explanations are insufficient to provide a global understanding of the learned
GNN model, which limits generalizability and prevents use in the inductive
setting. Moreover, because it is designed to explain a single instance, the
leading method cannot naturally explain a set of instances (e.g., all graphs of
a given class). In this study, we address these key challenges and propose
PGExplainer, a parameterized explainer for GNNs. PGExplainer adopts a deep
neural network to parameterize the generation process of explanations, which
makes PGExplainer a natural approach to explaining multiple instances
collectively. Compared to existing work, PGExplainer generalizes better and is
easily applied in the inductive setting. Experiments on both synthetic and
real-life datasets show highly competitive performance, with up to 24.7%
relative improvement in AUC on explaining graph classification over the leading
baseline.
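To make the core idea of the abstract concrete, the sketch below scores each edge of a toy graph with one small MLP shared across all edges and keeps the highest-scoring edges as the explanation subgraph. Everything here is hypothetical (the embeddings, the randomly initialised weights standing in for a trained explainer); it is a minimal illustration of the parameterization idea, not the authors' implementation. Because the scoring network is shared, the same trained explainer can be applied to unseen instances, which is what enables the inductive setting.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mlp(features, w1, b1, w2, b2):
    # One hidden ReLU layer, scalar output: the logit for one edge.
    hidden = [max(0.0, sum(f * w for f, w in zip(features, row)) + b)
              for row, b in zip(w1, b1)]
    return sum(h * w for h, w in zip(hidden, w2)) + b2

def explain_edges(node_emb, edges, w1, b1, w2, b2, top_k=2):
    """Score every edge by feeding the concatenated endpoint embeddings
    through one shared MLP; keep the top_k edges as the explanation."""
    scored = [((u, v), sigmoid(mlp(node_emb[u] + node_emb[v], w1, b1, w2, b2)))
              for u, v in edges]
    scored.sort(key=lambda p: p[1], reverse=True)
    return scored[:top_k]

# Toy graph: 4 nodes with hypothetical 2-d embeddings.
node_emb = {0: [1.0, 0.0], 1: [0.9, 0.1], 2: [0.0, 1.0], 3: [0.1, 0.9]}
edges = [(0, 1), (1, 2), (2, 3), (0, 3)]

# Randomly initialised weights stand in for a trained explainer network.
dim_in, dim_h = 4, 3
w1 = [[random.uniform(-1, 1) for _ in range(dim_in)] for _ in range(dim_h)]
b1 = [0.0] * dim_h
w2 = [random.uniform(-1, 1) for _ in range(dim_h)]
b2 = 0.0

print(explain_edges(node_emb, edges, w1, b1, w2, b2, top_k=2))
```

In the paper's setting the MLP is trained so that the masked subgraph preserves the GNN's prediction; here the weights are random purely to keep the sketch self-contained.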
Related papers
- Explainable Graph Neural Networks Under Fire [69.15708723429307]
Graph neural networks (GNNs) usually lack interpretability due to their complex computational behavior and the abstract nature of graphs.
Most GNN explanation methods work in a post-hoc manner and provide explanations in the form of a small subset of important edges and/or nodes.
In this paper we demonstrate that these explanations can unfortunately not be trusted, as common GNN explanation methods turn out to be highly susceptible to adversarial perturbations.
arXiv Detail & Related papers (2024-06-10T16:09:16Z) - View-based Explanations for Graph Neural Networks [27.19300566616961]
We propose GVEX, a novel paradigm that generates Graph Views for EXplanation.
We show that this strategy provides an approximation ratio of 1/2.
Our second algorithm performs a single pass over an input node stream in batches to incrementally maintain explanation views.
arXiv Detail & Related papers (2024-01-04T06:20:24Z) - Toward Multiple Specialty Learners for Explaining GNNs via Online
Knowledge Distillation [0.17842332554022688]
Graph Neural Networks (GNNs) have become increasingly ubiquitous in numerous applications and systems, necessitating explanations of their predictions.
We propose a novel GNN explanation framework named SCALE, which is general and fast for explaining predictions.
In training, a black-box GNN model guides learners based on an online knowledge distillation paradigm.
Specifically, edge masking and random walk with restart procedures are executed to provide structural explanations for graph-level and node-level predictions.
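The random-walk-with-restart procedure mentioned above can be sketched in a few lines: power-iterate p <- (1 - c) * W^T p + c * e_seed, where W is the row-normalised adjacency matrix, so each node's score reflects its proximity to the seed node. The toy graph below is hypothetical, and this is only an illustration of the generic procedure, not the SCALE implementation.

```python
def rwr(adj, seed, restart=0.15, iters=100):
    """Random walk with restart via power iteration.
    Assumes every node has at least one neighbour (no dangling nodes)."""
    n = len(adj)
    deg = [sum(row) for row in adj]
    p = [0.0] * n
    p[seed] = 1.0
    for _ in range(iters):
        nxt = [0.0] * n
        for u in range(n):
            # Spread (1 - restart) of node u's mass evenly over its neighbours.
            share = (1.0 - restart) * p[u] / deg[u]
            for v in range(n):
                if adj[u][v]:
                    nxt[v] += share
        # Teleport the remaining mass back to the seed node.
        nxt[seed] += restart
        p = nxt
    return p

# Toy path graph 0-1-2-3: scores fall off with distance from the seed.
adj = [[0, 1, 0, 0],
       [1, 0, 1, 0],
       [0, 1, 0, 1],
       [0, 0, 1, 0]]
print(rwr(adj, seed=0))
```

Nodes with high RWR score relative to the node being explained are natural candidates for a structural explanation.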
arXiv Detail & Related papers (2022-10-20T08:44:57Z) - Explainability in subgraphs-enhanced Graph Neural Networks [12.526174412246107]
Subgraphs-enhanced Graph Neural Networks (SGNNs) have been introduced to enhance the expressive power of GNNs.
In this work, we adapt PGExplainer, one of the most recent explainers for GNNs, to SGNNs.
We show that our framework is successful in explaining the decision process of an SGNN on graph classification tasks.
arXiv Detail & Related papers (2022-09-16T13:39:10Z) - Reinforced Causal Explainer for Graph Neural Networks [112.57265240212001]
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, the Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z) - Task-Agnostic Graph Explanations [50.17442349253348]
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
arXiv Detail & Related papers (2022-02-16T21:11:47Z) - A Meta-Learning Approach for Training Explainable Graph Neural Networks [10.11960004698409]
We propose a meta-learning framework for improving the level of explainability of a GNN directly at training time.
Our framework jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms.
Our model-agnostic approach can improve the explanations produced for different GNN architectures and use any instance-based explainer to drive this process.
arXiv Detail & Related papers (2021-09-20T11:09:10Z) - Towards Self-Explainable Graph Neural Network [24.18369781999988]
Graph Neural Networks (GNNs) generalize the deep neural networks to graph-structured data.
GNNs lack explainability, which limits their adoption in scenarios that demand the transparency of models.
We propose a new framework which can find $K$-nearest labeled nodes for each unlabeled node to give explainable node classification.
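The K-nearest-labeled-nodes idea can be sketched as follows: classify an unlabeled node by the labels of its nearest labeled neighbours in embedding space, and return those neighbours themselves as the explanation. The embeddings and labels below are hypothetical, and this is an illustrative sketch rather than the paper's actual framework.

```python
import math

def knn_explain(query_emb, labeled, k=3):
    """Predict a label for query_emb from its k nearest labeled nodes;
    the neighbours that cast the votes double as the explanation."""
    dist = lambda a, b: math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    ranked = sorted(labeled, key=lambda item: dist(query_emb, item[0]))
    top = ranked[:k]
    # Majority vote over the neighbours' labels.
    votes = {}
    for _, label in top:
        votes[label] = votes.get(label, 0) + 1
    prediction = max(votes, key=votes.get)
    return prediction, top

# Hypothetical 2-d node embeddings with class labels.
labeled = [([0.0, 0.0], "A"), ([0.1, 0.2], "A"),
           ([1.0, 1.0], "B"), ([0.9, 1.1], "B")]
pred, neighbours = knn_explain([0.05, 0.1], labeled, k=3)
print(pred, neighbours)
```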
arXiv Detail & Related papers (2021-08-26T22:45:11Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient because they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning techniques.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.