Toward Multiple Specialty Learners for Explaining GNNs via Online
Knowledge Distillation
- URL: http://arxiv.org/abs/2210.11094v1
- Date: Thu, 20 Oct 2022 08:44:57 GMT
- Authors: Tien-Cuong Bui, Van-Duc Le, Wen-syan Li, Sang Kyun Cha
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Graph Neural Networks (GNNs) have become increasingly ubiquitous in numerous
applications and systems, necessitating explanations of their predictions,
especially when making critical decisions. However, explaining GNNs is
challenging due to the complexity of graph data and model execution. Despite
additional computational costs, post-hoc explanation approaches have been
widely adopted due to the generality of their architectures. Intrinsically
interpretable models provide instant explanations but are usually
model-specific and can explain only particular GNNs. Therefore, we propose a
novel GNN explanation framework named SCALE, which is general and fast for
explaining predictions. SCALE trains multiple specialty learners to explain
GNNs since constructing one powerful explainer to examine attributions of
interactions in input graphs is complicated. In training, a black-box GNN model
guides learners based on an online knowledge distillation paradigm. In the
explanation phase, explanations of predictions are provided by multiple
explainers corresponding to trained learners. Specifically, edge masking and
random walk with restart procedures are executed to provide structural
explanations for graph-level and node-level predictions, respectively. A
feature attribution module provides overall summaries and instance-level
feature contributions. We compare SCALE with state-of-the-art baselines via
quantitative and qualitative experiments to prove its explanation correctness
and execution performance. We also conduct a series of ablation studies to
understand the strengths and weaknesses of the proposed framework.
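The abstract's node-level procedure relies on random walk with restart (RWR), which scores how relevant each node in the neighborhood is to a target (seed) node. The following is an illustrative sketch of a generic RWR computation, not the authors' implementation; the restart probability `alpha`, tolerance, and the toy path graph are assumptions for demonstration only.

```python
import numpy as np

def rwr_scores(adj, seed, alpha=0.15, tol=1e-9, max_iter=1000):
    """Random walk with restart: steady-state visit probabilities
    starting from a seed node, usable as node-relevance scores."""
    n = adj.shape[0]
    # Column-normalize the adjacency matrix into a transition matrix.
    col_sums = adj.sum(axis=0)
    col_sums[col_sums == 0] = 1.0          # guard against isolated nodes
    P = adj / col_sums
    r = np.zeros(n)
    r[seed] = 1.0                          # restart distribution
    p = r.copy()
    for _ in range(max_iter):
        # With probability alpha the walker jumps back to the seed.
        p_next = (1 - alpha) * (P @ p) + alpha * r
        if np.abs(p_next - p).sum() < tol:
            break
        p = p_next
    return p

# Toy path graph 0-1-2-3: probability mass concentrates near the seed.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
scores = rwr_scores(A, seed=0)
```

In an explanation setting, the highest-scoring nodes around the seed form a candidate structural explanation for that node's prediction; how SCALE integrates these scores with its trained learners is described in the paper itself.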
Related papers
- SES: Bridging the Gap Between Explainability and Prediction of Graph Neural Networks
We propose a self-explained and self-supervised graph neural network (SES) to bridge the gap between explainability and prediction.
SES comprises two processes: explainable training and enhanced predictive learning.
arXiv Detail & Related papers (2024-07-16T03:46:57Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- INGREX: An Interactive Explanation Framework for Graph Neural Networks
Graph Neural Networks (GNNs) are widely used in many modern applications, necessitating explanations for their decisions.
We introduce INGREX, an interactive explanation framework for GNNs designed to aid users in comprehending model predictions.
arXiv Detail & Related papers (2022-11-03T01:47:33Z)
- PGX: A Multi-level GNN Explanation Framework Based on Separate Knowledge Distillation Processes
We propose a multi-level GNN explanation framework based on the observation that a GNN is a multimodal learning process over multiple components of graph data.
The complexity of the original problem is relaxed by breaking it into multiple sub-parts represented as a hierarchical structure.
We also aim for personalized explanations as the framework can generate different results based on user preferences.
arXiv Detail & Related papers (2022-08-05T10:14:48Z)
- Reinforced Causal Explainer for Graph Neural Networks
Explainability is crucial for probing graph neural networks (GNNs).
We propose a reinforcement learning agent, Reinforced Causal Explainer (RC-Explainer).
RC-Explainer generates faithful and concise explanations and generalizes better to unseen graphs.
arXiv Detail & Related papers (2022-04-23T09:13:25Z)
- Task-Agnostic Graph Explanations
Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
arXiv Detail & Related papers (2022-02-16T21:11:47Z)
- Towards the Explanation of Graph Neural Networks in Digital Pathology with Information Flows
Graph Neural Networks (GNNs) are widely adopted in digital pathology.
Existing explainers discover an explanatory subgraph relevant to the prediction.
An explanatory subgraph should be not only necessary for prediction, but also sufficient to uncover the most predictive regions.
We propose IFEXPLAINER, which generates a necessary and sufficient explanation for GNNs.
arXiv Detail & Related papers (2021-12-18T10:19:01Z)
- Generative Causal Explanations for Graph Neural Networks
Gem is a model-agnostic approach for providing interpretable explanations for any GNN on various graph learning tasks.
It achieves a relative increase in explanation accuracy of up to 30% and speeds up the explanation process by up to 110x compared to its state-of-the-art alternatives.
arXiv Detail & Related papers (2021-04-14T06:22:21Z)
- Parameterized Explainer for Graph Neural Network
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to existing work, PGExplainer has better generalization ability and can easily be utilized in an inductive setting.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.