Task-Agnostic Graph Explanations
- URL: http://arxiv.org/abs/2202.08335v1
- Date: Wed, 16 Feb 2022 21:11:47 GMT
- Title: Task-Agnostic Graph Explanations
- Authors: Yaochen Xie, Sumeet Katariya, Xianfeng Tang, Edward Huang, Nikhil Rao,
Karthik Subbian, Shuiwang Ji
- Abstract summary: Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph structured data.
Existing learning-based GNN explanation approaches are task-specific in training.
We propose a Task-Agnostic GNN Explainer (TAGE) trained under self-supervision with no knowledge of downstream tasks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have emerged as powerful tools to encode graph
structured data. Due to their broad applications, there is an increasing need
to develop tools to explain how GNNs make decisions given graph structured
data. Existing learning-based GNN explanation approaches are task-specific in
training and hence suffer from crucial drawbacks. Specifically, they are
incapable of producing explanations for a multitask prediction model with a
single explainer. They are also unable to provide explanations in cases where
the GNN is trained in a self-supervised manner, and the resulting
representations are used in future downstream tasks. To address these
limitations, we propose a Task-Agnostic GNN Explainer (TAGE) trained under
self-supervision with no knowledge of downstream tasks. TAGE enables the
explanation of GNN embedding models without downstream tasks and allows
efficient explanation of multitask models. Our extensive experiments show that
TAGE significantly improves explanation efficiency by using the same model to
explain predictions for multiple downstream tasks, while achieving explanation
quality as good as or better than current state-of-the-art GNN explanation
approaches.
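The key idea, explaining at the embedding level so that a single explainer serves any downstream head, can be illustrated with a toy sketch. Everything below (the one-layer mean-aggregation "encoder", the leave-one-node-out scoring) is an invented stand-in for illustration, not TAGE's actual architecture: node importance is measured by how much removing a node shifts the graph embedding, with no reference to any task.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "GNN encoder": one round of mean aggregation over neighbors,
# then a fixed linear map (a stand-in for a pretrained embedding model).
W = rng.normal(size=(4, 8))

def embed(adj, feats):
    # Mean-aggregate neighbor features, combine with self features, pool.
    agg = (adj @ feats) / np.maximum(adj.sum(1, keepdims=True), 1)
    return np.tanh((feats + agg) @ W).mean(axis=0)  # graph-level embedding

def explain_embedding(adj, feats):
    """Task-agnostic node importance: embedding shift when a node is removed.

    No downstream head is involved, so the same scores can explain the
    prediction of any classifier built on top of the embedding.
    """
    base = embed(adj, feats)
    scores = []
    for v in range(feats.shape[0]):
        keep = np.ones(feats.shape[0], dtype=bool)
        keep[v] = False
        pert = embed(adj[np.ix_(keep, keep)], feats[keep])
        scores.append(np.linalg.norm(base - pert))
    return np.array(scores)

# A small 4-node graph with random features.
adj = np.array([[0, 1, 1, 0],
                [1, 0, 0, 0],
                [1, 0, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = rng.normal(size=(4, 4))
scores = explain_embedding(adj, feats)
```

Because `scores` depends only on the encoder, two hypothetical downstream heads (say, toxicity and solubility classifiers) could reuse the same explanation, which is the efficiency gain the abstract describes.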
Related papers
- Towards Few-shot Self-explaining Graph Neural Networks
We propose MSE-GNN, a novel framework that generates explanations to support predictions in few-shot settings.
MSE-GNN adopts a two-stage self-explaining structure, consisting of an explainer and a predictor.
We show that MSE-GNN can achieve superior performance on prediction tasks while generating high-quality explanations.
arXiv Detail & Related papers (2024-08-14T07:31:11Z)
- SES: Bridging the Gap Between Explainability and Prediction of Graph Neural Networks
We propose a self-explained and self-supervised graph neural network (SES) to bridge the gap between explainability and prediction.
SES comprises two processes: explainable training and enhanced predictive learning.
arXiv Detail & Related papers (2024-07-16T03:46:57Z)
- How Graph Neural Networks Learn: Lessons from Training Dynamics
We study the training dynamics in function space of graph neural networks (GNNs).
We find that the gradient descent optimization of GNNs implicitly leverages the graph structure to update the learned function.
This finding offers new interpretable insights into when and why the learned GNN functions generalize.
arXiv Detail & Related papers (2023-10-08T10:19:56Z)
- DEGREE: Decomposition Based Explanation For Graph Neural Networks
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
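The decomposition idea can be seen in miniature with a purely linear toy model (this sketch is an illustration of decomposition-based attribution in general, not DEGREE's actual algorithm): when aggregation is linear, the prediction splits exactly into per-neighbor contributions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy linear message passing: prediction = w . mean(neighbor features).
# Because the aggregation is linear, the prediction decomposes exactly
# into one additive contribution per neighbor -- the core idea behind
# decomposition-based explanation, shown here for a single target node
# and one propagation step.
w = rng.normal(size=3)
neighbors = rng.normal(size=(4, 3))       # features of 4 neighbor nodes

prediction = w @ neighbors.mean(axis=0)
contributions = neighbors @ w / len(neighbors)  # one score per neighbor

# The contributions sum back to the prediction exactly.
assert np.isclose(contributions.sum(), prediction)
```

Real GNNs interleave nonlinearities with aggregation, so an exact decomposition requires tracking how each component's share propagates through those nonlinear steps, which is the technical contribution the abstract describes.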
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Toward Multiple Specialty Learners for Explaining GNNs via Online Knowledge Distillation
Graph Neural Networks (GNNs) have become increasingly ubiquitous in numerous applications and systems, necessitating explanations of their predictions.
We propose a novel GNN explanation framework named SCALE, which is general and fast for explaining predictions.
In training, a black-box GNN model guides learners based on an online knowledge distillation paradigm.
Specifically, edge masking and random walk with restart procedures are executed to provide structural explanations for graph-level and node-level predictions.
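Random walk with restart is a standard graph-proximity primitive; a minimal, generic sketch (not SCALE's actual implementation) shows how its stationary visit probabilities can serve as structural importance scores around a target node.

```python
import numpy as np

def rwr_scores(adj, seed, restart=0.15, iters=100):
    """Random walk with restart from `seed` via power iteration.

    The stationary visit probabilities concentrate on nodes structurally
    close to the seed, so they can rank which parts of the graph matter
    for a node-level prediction.
    """
    deg = np.maximum(adj.sum(axis=1, keepdims=True), 1)
    P = adj / deg                        # row-stochastic transition matrix
    e = np.zeros(adj.shape[0])
    e[seed] = 1.0                        # restart distribution
    p = e.copy()
    for _ in range(iters):
        p = (1 - restart) * (P.T @ p) + restart * e
    return p

# A 5-node path-like graph; node 4 is far from the seed node 0.
adj = np.array([[0, 1, 1, 0, 0],
                [1, 0, 1, 0, 0],
                [1, 1, 0, 1, 0],
                [0, 0, 1, 0, 1],
                [0, 0, 0, 1, 0]], dtype=float)
scores = rwr_scores(adj, seed=0)
```

Nodes near the seed receive most of the probability mass, while distant nodes receive little, which is what makes the walk useful as a structural explanation heuristic.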
arXiv Detail & Related papers (2022-10-20T08:44:57Z)
- A Meta-Learning Approach for Training Explainable Graph Neural Networks
We propose a meta-learning framework for improving the level of explainability of a GNN directly at training time.
Our framework jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms.
Our model-agnostic approach can improve the explanations produced for different GNN architectures and use any instance-based explainer to drive this process.
arXiv Detail & Related papers (2021-09-20T11:09:10Z)
- Jointly Attacking Graph Neural Network and its Explanations
Graph Neural Networks (GNNs) have boosted the performance for many graph-related tasks.
Recent studies have shown that GNNs are highly vulnerable to adversarial attacks, where adversaries can mislead the GNNs' prediction by modifying graphs.
We propose a novel attack framework (GEAttack) which can attack both a GNN model and its explanations by simultaneously exploiting their vulnerabilities.
arXiv Detail & Related papers (2021-08-07T07:44:33Z)
- Parameterized Explainer for Graph Neural Network
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.