FlowX: Towards Explainable Graph Neural Networks via Message Flows
- URL: http://arxiv.org/abs/2206.12987v3
- Date: Fri, 29 Dec 2023 21:28:49 GMT
- Title: FlowX: Towards Explainable Graph Neural Networks via Message Flows
- Authors: Shurui Gui, Hao Yuan, Jie Wang, Qicheng Lao, Kang Li, Shuiwang Ji
- Abstract summary: We investigate the explainability of graph neural networks (GNNs) as a step toward elucidating their working mechanisms.
We propose a novel method here, known as FlowX, to explain GNNs by identifying important message flows.
We then propose an information-controlled learning algorithm to train flow scores toward diverse explanation targets.
- Score: 59.025023020402365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We investigate the explainability of graph neural networks (GNNs) as a step
toward elucidating their working mechanisms. While most current methods focus
on explaining graph nodes, edges, or features, we argue that, as the inherent
functional mechanism of GNNs, message flows are more natural for performing
explainability. To this end, we propose a novel method here, known as FlowX, to
explain GNNs by identifying important message flows. To quantify the importance
of flows, we propose to follow the philosophy of Shapley values from
cooperative game theory. To tackle the complexity of computing all coalitions'
marginal contributions, we propose a flow sampling scheme to compute Shapley
value approximations as initial assessments for further training. We then
propose an information-controlled learning algorithm to train flow scores
toward diverse explanation targets: necessary or sufficient explanations.
Experimental studies on both synthetic and real-world datasets demonstrate that
our proposed FlowX and its variants lead to improved explainability of GNNs.
The code is available at https://github.com/divelab/DIG.
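The abstract's Shapley-based scoring can be illustrated with a generic Monte Carlo permutation sketch. This is not FlowX's actual implementation (see the DIG repository for that); `value_fn` is a hypothetical stand-in for evaluating the GNN with only a chosen subset of message flows active, and all names here are illustrative.

```python
import random

def shapley_flow_scores(flows, value_fn, num_samples=200, seed=0):
    """Monte Carlo approximation of Shapley values over message flows.

    flows:    list of hashable flow identifiers.
    value_fn: maps a frozenset of flows to the model output obtained
              when only those flows carry messages (assumed callable).
    """
    rng = random.Random(seed)
    scores = {f: 0.0 for f in flows}
    for _ in range(num_samples):
        # Sample a random ordering of flows (a random coalition-building path).
        order = flows[:]
        rng.shuffle(order)
        coalition = set()
        prev = value_fn(frozenset(coalition))
        for f in order:
            # Marginal contribution of f given the flows added before it.
            coalition.add(f)
            cur = value_fn(frozenset(coalition))
            scores[f] += cur - prev
            prev = cur
    # Average marginal contributions approximate the Shapley values.
    return {f: s / num_samples for f, s in scores.items()}
```

For an additive value function the sampled marginals are constant, so the estimate is exact; for a real GNN the estimates would serve only as initial scores before further training, as the abstract describes.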
Related papers
- How Graph Neural Networks Learn: Lessons from Training Dynamics [80.41778059014393]
We study the training dynamics in function space of graph neural networks (GNNs).
We find that the gradient descent optimization of GNNs implicitly leverages the graph structure to update the learned function.
This finding offers new interpretable insights into when and why the learned GNN functions generalize.
arXiv Detail & Related papers (2023-10-08T10:19:56Z)
- Towards Robust Fidelity for Evaluating Explainability of Graph Neural Networks [32.345435955298825]
Graph Neural Networks (GNNs) are neural models that leverage the dependency structure in graphical data via message passing among the graph nodes.
A main challenge in studying GNN explainability is to provide fidelity measures that evaluate the performance of these explanation functions.
This paper studies this foundational challenge, spotlighting the inherent limitations of prevailing fidelity metrics.
arXiv Detail & Related papers (2023-10-03T06:25:14Z)
- Invertible Neural Networks for Graph Prediction [22.140275054568985]
In this work, we address conditional generation using deep invertible neural networks.
We adopt an end-to-end training approach since our objective is to address prediction and generation in the forward and backward processes at once.
arXiv Detail & Related papers (2022-06-02T17:28:33Z)
- Deep Architecture Connectivity Matters for Its Convergence: A Fine-Grained Analysis [94.64007376939735]
We theoretically characterize the impact of connectivity patterns on the convergence of deep neural networks (DNNs) under gradient descent training.
We show that by a simple filtration on "unpromising" connectivity patterns, we can trim down the number of models to evaluate.
arXiv Detail & Related papers (2022-05-11T17:43:54Z)
- A Meta-Learning Approach for Training Explainable Graph Neural Networks [10.11960004698409]
We propose a meta-learning framework for improving the level of explainability of a GNN directly at training time.
Our framework jointly trains a model to solve the original task, e.g., node classification, and to provide easily processable outputs for downstream algorithms.
Our model-agnostic approach can improve the explanations produced for different GNN architectures and use any instance-based explainer to drive this process.
arXiv Detail & Related papers (2021-09-20T11:09:10Z)
- Towards Self-Explainable Graph Neural Network [24.18369781999988]
Graph Neural Networks (GNNs) generalize the deep neural networks to graph-structured data.
GNNs lack explainability, which limits their adoption in scenarios that demand the transparency of models.
We propose a new framework which can find $K$-nearest labeled nodes for each unlabeled node to give explainable node classification.
arXiv Detail & Related papers (2021-08-26T22:45:11Z)
- Zorro: Valid, Sparse, and Stable Explanations in Graph Neural Networks [6.004582130591279]
We find previous explanation generation approaches that maximize the mutual information between the label distribution produced by the GNN model and the explanation to be restrictive.
Specifically, existing approaches do not enforce explanations to be predictive, sparse, or robust to input perturbations.
We propose a novel approach Zorro based on the principles from rate-distortion theory that uses a simple procedure to optimize for fidelity.
arXiv Detail & Related papers (2021-05-18T15:53:09Z)
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve significant performance for various learning tasks on geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.