On Explainability of Graph Neural Networks via Subgraph Explorations
- URL: http://arxiv.org/abs/2102.05152v1
- Date: Tue, 9 Feb 2021 22:12:26 GMT
- Title: On Explainability of Graph Neural Networks via Subgraph Explorations
- Authors: Hao Yuan, Haiyang Yu, Jie Wang, Kang Li, Shuiwang Ji
- Abstract summary: We propose a novel method, known as SubgraphX, to explain graph neural networks (GNNs) by identifying important subgraphs.
Our work represents the first attempt to explain GNNs by explicitly identifying subgraphs.
- Score: 48.56936527708657
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of explaining the predictions of graph neural
networks (GNNs), which otherwise are considered as black boxes. Existing
methods invariably focus on explaining the importance of graph nodes or edges
but ignore the substructures of graphs, which are more intuitive and
human-intelligible. In this work, we propose a novel method, known as
SubgraphX, to explain GNNs by identifying important subgraphs. Given a trained
GNN model and an input graph, our SubgraphX explains its predictions by
efficiently exploring different subgraphs with Monte Carlo tree search. To make
the tree search more effective, we propose to use Shapley values as a measure
of subgraph importance, which can also capture the interactions among different
subgraphs. To expedite computations, we propose efficient approximation schemes
to compute Shapley values for graph data. Our work represents the first attempt
to explain GNNs via identifying subgraphs explicitly. Experimental results show
that our SubgraphX achieves significantly improved explanations, while keeping
computations at a reasonable level.
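To make the scoring step concrete, here is a minimal sketch of Monte Carlo Shapley estimation for a candidate subgraph. The `predict` callable is a hypothetical wrapper around the trained GNN that returns the target-class probability when only the given nodes are kept (for instance by zeroing out the other nodes' features); the toy `predict` in the demo stands in for a real model. SubgraphX itself restricts the coalition players to nodes near the subgraph and pairs this estimator with Monte Carlo tree search; neither refinement is shown here.

```python
import random

def shapley_mc(predict, all_nodes, subgraph_nodes, num_samples=200):
    """Monte Carlo estimate of a subgraph's Shapley value (sketch).

    The subgraph is treated as a single player and the remaining nodes as
    the other players: each sample draws a random coalition of outside
    nodes and measures the marginal gain in the model's prediction from
    adding the subgraph to that coalition.
    """
    others = [v for v in all_nodes if v not in subgraph_nodes]
    total = 0.0
    for _ in range(num_samples):
        k = random.randint(0, len(others))            # random coalition size
        coalition = set(random.sample(others, k))
        total += predict(coalition | set(subgraph_nodes)) - predict(coalition)
    return total / num_samples

if __name__ == "__main__":
    # Toy stand-in for a trained GNN: the "prediction" is the fraction of
    # a hidden motif {0, 1, 2} that survives among the kept nodes.
    motif = {0, 1, 2}
    predict = lambda kept: len(motif & kept) / len(motif)
    nodes = range(10)
    print(shapley_mc(predict, nodes, {0, 1, 2}))      # 1.0: the whole motif
    print(shapley_mc(predict, nodes, {7, 8, 9}))      # 0.0: no motif overlap
```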
Related papers
- SPGNN: Recognizing Salient Subgraph Patterns via Enhanced Graph Convolution and Pooling [25.555741218526464]
Graph neural networks (GNNs) have revolutionized the field of machine learning on non-Euclidean data such as graphs and networks.
We propose a concatenation-based graph convolution mechanism that injectively updates node representations.
We also design a novel graph pooling module, called WL-SortPool, to learn important subgraph patterns in a deep-learning manner.
arXiv Detail & Related papers (2024-04-21T13:11:59Z)
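The SPGNN entry above rests on a concatenation-based convolution that updates node representations injectively. The numpy sketch below illustrates the general idea rather than SPGNN's exact operator: the node's own state and the neighbor aggregate occupy disjoint coordinates of a concatenated vector, so distinct (self, neighborhood) pairs cannot collapse the way they can under plain summation. All names and shapes are illustrative.

```python
import numpy as np

def concat_conv(H, adj, W, b):
    """One concatenation-based convolution layer (illustrative sketch).

    H: (n, d) node features; adj: dict node -> iterable of neighbors;
    W: (2*d, d_out) weights; b: (d_out,) bias. Self and neighbor
    information are kept separate before the learned transform.
    """
    n, d = H.shape
    out = np.empty((n, W.shape[1]))
    for v in range(n):
        nbrs = list(adj[v])
        agg = H[nbrs].sum(axis=0) if nbrs else np.zeros(d)
        out[v] = np.maximum(np.concatenate([H[v], agg]) @ W + b, 0.0)  # ReLU
    return out
```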
- Generating In-Distribution Proxy Graphs for Explaining Graph Neural Networks [17.71313964436965]
A popular paradigm for the explainability of GNNs is to identify explainable subgraphs by comparing their predicted labels with those of the original graphs.
This task is challenging due to the substantial distributional shift from the original graphs in the training set to the set of explainable subgraphs.
We propose a novel method that generates proxy graphs for explainable subgraphs that are in the distribution of training data.
arXiv Detail & Related papers (2024-02-03T05:19:02Z)
- PipeNet: Question Answering with Semantic Pruning over Knowledge Graphs [56.5262495514563]
We propose a grounding-pruning-reasoning pipeline to prune noisy computation nodes.
We also propose a graph attention network (GAT) based module to reason with the subgraph data.
arXiv Detail & Related papers (2024-01-31T01:37:33Z)
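PipeNet's reasoning module is based on a graph attention network (GAT). The sketch below shows a standard single-head GAT-style update in numpy, with the usual LeakyReLU(a^T [W h_i || W h_j]) attention scores; it illustrates the generic mechanism, not PipeNet's specific module.

```python
import numpy as np

def gat_layer(H, adj, W, a):
    """Single-head GAT-style update (generic sketch, not PipeNet's code).

    H: (n, d) node features; adj: dict node -> list of neighbors (each
    node should include itself for self-attention); W: (d, d_out) shared
    projection; a: (2*d_out,) attention vector. Attention scores are
    softmax-normalized over each node's neighborhood.
    """
    Z = H @ W
    out = np.zeros_like(Z)
    for i, nbrs in adj.items():
        e = np.array([np.concatenate([Z[i], Z[j]]) @ a for j in nbrs])
        e = np.where(e > 0, e, 0.2 * e)               # LeakyReLU
        alpha = np.exp(e - e.max())
        alpha /= alpha.sum()
        out[i] = alpha @ Z[nbrs]                      # weighted neighbor sum
    return out
```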
- Rethinking Explaining Graph Neural Networks via Non-parametric Subgraph Matching [68.35685422301613]
We propose a novel non-parametric subgraph matching framework, dubbed MatchExplainer, to explore explanatory subgraphs.
It couples the target graph with other counterpart instances and identifies the most crucial joint substructure by minimizing a node-correspondence-based distance.
Experiments on synthetic and real-world datasets show the effectiveness of MatchExplainer, which outperforms all state-of-the-art parametric baselines by significant margins.
arXiv Detail & Related papers (2023-01-07T05:14:45Z)
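A toy version of the matching idea in the entry above, assuming node embeddings for both graphs are already available: match each node of the target graph to its nearest counterpart node in embedding space and keep the k best-matched nodes as the shared, presumably decisive, substructure. The actual framework is considerably more careful.

```python
import numpy as np

def match_subgraph(emb_target, emb_counterpart, k):
    """Greedy node matching by embedding distance (illustrative sketch).

    emb_target: (n, d) node embeddings of the graph being explained;
    emb_counterpart: (m, d) embeddings of a counterpart graph that
    receives the same prediction. Returns indices of the k target nodes
    whose nearest counterpart node is closest.
    """
    diff = emb_target[:, None, :] - emb_counterpart[None, :, :]
    nearest = np.linalg.norm(diff, axis=-1).min(axis=1)   # (n,) distances
    return np.sort(np.argsort(nearest)[:k])
```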
- FoSR: First-order spectral rewiring for addressing oversquashing in GNNs [0.0]
Graph neural networks (GNNs) are able to leverage the structure of graph data by passing messages along the edges of the graph.
We propose a computationally efficient algorithm that prevents oversquashing by systematically adding edges to the graph.
We find experimentally that our algorithm outperforms existing graph rewiring methods in several graph classification tasks.
arXiv Detail & Related papers (2022-10-21T07:58:03Z)
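FoSR relieves oversquashing by adding edges chosen with a first-order spectral criterion. The sketch below keeps the outer rewiring loop but substitutes a brute-force stand-in for that criterion: among all non-edges, add the one that most increases the spectral gap of the normalized adjacency. The real algorithm avoids the repeated eigendecompositions via a first-order approximation.

```python
import itertools
import numpy as np

def spectral_gap(A):
    """Gap between the top two eigenvalues of the normalized adjacency."""
    deg = np.maximum(A.sum(axis=1), 1e-12)
    Dinv = np.diag(1.0 / np.sqrt(deg))
    lam = np.sort(np.linalg.eigvalsh(Dinv @ A @ Dinv))
    return lam[-1] - lam[-2]

def rewire(A, num_edges):
    """Greedily add non-edges that most increase the gap (sketch).

    A: symmetric 0/1 adjacency matrix. Each round tries every missing
    edge, measures the resulting gap, and commits the best one.
    """
    A = A.copy()
    n = len(A)
    for _ in range(num_edges):
        best, best_gain = None, -np.inf
        for i, j in itertools.combinations(range(n), 2):
            if A[i, j]:
                continue
            A[i, j] = A[j, i] = 1
            gain = spectral_gap(A)
            if gain > best_gain:
                best, best_gain = (i, j), gain
            A[i, j] = A[j, i] = 0
        if best is None:                  # graph already complete
            break
        i, j = best
        A[i, j] = A[j, i] = 1
    return A
```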
- MotifExplainer: a Motif-based Graph Neural Network Explainer [19.64574177805823]
We propose a novel method to explain Graph Neural Networks (GNNs) by identifying important motifs, recurrent and statistically significant patterns in graphs.
Our proposed motif-based methods can provide better human-understandable explanations than methods based on nodes, edges, and regular subgraphs.
arXiv Detail & Related papers (2022-02-01T16:11:21Z)
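MotifExplainer works at the level of recurring motifs rather than free-form subgraphs. A minimal sketch of that idea under simple assumptions: enumerate one motif family (triangles) and rank occurrences by the drop in a hypothetical `predict(kept_nodes)` score, again a black-box wrapper around the trained GNN, when the motif's nodes are removed.

```python
import itertools

def triangle_motifs(adj):
    """Enumerate triangles as one example of a recurring motif.

    adj: dict node -> set of neighbors (undirected graph).
    """
    return [t for t in itertools.combinations(adj, 3)
            if all(b in adj[a] for a, b in itertools.combinations(t, 2))]

def score_motifs(predict, adj):
    """Rank motifs by the prediction drop caused by removing them."""
    nodes = set(adj)
    base = predict(nodes)
    scores = {m: base - predict(nodes - set(m)) for m in triangle_motifs(adj)}
    return sorted(scores.items(), key=lambda kv: -kv[1])
```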
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve significant performance for various learning tasks on geometric data.
In this paper, we propose a unified framework that is satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
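The pooling entry above hinges on node proximity over multi-hop topology. Below is a generic top-k pooling sketch under an assumed proximity score (walk counts up to a few hops); the paper's actual strategy defines and uses proximity differently.

```python
import numpy as np

def proximity_pool(A, H, ratio=0.5, hops=3):
    """Keep the nodes with the highest multi-hop proximity (sketch).

    A: (n, n) adjacency; H: (n, d) features. Proximity here is the number
    of walks of length <= hops leaving each node, an assumed proxy for
    how central the node is within its multi-hop neighborhood.
    """
    P, Ak = np.zeros_like(A, dtype=float), np.eye(len(A))
    for _ in range(hops):
        Ak = Ak @ A
        P += Ak
    keep = np.argsort(-P.sum(axis=1))[: max(1, int(len(A) * ratio))]
    keep.sort()
    return A[np.ix_(keep, keep)], H[keep]   # pooled adjacency and features
```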
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
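Unlike the input-level explainers above, XGNN explains a class by generating a graph that the model scores highly for that class, training a graph generator with reinforcement learning. The sketch below replaces the learned policy with greedy growth and assumes a hypothetical `class_prob(adj)` wrapper around the trained GNN that accepts an adjacency dict and returns the target class probability.

```python
def generate_explanation(class_prob, max_nodes=6):
    """Greedy model-level explanation (sketch of the XGNN idea).

    Grow a graph one node at a time, attaching each new node to whichever
    existing node most increases class_prob. XGNN itself learns this
    growth policy with reinforcement learning rather than greedy search.
    """
    adj = {0: set()}                                 # single starting node
    for new in range(1, max_nodes):
        def attach(u):
            g = {k: set(vs) for k, vs in adj.items()}
            g[new] = {u}
            g[u].add(new)
            return g
        best = max(adj, key=lambda u: class_prob(attach(u)))
        adj = attach(best)
    return adj

# Example with a toy scorer that prefers graphs containing a strong hub:
# prob = lambda g: max(len(v) for v in g.values()) / max(len(g) - 1, 1)
# print(generate_explanation(prob, max_nodes=5))
```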
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.