GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks
- URL: http://arxiv.org/abs/2001.06216v2
- Date: Sun, 27 Sep 2020 04:29:35 GMT
- Title: GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks
- Authors: Qiang Huang, Makoto Yamada, Yuan Tian, Dinesh Singh, Dawei Yin, Yi Chang
- Abstract summary: Graph neural networks (GNNs) have been shown to be successful in effectively representing graph-structured data.
We propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso.
- Score: 45.824642013383944
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph-structured data has wide applicability in various domains such as physics, chemistry, biology, computer vision, and social networks, to name a few. Recently, graph neural networks (GNNs) have been shown to be successful in effectively representing graph-structured data because of their good performance and generalization ability. A GNN is a deep-learning-based method that learns a node representation by combining a node's own features with the structural/topological information of the graph. However, as with other deep models, explaining the predictions of GNN models is challenging because of the complex nonlinear transformations applied across layers. In this paper, we propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso, a nonlinear feature selection method. GraphLIME is a generic GNN-model explanation framework that learns a nonlinear interpretable model locally in the subgraph of the node being explained. More specifically, to explain a node, we fit a nonlinear interpretable model on its $N$-hop neighborhood and then select the $K$ most representative features as the explanation of its prediction using HSIC Lasso. Through experiments on two real-world datasets, the explanations produced by GraphLIME are found to be more descriptive and discriminative than those of existing explanation methods.
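The selection step described above is compact enough to sketch. Below is a minimal, illustrative Python version of the HSIC Lasso feature-selection idea for a single node, assuming the neighborhood feature matrix `X_nbr` and the GNN's predicted probabilities `Y_nbr` over the $N$-hop neighborhood have already been gathered; the Gaussian kernel with a median-heuristic bandwidth, the helper names, and the regularization value `rho` are assumptions, not the authors' exact implementation.

```python
# Minimal sketch of HSIC-Lasso-based local explanation (assumed details).
import numpy as np
from sklearn.linear_model import Lasso

def _centered_gram(z, gamma=None):
    """Gaussian Gram matrix of one variable, centered and normalized to
    unit Frobenius norm, as used in HSIC Lasso."""
    z = np.asarray(z, dtype=float).reshape(len(z), -1)
    sq = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=-1)
    if gamma is None:  # median-heuristic bandwidth (assumption)
        pos = sq[sq > 0]
        gamma = 1.0 / np.median(pos) if pos.size else 1.0
    K = np.exp(-gamma * sq)
    H = np.eye(len(z)) - 1.0 / len(z)          # centering matrix
    Kc = H @ K @ H
    norm = np.linalg.norm(Kc, "fro")
    return Kc / norm if norm > 0 else Kc

def explain_node(X_nbr, Y_nbr, K=5, rho=1e-3):
    """X_nbr: (n, d) features of the node's N-hop neighborhood.
    Y_nbr: (n, c) GNN-predicted probabilities for those nodes.
    Returns the K highest-weight feature indices and all HSIC Lasso weights."""
    n, d = X_nbr.shape
    target = _centered_gram(Y_nbr).ravel()     # kernel on the GNN output
    design = np.column_stack(
        [_centered_gram(X_nbr[:, j]).ravel() for j in range(d)])
    # Nonnegative Lasso over vectorized kernels selects a sparse feature set.
    lasso = Lasso(alpha=rho, positive=True, fit_intercept=False)
    beta = lasso.fit(design, target).coef_
    return np.argsort(beta)[::-1][:K], beta
```

The nonnegative L1 penalty drives most kernel weights to zero, so the $K$ features with the largest surviving weights serve as the local explanation.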
Related papers
- Do graph neural network states contain graph properties? [5.222978725954348]
We present a model explainability pipeline for Graph Neural Networks (GNNs) employing diagnostic classifiers.
This pipeline aims to probe and interpret the learned representations in GNNs across various architectures and datasets.
arXiv Detail & Related papers (2024-11-04T15:26:07Z)
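The diagnostic-classifier idea in the entry above reduces to training a simple probe on frozen GNN representations: if a linear model can predict a known graph property from the hidden states, that property is encoded in them. The sketch below is a generic probing setup under that reading, not the paper's actual pipeline; `embeddings` and `property_labels` are hypothetical inputs.

```python
# Generic diagnostic-classifier probe; `embeddings` are frozen GNN node or
# graph representations and `property_labels` encode a known graph property
# (e.g., degree buckets) -- both assumed inputs, not the paper's pipeline.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def probe_accuracy(embeddings, property_labels, seed=0):
    """Fit a linear probe on frozen representations; high held-out accuracy
    suggests the property is linearly recoverable from the states."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        embeddings, property_labels, test_size=0.3, random_state=seed)
    probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    return probe.score(X_te, y_te)
```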
- Structural Explanations for Graph Neural Networks using HSIC [21.929646888419914]
Graph neural networks (GNNs) are a type of neural model that tackles graphical tasks in an end-to-end manner.
The complicated dynamics of GNNs make it difficult to understand which parts of the graph features contribute most strongly to the predictions.
In this study, a flexible model-agnostic explanation method is proposed to detect significant structures in graphs.
arXiv Detail & Related papers (2023-02-04T09:46:47Z)
- Towards Self-Explainable Graph Neural Network [24.18369781999988]
Graph Neural Networks (GNNs) generalize the deep neural networks to graph-structured data.
GNNs lack explainability, which limits their adoption in scenarios that demand model transparency.
We propose a new framework that finds the $K$ nearest labeled nodes for each unlabeled node to provide explainable node classification.
arXiv Detail & Related papers (2021-08-26T22:45:11Z)
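The K-nearest-labeled-nodes idea in the preceding entry can be illustrated in a few lines: classify an unlabeled node by the labels of its nearest labeled neighbors in the learned embedding space, and return those neighbors as the explanation. This is a deliberately simplified sketch; the actual framework learns the similarity measure jointly with the GNN, which is omitted here.

```python
# Simplified explanation-by-nearest-labeled-nodes; plain Euclidean distance
# in embedding space is an assumption made for illustration.
import numpy as np

def explain_by_neighbors(emb, labeled_idx, labels, query_idx, K=3):
    """Return the K nearest labeled nodes (the 'explanation') and the
    majority label among them for one unlabeled query node."""
    dist = np.linalg.norm(emb[labeled_idx] - emb[query_idx], axis=1)
    nearest = np.asarray(labeled_idx)[np.argsort(dist)[:K]]
    prediction = np.bincount(labels[nearest]).argmax()
    return nearest, prediction
```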
- GraphSVX: Shapley Value Explanations for Graph Neural Networks [81.83769974301995]
Graph Neural Networks (GNNs) achieve strong performance on various learning tasks involving geometric data.
In this paper, we propose a unified framework satisfied by most existing GNN explainers.
We introduce GraphSVX, a post hoc local model-agnostic explanation method specifically designed for GNNs.
arXiv Detail & Related papers (2021-04-18T10:40:37Z)
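Shapley-value explanations such as GraphSVX's can be approximated generically by Monte Carlo sampling over feature coalitions. The sketch below estimates per-feature Shapley values for one node by toggling features between the input and a baseline; this is the standard sampling approximation, not GraphSVX's own mask-and-decompose construction, and `predict` is an assumed callable mapping a feature vector to the explained node's class probability.

```python
# Generic Monte Carlo Shapley estimate for one node's feature vector `x`;
# `predict` and `baseline` are assumed, and GraphSVX's actual method differs.
import numpy as np

def shapley_features(predict, x, baseline, n_samples=200, seed=0):
    """Average marginal contribution of each feature over random orderings."""
    rng = np.random.default_rng(seed)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        z = baseline.copy()
        prev = predict(z)
        for j in rng.permutation(d):
            z[j] = x[j]               # add feature j to the coalition
            cur = predict(z)
            phi[j] += cur - prev      # marginal contribution of feature j
            prev = cur
    return phi / n_samples
```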
- Parameterized Explainer for Graph Neural Network [49.79917262156429]
We propose PGExplainer, a parameterized explainer for Graph Neural Networks (GNNs).
Compared to the existing work, PGExplainer has better generalization ability and can be utilized in an inductive setting easily.
Experiments on both synthetic and real-life datasets show highly competitive performance with up to 24.7% relative improvement in AUC on explaining graph classification.
arXiv Detail & Related papers (2020-11-09T17:15:03Z)
- Graph Pooling with Node Proximity for Hierarchical Representation Learning [80.62181998314547]
We propose a novel graph pooling strategy that leverages node proximity to improve the hierarchical representation learning of graph data with their multi-hop topology.
Results show that the proposed graph pooling strategy is able to achieve state-of-the-art performance on a collection of public graph classification benchmark datasets.
arXiv Detail & Related papers (2020-06-19T13:09:44Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black boxes and lack human-intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
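The permutation-equivariance property claimed in the entry above can be verified numerically for a polynomial graph filter $H(S)x = \sum_k h_k S^k x$: relabeling the nodes of the shift operator and the input signal permutes the output in the same way. The following self-contained check uses a random symmetric adjacency as the shift operator (an assumed toy example):

```python
# Check that a polynomial graph filter is permutation equivariant:
# H(P S P^T)(P x) == P H(S) x for any permutation matrix P.
import numpy as np

def graph_filter(S, x, h):
    """Apply H(S) x = sum_k h[k] * S^k x."""
    y, Skx = np.zeros_like(x, dtype=float), x.astype(float)
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx
    return y

rng = np.random.default_rng(0)
n = 6
A = np.triu(rng.integers(0, 2, (n, n)), 1)
S = (A + A.T).astype(float)               # symmetric adjacency as shift operator
x = rng.standard_normal(n)
h = [0.5, 0.3, 0.2]                       # filter taps
P = np.eye(n)[rng.permutation(n)]         # random permutation matrix
lhs = graph_filter(P @ S @ P.T, P @ x, h)
rhs = P @ graph_filter(S, x, h)
assert np.allclose(lhs, rhs)              # equivariance holds
```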
- Explain Graph Neural Networks to Understand Weighted Graph Features in Node Classification [15.41200827860072]
We propose new graph-feature explanation methods to identify the informative components and important node features.
Our results demonstrate that our explanation approach can mimic the data patterns a human interpreter would use for node classification.
arXiv Detail & Related papers (2020-02-02T23:53:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.