Higher-Order Explanations of Graph Neural Networks via Relevant Walks
- URL: http://arxiv.org/abs/2006.03589v3
- Date: Fri, 27 Nov 2020 04:10:00 GMT
- Title: Higher-Order Explanations of Graph Neural Networks via Relevant Walks
- Authors: Thomas Schnake, Oliver Eberle, Jonas Lederer, Shinichi Nakajima,
Kristof T. Schütt, Klaus-Robert Müller, Grégoire Montavon
- Abstract summary: Graph Neural Networks (GNNs) are a popular approach for predicting graph structured data.
In this paper, we show that GNNs can in fact be naturally explained using higher-order expansions.
We extract practically relevant insights on sentiment analysis of text data, structure-property relationships in quantum chemistry, and image classification.
- Score: 3.1510406584101776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are a popular approach for predicting graph
structured data. As GNNs tightly entangle the input graph into the neural
network structure, common explainable AI approaches are not applicable. To a
large extent, GNNs have remained black-boxes for the user so far. In this
paper, we show that GNNs can in fact be naturally explained using higher-order
expansions, i.e. by identifying groups of edges that jointly contribute to the
prediction. Practically, we find that such explanations can be extracted using
a nested attribution scheme, where existing techniques such as layer-wise
relevance propagation (LRP) can be applied at each step. The output is a
collection of walks into the input graph that are relevant for the prediction.
Our novel explanation method, which we denote by GNN-LRP, is applicable to a
broad range of graph neural networks and lets us extract practically relevant
insights on sentiment analysis of text data, structure-property relationships
in quantum chemistry, and image classification.
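To make the walk-based decomposition described in the abstract concrete, here is a minimal sketch (not the authors' implementation) for a two-layer linear message-passing model, where the prediction decomposes exactly into contributions of walks i -> j -> k; GNN-LRP extends this idea to nonlinear GNNs by applying LRP rules layer by layer. All shapes, weights, and the toy graph below are illustrative assumptions.

```python
import numpy as np

# Sketch: walk-level relevance for a two-layer *linear* message-passing model.
# For this linear case the decomposition over walks is exact; GNN-LRP handles
# nonlinear layers with LRP propagation rules instead.
rng = np.random.default_rng(0)
n, d, h, c = 5, 4, 6, 3                         # nodes, input dim, hidden dim, classes
A = (rng.random((n, n)) < 0.4).astype(float)    # message-passing (adjacency) matrix
X = rng.standard_normal((n, d))                 # node features
W1 = rng.standard_normal((d, h))
W2 = rng.standard_normal((h, c))
target = 1                                      # class to explain

# Forward pass with sum readout over nodes
H1 = A @ X @ W1
H2 = A @ H1 @ W2
y = H2.sum(axis=0)[target]

# Relevance of each walk i -> j -> k for the target class
walk_relevance = {}
for i in range(n):
    for j in range(n):
        for k in range(n):
            r = A[k, j] * A[j, i] * (X[i] @ W1 @ W2)[target]
            if r != 0.0:
                walk_relevance[(i, j, k)] = r

# Conservation: the walk relevances sum exactly to the prediction
assert np.isclose(sum(walk_relevance.values()), y)

top = sorted(walk_relevance.items(), key=lambda kv: -abs(kv[1]))[:5]
print("most relevant walks:", top)
```

The assertion checks the conservation property: the walk relevances sum to the model output for the explained class, which is the linear analogue of the conservation that LRP enforces in the nonlinear case.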
Related papers
- DEGREE: Decomposition Based Explanation For Graph Neural Networks [55.38873296761104]
We propose DEGREE to provide a faithful explanation for GNN predictions.
By decomposing the information generation and aggregation mechanism of GNNs, DEGREE allows tracking the contributions of specific components of the input graph to the final prediction.
We also design a subgraph level interpretation algorithm to reveal complex interactions between graph nodes that are overlooked by previous methods.
arXiv Detail & Related papers (2023-05-22T10:29:52Z)
- Structural Explanations for Graph Neural Networks using HSIC [21.929646888419914]
Graph neural networks (GNNs) are a type of neural model that tackle graphical tasks in an end-to-end manner.
The complicated dynamics of GNNs make it difficult to understand which parts of the graph features contribute more strongly to the predictions.
In this study, a flexible, model-agnostic explanation method is proposed to detect significant structures in graphs.
arXiv Detail & Related papers (2023-02-04T09:46:47Z)
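The HSIC-based entry above scores the dependence between graph structures and model predictions. Below is a generic sketch of the biased empirical HSIC estimator together with a hypothetical use of it to relate edge-perturbation masks to a model's outputs; the paper's actual choice of samples, kernels, and structures may differ.

```python
import numpy as np

def rbf_kernel(Z, sigma=1.0):
    # Pairwise RBF (Gaussian) kernel matrix for the row vectors in Z.
    sq = np.sum(Z**2, axis=1, keepdims=True)
    d2 = sq + sq.T - 2.0 * Z @ Z.T
    return np.exp(-d2 / (2.0 * sigma**2))

def hsic(X, Y, sigma=1.0):
    # Biased empirical HSIC estimator: tr(K H L H) / (n - 1)^2,
    # where H centers the kernel matrices.
    n = X.shape[0]
    K, L = rbf_kernel(X, sigma), rbf_kernel(Y, sigma)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Hypothetical usage: X holds binary masks indicating which edges were kept in
# each perturbed graph, Y holds the model output for each perturbed graph.
rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(50, 10)).astype(float)   # 50 perturbations, 10 edges
Y = X[:, [2]] + 0.1 * rng.standard_normal((50, 1))    # output driven by edge 2
print("HSIC(edge masks, output):", hsic(X, Y))
```

Scoring each edge separately (one HSIC value per mask column) would give a crude structural relevance ranking under these assumptions.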
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Explainability in subgraphs-enhanced Graph Neural Networks [12.526174412246107]
Subgraphs-enhanced Graph Neural Networks (SGNNs) have been introduced to enhance the expressive power of GNNs.
In this work, we adapt PGExplainer, one of the most recent explainers for GNNs, to SGNNs.
We show that our framework is successful in explaining the decision process of an SGNN on graph classification tasks.
arXiv Detail & Related papers (2022-09-16T13:39:10Z)
- Automatic Relation-aware Graph Network Proliferation [182.30735195376792]
We propose Automatic Relation-aware Graph Network Proliferation (ARGNP) for efficiently searching GNNs.
These operations can extract hierarchical node/relational information and provide anisotropic guidance for message passing on a graph.
Experiments on six datasets for four graph learning tasks demonstrate that GNNs produced by our method are superior to the current state-of-the-art hand-crafted and search-based GNNs.
arXiv Detail & Related papers (2022-05-31T10:38:04Z)
- Towards Self-Explainable Graph Neural Network [24.18369781999988]
Graph Neural Networks (GNNs) generalize deep neural networks to graph-structured data.
GNNs lack explainability, which limits their adoption in scenarios that demand the transparency of models.
We propose a new framework which can find $K$-nearest labeled nodes for each unlabeled node to give explainable node classification.
arXiv Detail & Related papers (2021-08-26T22:45:11Z)
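The self-explainable GNN entry above predicts a node's class from its $K$ nearest labeled nodes, which also serve as the explanation. The sketch below illustrates that general idea with fixed random embeddings and cosine similarity; the embeddings, similarity measure, and data here are assumptions, not the paper's framework.

```python
import numpy as np

# Toy sketch: explain a node's predicted class by its K nearest *labeled*
# nodes in an embedding space (a simplified stand-in for the framework above).
rng = np.random.default_rng(0)
n, d, k = 20, 8, 3
emb = rng.standard_normal((n, d))               # node embeddings (e.g., GNN output)
labels = np.full(n, -1)                         # -1 marks unlabeled nodes
labels[:8] = rng.integers(0, 2, size=8)         # first 8 nodes are labeled

labeled = np.where(labels >= 0)[0]
unlabeled = np.where(labels < 0)[0]

def explain_node(u):
    # Cosine similarity between node u and every labeled node.
    sims = emb[labeled] @ emb[u] / (
        np.linalg.norm(emb[labeled], axis=1) * np.linalg.norm(emb[u]) + 1e-12)
    nearest = labeled[np.argsort(-sims)[:k]]    # K most similar labeled nodes
    pred = np.bincount(labels[nearest]).argmax()  # majority vote = prediction
    return pred, nearest                        # nearest nodes act as the explanation

pred, support = explain_node(unlabeled[0])
print(f"node {unlabeled[0]} -> class {pred}, supported by labeled nodes {support}")
```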
- Visualizing Graph Neural Networks with CorGIE: Corresponding a Graph to Its Embedding [16.80197065484465]
We propose an approach for corresponding an input graph to its node embedding (aka latent space).
We develop an interactive multi-view interface called CorGIE to instantiate the abstraction.
We present how to use CorGIE in two usage scenarios, and conduct a case study with two GNN experts.
arXiv Detail & Related papers (2021-06-24T08:59:53Z)
- A Unified View on Graph Neural Networks as Graph Signal Denoising [49.980783124401555]
Graph Neural Networks (GNNs) have risen to prominence in learning representations for graph structured data.
In this work, we establish mathematically that the aggregation processes in a group of representative GNN models can be regarded as solving a graph denoising problem.
We instantiate a novel GNN model, ADA-UGNN, derived from UGNN, to handle graphs with adaptive smoothness across nodes.
arXiv Detail & Related papers (2020-10-05T04:57:18Z)
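The unified-view entry above interprets GNN aggregation as graph signal denoising. A small numerical check of that connection, under the common assumption of a symmetric normalized adjacency and a Laplacian smoothness penalty: one gradient step on the denoising objective reproduces GCN-style neighborhood aggregation. The toy graph and constants are made up.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, c = 6, 4, 1.0
A = (rng.random((n, n)) < 0.5).astype(float)
A = np.triu(A, 1); A = A + A.T                  # symmetric adjacency, no self-loops
deg = np.maximum(A.sum(axis=1), 1.0)
A_norm = A / np.sqrt(np.outer(deg, deg))        # D^{-1/2} A D^{-1/2}
L = np.eye(n) - A_norm                          # normalized Laplacian
X = rng.standard_normal((n, d))                 # noisy node signal

# Denoising objective: J(F) = ||F - X||_F^2 + c * tr(F^T L F).
# One gradient-descent step starting from F = X with step size 1/2:
grad = 2.0 * (X - X) + 2.0 * c * L @ X
F_one_step = X - 0.5 * grad

# For c = 1 this equals aggregation with the normalized adjacency, i.e.
# GCN-style aggregation implicitly performs one step of graph denoising.
aggregated = A_norm @ X
print(np.allclose(F_one_step, aggregated))      # True
```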
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
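The last entry emphasizes that architectures built from graph convolutional filters are permutation equivariant. The following sketch numerically verifies this property for a polynomial graph filter on a random graph; it is an illustration of the stated property, not code from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, K = 7, 3
S = (rng.random((n, n)) < 0.4).astype(float)
S = np.triu(S, 1); S = S + S.T                  # graph shift operator (adjacency)
x = rng.standard_normal(n)                      # graph signal
h = rng.standard_normal(K + 1)                  # filter taps h_0, ..., h_K

def graph_filter(S, x, h):
    # Polynomial graph filter: y = sum_k h_k * S^k x
    y, Skx = np.zeros_like(x), x.copy()
    for hk in h:
        y += hk * Skx
        Skx = S @ Skx
    return y

# Random node relabeling (permutation matrix P)
P = np.eye(n)[rng.permutation(n)]

# Permutation equivariance: filtering the relabeled graph and signal equals
# relabeling the filtered output.
lhs = graph_filter(P @ S @ P.T, P @ x, h)
rhs = P @ graph_filter(S, x, h)
print(np.allclose(lhs, rhs))                    # True
```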
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.