What Do GNNs Actually Learn? Towards Understanding their Representations
- URL: http://arxiv.org/abs/2304.10851v1
- Date: Fri, 21 Apr 2023 09:52:19 GMT
- Title: What Do GNNs Actually Learn? Towards Understanding their Representations
- Authors: Giannis Nikolentzos, Michail Chatzianastasis, Michalis Vazirgiannis
- Abstract summary: We investigate which properties of graphs are captured purely by graph neural networks (GNNs).
We show that two of them embed all nodes into the same feature vector, while the other two models generate representations related to the number of walks over the input graph.
Strikingly, structurally dissimilar nodes can have similar representations at some layer $k>1$, if they have the same number of walks of length $k$.
- Score: 26.77596449192451
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In recent years, graph neural networks (GNNs) have achieved great success in
the field of graph representation learning. Although prior work has shed light
on the expressiveness of those models (i.e., whether they can distinguish pairs
of non-isomorphic graphs), it is still not clear what structural information is
encoded into the node representations that are learned by those models. In this
paper, we investigate which properties of graphs are captured purely by these
models, when no node attributes are available. Specifically, we study four
popular GNN models, and we show that two of them embed all nodes into the same
feature vector, while the other two models generate representations that are
related to the number of walks over the input graph. Strikingly, structurally
dissimilar nodes can have similar representations at some layer $k>1$, if they
have the same number of walks of length $k$. We empirically verify our
theoretical findings on real datasets.
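The walk-counting claim can be sketched numerically: with no node attributes, a linear sum-aggregation GNN started from all-ones features computes, at layer $k$, the number of walks of length $k$ starting at each node. The graph below is a hypothetical 4-node example for illustration; the paper's actual models include non-linearities and learned weights.

```python
import numpy as np

# Hypothetical 4-node graph: path 0-1-2-3 plus the extra edge 1-3.
A = np.array([
    [0, 1, 0, 0],
    [1, 0, 1, 1],
    [0, 1, 0, 1],
    [0, 1, 1, 0],
], dtype=float)

# Attribute-free, linear sum-aggregation "GNN": start from all-ones
# features and multiply by the adjacency matrix at each layer.
k = 2
h = np.ones(A.shape[0])
for _ in range(k):
    h = A @ h

# Number of walks of length k starting at each node = row sums of A^k.
walks = np.linalg.matrix_power(A, k) @ np.ones(A.shape[0])
assert np.allclose(h, walks)

print(h)  # [3. 5. 5. 5.]
```

Nodes 1 and 2 are structurally dissimilar (degree 3 vs. degree 2), yet both start 5 walks of length 2 and therefore receive identical layer-2 representations, matching the paper's observation.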
Related papers
- Do graph neural network states contain graph properties? [5.222978725954348]
We present a model explainability pipeline for Graph Neural Networks (GNNs) employing diagnostic classifiers.
This pipeline aims to probe and interpret the learned representations in GNNs across various architectures and datasets.
arXiv Detail & Related papers (2024-11-04T15:26:07Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- Probing Graph Representations [77.7361299039905]
We use a probing framework to quantify the amount of meaningful information captured in graph representations.
Our findings on molecular datasets show the potential of probing for understanding the inductive biases of graph-based models.
We advocate for probing as a useful diagnostic tool for evaluating graph-based models.
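As a rough illustration of the probing idea (not the paper's actual pipeline), a linear diagnostic model can be fit on frozen embeddings to check whether a structural property such as node degree is linearly recoverable. Everything below, including the graph and the stand-in embeddings, is a hypothetical sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-node graph and the structural property to probe for.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
deg = A.sum(axis=1)  # node degrees: [1, 3, 2, 2]

# Stand-in for frozen GNN embeddings: one sum-aggregation layer
# applied to random node features.
X = rng.normal(size=(4, 8))
H = A @ X

# Linear probe: least-squares regression of the property on embeddings.
w, *_ = np.linalg.lstsq(H, deg, rcond=None)
pred = H @ w
print(np.round(pred, 2))  # degree is exactly recoverable here
```

A low probing error suggests the property is encoded in the representations; comparing errors across layers or architectures is how such diagnostic classifiers are typically used.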
arXiv Detail & Related papers (2023-03-07T14:58:18Z)
- A Topological characterisation of Weisfeiler-Leman equivalence classes [0.0]
Graph Neural Networks (GNNs) are learning models aimed at processing graphs and signals on graphs.
In this article, we rely on the theory of covering spaces to fully characterize the classes of graphs that GNNs cannot distinguish.
We show that the number of indistinguishable graphs in our dataset grows super-exponentially with the number of nodes.
arXiv Detail & Related papers (2022-06-23T17:28:55Z)
- Graph Neural Networks Designed for Different Graph Types: A Survey [0.0]
Graph Neural Networks (GNNs) address cutting-edge problems based on graph data.
So far, however, no overview has gathered which GNNs can process which types of graphs.
We give a detailed overview of already existing GNNs and categorize them according to their ability to handle different graph types.
arXiv Detail & Related papers (2022-04-06T20:37:42Z)
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
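A common non-learnable baseline for such positional encodings uses Laplacian eigenvectors concatenated to the input node features. The sketch below shows that setup on a hypothetical 4-node graph; the paper's actual contribution is a *learnable* PE, which this simplification omits.

```python
import numpy as np

def laplacian_pe(A, dim):
    """First `dim` non-trivial eigenvectors of the graph Laplacian as PE."""
    deg = A.sum(axis=1)
    L = np.diag(deg) - A                  # unnormalized graph Laplacian
    _, eigvecs = np.linalg.eigh(L)        # eigenvectors, ascending eigenvalues
    return eigvecs[:, 1:dim + 1]          # skip the constant eigenvector

# Hypothetical 4-node graph and placeholder node features.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 1],
              [0, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)
X = np.ones((4, 3))

# Inject the PE at the input layer by concatenation, as in Transformers.
pe = laplacian_pe(A, dim=2)
X_in = np.concatenate([X, pe], axis=1)
print(X_in.shape)  # (4, 5)
```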
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- Towards Self-Explainable Graph Neural Network [24.18369781999988]
Graph Neural Networks (GNNs) generalize deep neural networks to graph-structured data.
GNNs lack explainability, which limits their adoption in scenarios that demand the transparency of models.
We propose a new framework which can find $K$-nearest labeled nodes for each unlabeled node to give explainable node classification.
arXiv Detail & Related papers (2021-08-26T22:45:11Z)
- Is Homophily a Necessity for Graph Neural Networks? [50.959340355849896]
Graph neural networks (GNNs) have shown great prowess in learning representations suitable for numerous graph-based machine learning tasks.
GNNs are widely believed to work well due to the homophily assumption ("like attracts like"), and fail to generalize to heterophilous graphs where dissimilar nodes connect.
Recent works design new architectures to overcome such heterophily-related limitations, citing poor baseline performance and new architecture improvements on a few heterophilous graph benchmark datasets as evidence for this notion.
In our experiments, we empirically find that standard graph convolutional networks (GCNs) can actually achieve better performance than such carefully designed methods on some commonly used heterophilous graphs.
arXiv Detail & Related papers (2021-06-11T02:44:00Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order Weisfeiler-Lehman tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of structural features for graph representation learning.
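The core idea of distance encoding is to annotate each node with its shortest-path distance to a target node set and use those distances as extra structural features. The sketch below computes such distances with a plain BFS on a hypothetical 4-node graph; it is a minimal illustration, not the paper's full feature construction.

```python
from collections import deque

def bfs_distances(adj, sources):
    """Shortest-path distance from every node to the nearest source node."""
    dist = {u: float("inf") for u in adj}
    q = deque()
    for s in sources:
        dist[s] = 0
        q.append(s)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == float("inf"):
                dist[v] = dist[u] + 1
                q.append(v)
    return [dist[u] for u in sorted(adj)]

# Hypothetical graph: path 0-1-2-3 plus the extra edge 1-3.
adj = {0: [1], 1: [0, 2, 3], 2: [1, 3], 3: [1, 2]}

# Distance-encoding feature relative to the target set {0}.
de = bfs_distances(adj, sources=[0])
print(de)  # [0, 1, 2, 2]
```

These per-node distances would then be appended to (or replace) the initial node features before message passing.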
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
- GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks [45.824642013383944]
Graph neural networks (GNNs) have been shown to be successful in effectively representing graph-structured data.
We propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso.
arXiv Detail & Related papers (2020-01-17T09:50:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.