What Do GNNs Actually Learn? Towards Understanding their Representations
- URL: http://arxiv.org/abs/2304.10851v1
- Date: Fri, 21 Apr 2023 09:52:19 GMT
- Title: What Do GNNs Actually Learn? Towards Understanding their Representations
- Authors: Giannis Nikolentzos, Michail Chatzianastasis, Michalis Vazirgiannis
- Abstract summary: We investigate which properties of graphs are captured purely by graph neural networks (GNNs) when no node attributes are available.
We show that two of them embed all nodes into the same feature vector, while the other two models generate representations related to the number of walks over the input graph.
Strikingly, structurally dissimilar nodes can have similar representations at some layer $k>1$, if they have the same number of walks of length $k$.
- Score: 26.77596449192451
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In recent years, graph neural networks (GNNs) have achieved great success in
the field of graph representation learning. Although prior work has shed light
on the expressiveness of these models (i.e., whether they can distinguish pairs
of non-isomorphic graphs), it is still not clear what structural information is
encoded into the node representations that are learned by those models. In this
paper, we investigate which properties of graphs are captured purely by these
models, when no node attributes are available. Specifically, we study four
popular GNN models, and we show that two of them embed all nodes into the same
feature vector, while the other two models generate representations that are
related to the number of walks over the input graph. Strikingly, structurally
dissimilar nodes can have similar representations at some layer $k>1$, if they
have the same number of walks of length $k$. We empirically verify our
theoretical findings on real datasets.
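The walk-count claim above can be illustrated with a small sketch (the example graph and node choices are our own, not the paper's): the number of walks of length $k$ starting at node $i$ equals the $i$-th row sum of $A^k$, so a leaf node and an internal node can share a representation at layer $k$ despite being structurally dissimilar.

```python
import numpy as np

# A path graph 0-1-2-3-4 with a pendant node 5 attached to node 1.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5)]
n = 6
A = np.zeros((n, n), dtype=int)
for u, v in edges:
    A[u, v] = A[v, u] = 1

# Number of walks of length k starting at each node = row sums of A^k.
k = 2
walks_k = np.linalg.matrix_power(A, k).sum(axis=1)
print(walks_k)  # [3 4 5 3 2 3]
```

Here node 0 (a leaf) and node 3 (an internal degree-2 node) both have 3 walks of length 2, so a GNN whose layer-2 representations depend only on walk counts would not separate them, even though no graph automorphism maps one to the other.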
Related papers
- Generalization of Graph Neural Networks is Robust to Model Mismatch [84.01980526069075]
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities.
In this paper, we examine GNNs that operate on geometric graphs generated from manifold models.
Our analysis reveals the robustness of the GNN generalization in the presence of such model mismatch.
arXiv Detail & Related papers (2024-08-25T16:00:44Z)
- Seq-HGNN: Learning Sequential Node Representation on Heterogeneous Graph [57.2953563124339]
We propose a novel heterogeneous graph neural network with sequential node representation, namely Seq-HGNN.
We conduct extensive experiments on four widely used datasets from the Heterogeneous Graph Benchmark (HGB) and the Open Graph Benchmark (OGB).
arXiv Detail & Related papers (2023-05-18T07:27:18Z)
- Probing Graph Representations [77.7361299039905]
We use a probing framework to quantify the amount of meaningful information captured in graph representations.
Our findings on molecular datasets show the potential of probing for understanding the inductive biases of graph-based models.
We advocate for probing as a useful diagnostic tool for evaluating graph-based models.
arXiv Detail & Related papers (2023-03-07T14:58:18Z)
- Weisfeiler and Leman go Hyperbolic: Learning Distance Preserving Node Representations [26.77596449192451]
Graph neural networks (GNNs) have emerged as a promising tool for solving machine learning problems on graphs.
In this paper, we define a distance function between nodes which is based on the hierarchy produced by the Weisfeiler-Leman (WL) algorithm.
We propose a model that learns representations which preserve those distances between nodes.
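A minimal sketch of the hierarchy this distance builds on (the graph and the "colors diverge late means close" reading are our assumptions, not the paper's exact construction): Weisfeiler-Leman color refinement repeatedly relabels each node by its own color plus the multiset of its neighbors' colors.

```python
def wl_colors(adj, rounds):
    """adj: dict node -> list of neighbors. Returns color maps per round."""
    colors = {v: 0 for v in adj}  # uniform start (no node attributes)
    history = [dict(colors)]
    for _ in range(rounds):
        # Signature = own color + sorted multiset of neighbor colors.
        signatures = {
            v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
            for v in adj
        }
        # Relabel each distinct signature with a fresh integer color.
        palette = {}
        for v in sorted(adj):
            sig = signatures[v]
            if sig not in palette:
                palette[sig] = len(palette)
            colors[v] = palette[sig]
        history.append(dict(colors))
    return history

# Path graph 0-1-2-3-4 with pendant node 5 attached to node 1.
adj = {0: [1], 1: [0, 2, 5], 2: [1, 3], 3: [2, 4], 4: [3], 5: [1]}
hist = wl_colors(adj, 2)
```

After one round the colors reflect degrees (leaves 0, 4, 5 share a color); after two rounds only the automorphic leaves 0 and 5 still agree. Nodes whose colors diverge later in this hierarchy can plausibly be treated as closer, which is the flavor of distance the paper preserves.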
arXiv Detail & Related papers (2022-11-04T15:03:41Z)
- SEA: Graph Shell Attention in Graph Neural Networks [8.565134944225491]
A common issue in Graph Neural Networks (GNNs) is known as over-smoothing.
In our work, we relax the GNN architecture by implementing a routing mechanism: nodes' representations are routed to dedicated experts.
We call this procedure Graph Shell Attention (SEA), where experts process different subgraphs in a transformer-motivated fashion.
arXiv Detail & Related papers (2021-10-20T17:32:08Z)
- Graph Neural Networks with Learnable Structural and Positional Representations [83.24058411666483]
A major issue with arbitrary graphs is the absence of canonical positional information of nodes.
We introduce positional encodings (PE) of nodes and inject them into the input layer, as in Transformers.
We observe a performance increase for molecular datasets, from 2.87% up to 64.14% when considering learnable PE for both GNN classes.
arXiv Detail & Related papers (2021-10-15T05:59:15Z)
- Feature Correlation Aggregation: on the Path to Better Graph Neural Networks [37.79964911718766]
Prior to the introduction of Graph Neural Networks (GNNs), modeling and analyzing irregular data, particularly graphs, was thought to be the Achilles' heel of deep learning.
This paper introduces a central node permutation variant function through a frustratingly simple and innocent-looking modification to the core operation of a GNN.
A tangible boost in performance is observed: the model surpasses previous state-of-the-art results by a significant margin while employing fewer parameters.
arXiv Detail & Related papers (2021-09-20T05:04:26Z)
- Towards Self-Explainable Graph Neural Network [24.18369781999988]
Graph Neural Networks (GNNs) generalize deep neural networks to graph-structured data.
GNNs lack explainability, which limits their adoption in scenarios that demand model transparency.
We propose a new framework which can find $K$-nearest labeled nodes for each unlabeled node to give explainable node classification.
arXiv Detail & Related papers (2021-08-26T22:45:11Z)
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations.
We conduct experiments on various datasets, showing that our model can effectively improve the performance of semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of features for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
- GraphLIME: Local Interpretable Model Explanations for Graph Neural Networks [45.824642013383944]
Graph neural networks (GNNs) have been shown to represent graph-structured data effectively.
We propose GraphLIME, a local interpretable model explanation for graphs using the Hilbert-Schmidt Independence Criterion (HSIC) Lasso.
arXiv Detail & Related papers (2020-01-17T09:50:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.