What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding
- URL: http://arxiv.org/abs/2305.14814v1
- Date: Wed, 24 May 2023 07:09:53 GMT
- Title: What functions can Graph Neural Networks compute on random graphs? The role of Positional Encoding
- Authors: Nicolas Keriven (CNRS, IRISA), Samuel Vaiter (CNRS, LJAD)
- Abstract summary: We aim to deepen the theoretical understanding of Graph Neural Networks (GNNs) on large graphs, with a focus on their expressive power.
Recently, several works showed that, on very general random graph models, GNNs converge to certain functions as the number of nodes grows.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We aim to deepen the theoretical understanding of Graph Neural Networks
(GNNs) on large graphs, with a focus on their expressive power. Existing
analyses relate this notion to the graph isomorphism problem, which is mostly
relevant for graphs of small sizes, or study graph classification or
regression tasks, while prediction tasks on nodes are far more relevant on
large graphs. Recently, several works showed that, on very general random
graph models, GNNs converge to certain functions as the number of nodes
grows. In this paper, we provide a more complete and intuitive description of
the function space generated by equivariant GNNs for node-tasks, through
general notions of convergence that encompass several previous examples. We
emphasize the role of input node features, and study the impact of node
Positional Encodings (PEs), a recent line of work that has been shown to yield
state-of-the-art results in practice. Through the study of several examples of
PEs on large random graphs, we extend previously known universality results to
significantly more general models. Our theoretical results hint at some
normalization tricks, which are shown numerically to have a positive impact on
GNN generalization on synthetic and real data. Our proofs contain new
concentration inequalities of independent interest.
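As a concrete illustration of the objects the abstract discusses, here is a minimal sketch of one common positional-encoding choice, Laplacian eigenvector PEs, computed on an Erdős-Rényi random graph, together with a hypothetical rescaling of the kind the "normalization tricks" might refer to. The specific PE, graph model, and scaling are illustrative assumptions, not the paper's construction.

```python
# A minimal sketch (not the paper's construction): Laplacian-eigenvector
# positional encodings on an Erdos-Renyi random graph, with the PE columns
# rescaled so their entries stay O(1) as the graph grows -- one plausible
# reading of the "normalization tricks" the abstract alludes to.
import numpy as np

rng = np.random.default_rng(0)

n, p, k = 200, 0.05, 8              # nodes, edge probability, number of PEs

# Sample a symmetric adjacency matrix of an Erdos-Renyi graph G(n, p).
upper = rng.random((n, n)) < p
A = np.triu(upper, 1)
A = (A | A.T).astype(float)

# Symmetric normalized Laplacian L = I - D^{-1/2} A D^{-1/2}.
d = A.sum(axis=1)
d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(np.maximum(d, 1e-12)), 0.0)
L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]

# Eigenvectors of the k smallest nontrivial eigenvalues serve as PEs.
eigvals, eigvecs = np.linalg.eigh(L)
pe = eigvecs[:, 1 : k + 1]

# Hypothetical normalization: eigh returns unit-norm columns (entries
# ~ 1/sqrt(n)), so rescale by sqrt(n) before appending to node features.
pe = pe * np.sqrt(n)
x = rng.standard_normal((n, 4))     # placeholder input node features
features = np.concatenate([x, pe], axis=1)
print(features.shape)               # (200, 12)
```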
Related papers
- Generalization of Geometric Graph Neural Networks [84.01980526069075]
We study the generalization capabilities of geometric graph neural networks (GNNs).
We prove a bound on the generalization gap between the optimal empirical risk and the optimal statistical risk of this GNN.
The most important observation is that generalization can be achieved by training on a single large graph, rather than being limited by the size of the graph as in previous results.
arXiv Detail & Related papers (2024-09-08T18:55:57Z)
- A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with graph size on a logarithmic scale, and increase linearly with the spectral continuity constants of the filter functions; the risk gap at stake is sketched after this entry.
arXiv Detail & Related papers (2024-06-07T19:25:02Z)
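Both generalization entries above bound a gap between empirical and statistical risk. For orientation only, a standard textbook definition (not the papers' exact statements):

```latex
% Standard definitions, for orientation only (not the papers' exact bounds).
% Empirical risk over a sampled graph with n nodes, and statistical risk:
\[
  \widehat{R}_n(h) = \frac{1}{n} \sum_{i=1}^{n} \ell\big(h(x_i), y_i\big),
  \qquad
  R(h) = \mathbb{E}_{(x,y)}\big[\ell\big(h(x), y\big)\big].
\]
% The generalization gap both papers bound is then
\[
  \mathrm{GA}(n) = \big| \widehat{R}_n(h) - R(h) \big|,
\]
% which the manifold-perspective paper shows decreasing with n (linearly on
% a log scale) and growing with the filters' spectral continuity constants.
```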
- Uplifting the Expressive Power of Graph Neural Networks through Graph Partitioning [3.236774847052122]
We study the expressive power of graph neural networks through the lens of graph partitioning.
We introduce a novel GNN architecture, namely Graph Partitioning Neural Networks (GPNNs).
arXiv Detail & Related papers (2023-12-14T06:08:35Z)
- Discovering the Representation Bottleneck of Graph Neural Networks from Multi-order Interactions [51.597480162777074]
Graph neural networks (GNNs) rely on the message passing paradigm to propagate node features and build interactions.
Recent works point out that different graph learning tasks require different ranges of interactions between nodes.
We study two common graph construction methods in scientific domains, i.e., K-nearest neighbor (KNN) graphs and fully-connected (FC) graphs; a construction sketch follows this entry.
arXiv Detail & Related papers (2022-05-15T11:38:14Z)
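Since the entry contrasts the two constructions, here is a minimal sketch of KNN and FC graph building, assuming the nodes are points in Euclidean space (as in molecular data); names and parameters are illustrative.

```python
# A minimal sketch of the two graph constructions the entry mentions:
# K-nearest-neighbor (KNN) and fully-connected (FC) graphs built from
# point coordinates (e.g., atoms in a molecule). Illustrative only.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.standard_normal((50, 3))          # 50 points in 3-D

# Pairwise Euclidean distances.
diff = pts[:, None, :] - pts[None, :, :]
dist = np.sqrt((diff ** 2).sum(axis=-1))

def knn_graph(dist, k=5):
    """Adjacency of the directed KNN graph: edge i -> j iff j is among
    the k nearest neighbors of i (self excluded)."""
    n = dist.shape[0]
    adj = np.zeros((n, n))
    # argsort row-wise; column 0 is the point itself (distance 0).
    nn = np.argsort(dist, axis=1)[:, 1 : k + 1]
    adj[np.arange(n)[:, None], nn] = 1.0
    return adj

def fc_graph(n):
    """Fully-connected graph: every pair of distinct nodes is linked."""
    return np.ones((n, n)) - np.eye(n)

a_knn = knn_graph(dist, k=5)
a_fc = fc_graph(len(pts))
print(a_knn.sum(axis=1)[:3], a_fc.sum())    # 5 out-edges per node; n(n-1) edges
```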
- Explicit Pairwise Factorized Graph Neural Network for Semi-Supervised Node Classification [59.06717774425588]
We propose the Explicit Pairwise Factorized Graph Neural Network (EPFGNN), which models the whole graph as a partially observed Markov Random Field.
It contains explicit pairwise factors to model output-output relations and uses a GNN backbone to model input-output relations; a toy factor computation follows this entry.
We conduct experiments on various datasets, which shows that our model can effectively improve the performance for semi-supervised node classification on graphs.
arXiv Detail & Related papers (2021-07-27T19:47:53Z)
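To make the pairwise-factor idea concrete, here is a toy sketch in which random logits stand in for the GNN backbone and a symmetric class-compatibility matrix plays the role of the explicit pairwise factors. It illustrates the modeling idea only, not the paper's EPFGNN implementation.

```python
# A toy sketch of the pairwise-factorized idea: unary scores come from a
# GNN backbone (random placeholder here), and an explicit pairwise factor
# scores the labels of the two endpoints of every edge. Illustrative only.
import numpy as np

rng = np.random.default_rng(2)
n, c = 6, 3                                  # nodes, classes
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5)]

unary = rng.standard_normal((n, c))          # stand-in for GNN output logits
pairwise = rng.standard_normal((c, c))       # learned output-output factor
pairwise = (pairwise + pairwise.T) / 2       # symmetric compatibility

def joint_score(labels):
    """Unnormalized log-score of a full labeling under the MRF:
    sum of unary factors plus pairwise factors over edges."""
    s = unary[np.arange(n), labels].sum()
    s += sum(pairwise[labels[i], labels[j]] for i, j in edges)
    return s

labels = unary.argmax(axis=1)                # independent (GNN-only) guess
print(joint_score(labels))
```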
- Distance Encoding: Design Provably More Powerful Neural Networks for Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of graph representation learning methods; a minimal distance-feature sketch follows this entry.
arXiv Detail & Related papers (2020-08-31T23:15:40Z)
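One common flavor of distance encoding uses shortest-path distances to a target node set as extra node features; here is a minimal sketch under that assumption (the paper's DE family is broader).

```python
# A minimal sketch of one distance-encoding flavor: shortest-path (hop)
# distances from each node to a target node set, appended as extra node
# features. The paper studies a broader family; this is illustrative.
from collections import deque
import numpy as np

# Small undirected graph as an adjacency list.
adj = {0: [1], 1: [0, 2], 2: [1, 3, 4], 3: [2], 4: [2, 5], 5: [4]}
n = len(adj)

def bfs_distances(source):
    """Shortest-path (hop) distance from `source` to every node."""
    dist = np.full(n, np.inf)
    dist[source] = 0
    queue = deque([source])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if dist[v] == np.inf:
                dist[v] = dist[u] + 1
                queue.append(v)
    return dist

# Distance encoding w.r.t. a target set, e.g. for link prediction on (0, 5):
targets = (0, 5)
de = np.stack([bfs_distances(t) for t in targets], axis=1)
print(de)    # row i = (dist(i, 0), dist(i, 5)); cap infinities in practice
```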
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration of deeper models to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields; a minimal propagation sketch follows this entry.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
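Here is a minimal sketch of the adaptive multi-hop idea the DAGNN entry describes, with random placeholder gates where a learned scoring layer would go; it is one reading of the idea, not the paper's exact architecture.

```python
# A minimal sketch of adaptive deep aggregation: propagate features over
# k hops with a normalized adjacency, then combine the hop representations
# with per-node gates (random placeholders for a learned scoring layer).
import numpy as np

rng = np.random.default_rng(3)
n, f, k = 8, 4, 5                            # nodes, features, hops

A = rng.random((n, n)) < 0.3
A = ((A | A.T) & ~np.eye(n, dtype=bool)).astype(float)

# Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}.
A_hat = A + np.eye(n)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(d[:, None] * d[None, :])

X = rng.standard_normal((n, f))
hops = [X]
for _ in range(k):
    hops.append(A_norm @ hops[-1])           # 0..k-hop propagated features

# Softmax gates over hop depths, so each node adaptively weights its
# receptive field instead of using a fixed depth.
scores = rng.standard_normal((n, k + 1))
gates = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
H = sum(gates[:, [j]] * hops[j] for j in range(k + 1))
print(H.shape)                               # (8, 4)
```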
- Incomplete Graph Representation and Learning via Partial Graph Neural Networks [7.227805463462352]
In many applications, graphs may arrive in an incomplete form in which the attributes of some nodes are partially unknown or missing.
Existing GNNs are generally designed for complete graphs and cannot handle attribute-incomplete graph data directly.
We develop novel partial-aggregation-based GNNs, named Partial Graph Neural Networks (PaGNNs), for attribute-incomplete graph representation and learning; a masked-aggregation sketch follows this entry.
arXiv Detail & Related papers (2020-03-23T08:29:59Z)
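Here is a minimal sketch of one plausible partial-aggregation rule, mean-pooling over observed neighbors only; the PaGNN paper's actual rules may differ.

```python
# A minimal sketch of partial (masked) neighborhood aggregation: mean-pool
# only over neighbors whose attributes are observed. Illustrative of the
# idea behind attribute-incomplete aggregation, not the paper's exact rule.
import numpy as np

rng = np.random.default_rng(4)
n, f = 6, 3
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)
X = rng.standard_normal((n, f))
observed = np.array([1, 0, 1, 1, 0, 1], dtype=float)   # 0 = attributes missing

# Zero out missing rows, then average each neighborhood over the number of
# *observed* neighbors only (avoiding a bias toward zero from missing rows).
X_obs = X * observed[:, None]
counts = A @ observed                        # observed neighbors per node
agg = (A @ X_obs) / np.maximum(counts, 1.0)[:, None]
print(agg.round(2))
```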
- The Power of Graph Convolutional Networks to Distinguish Random Graph Models: Short Version [27.544219236164764]
Graph convolutional networks (GCNs) are a widely used method for graph representation learning.
We investigate the power of GCNs to distinguish between different random graph models on the basis of the embeddings of their sample graphs; a toy version of this experiment follows this entry.
arXiv Detail & Related papers (2020-02-13T17:58:42Z)
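A toy version of the distinguishing experiment, assuming Erdős-Rényi models and an untrained one-layer GCN with sum aggregation; the paper's setting and architectures are more general.

```python
# A toy distinguishing experiment: sample graphs from two Erdos-Renyi
# models, embed each with the same untrained GCN layer, and check that the
# mean-pooled embeddings separate the models. Illustrative only.
import numpy as np

rng = np.random.default_rng(5)
n, f = 100, 8
W = rng.standard_normal((f, f))              # shared random GCN weights

def gcn_embedding(p):
    """Mean-pooled one-layer GCN embedding of a sampled G(n, p) graph."""
    A = np.triu(rng.random((n, n)) < p, 1)
    A = (A | A.T).astype(float)
    X = np.ones((n, f))                      # featureless nodes
    H = np.tanh((A @ X / n) @ W)             # sum aggregation scaled by n,
                                             # so H depends on node degrees
    return H.mean(axis=0)                    # graph-level embedding

emb_sparse = np.stack([gcn_embedding(0.05) for _ in range(20)])
emb_dense = np.stack([gcn_embedding(0.15) for _ in range(20)])
gap = np.linalg.norm(emb_sparse.mean(0) - emb_dense.mean(0))
print(gap)   # typically well above the within-model spread
```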
This list is automatically generated from the titles and abstracts of the papers on this site.