Zero-One Laws of Graph Neural Networks
- URL: http://arxiv.org/abs/2301.13060v5
- Date: Tue, 24 Oct 2023 09:28:08 GMT
- Title: Zero-One Laws of Graph Neural Networks
- Authors: Sam Adam-Day, Theodor Mihai Iliant, İsmail İlkan Ceylan
- Abstract summary: Graph neural networks (GNNs) are the de facto standard deep learning architectures for machine learning on graphs.
We offer a novel theoretical perspective on the representation and extrapolation capacity of GNNs.
We show that when we draw graphs of increasing size from the Erdős-Rényi model, the probability that such graphs are mapped to a particular output tends to either zero or one.
- Score: 7.783835522945603
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) are the de facto standard deep learning
architectures for machine learning on graphs. This has led to a large body of
work analyzing the capabilities and limitations of these models, particularly
pertaining to their representation and extrapolation capacity. We offer a novel
theoretical perspective on the representation and extrapolation capacity of
GNNs, by answering the question: how do GNNs behave as the number of graph
nodes becomes very large? Under mild assumptions, we show that when we draw
graphs of increasing size from the Erdős-Rényi model, the probability
that such graphs are mapped to a particular output by a class of GNN
classifiers tends to either zero or to one. This class includes the popular
graph convolutional network architecture. The result establishes 'zero-one
laws' for these GNNs, and analogously to other convergence laws, entails
theoretical limitations on their capacity. We empirically verify our results,
observing that the theoretical asymptotic limits are evident already on
relatively small graphs.
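As a rough illustration of the claimed convergence (a minimal sketch under assumed settings, not the authors' experimental setup), the snippet below samples Erdős-Rényi graphs of increasing size, pushes them through a fixed random-weight, mean-aggregation GCN-style classifier, and prints how the sigmoid output concentrates; all names (gcn_output, W1, w_out, the choice p=0.1) are illustrative.

```python
# Minimal sketch (not the authors' code): sample Erdos-Renyi graphs of
# increasing size, apply a fixed random-weight GCN-style classifier with
# mean aggregation, and observe how the output concentrates.
import numpy as np

rng = np.random.default_rng(0)

# Fixed random weights: two message-passing layers, feature dimension 8.
d = 8
W1, W2 = rng.normal(size=(d, d)), rng.normal(size=(d, d))
w_out = rng.normal(size=d)

def gcn_output(n, p=0.1):
    """One forward pass of a toy GCN on a G(n, p) graph with random features."""
    A = (rng.random((n, n)) < p).astype(float)
    A = np.triu(A, 1); A = A + A.T                    # symmetric, no self-loops
    deg = A.sum(1, keepdims=True) + 1e-9              # avoid division by zero
    X = rng.normal(size=(n, d))                       # i.i.d. node features
    H = np.tanh((A @ X / deg) @ W1)                   # mean aggregation + nonlinearity
    H = np.tanh((A @ H / deg) @ W2)
    graph_repr = H.mean(0)                            # mean readout over nodes
    return 1.0 / (1.0 + np.exp(-graph_repr @ w_out))  # sigmoid classification score

for n in [10, 50, 200, 1000]:
    outs = [gcn_output(n) for _ in range(20)]
    print(f"n={n:5d}  mean output={np.mean(outs):.3f}  std={np.std(outs):.3f}")
```

If the zero-one law applies to this toy classifier, the spread of the outputs should shrink as n grows, matching the paper's observation that the asymptotic behaviour is already visible on relatively small graphs.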
Related papers
- Graph neural networks and non-commuting operators [4.912318087940015]
We develop a limit theory of graphon-tuple neural networks and use it to prove a universal transferability theorem.
Our theoretical results extend well-known transferability theorems for GNNs to the case of several simultaneous graphs.
We derive a training procedure that provably enforces the stability of the resulting model.
arXiv Detail & Related papers (2024-11-06T21:17:14Z) - A Manifold Perspective on the Statistical Generalization of Graph Neural Networks [84.01980526069075]
We take a manifold perspective to establish the statistical generalization theory of GNNs on graphs sampled from a manifold in the spectral domain.
We prove that the generalization bounds of GNNs decrease linearly with the logarithm of the graph size, and increase linearly with the spectral continuity constants of the filter functions.
arXiv Detail & Related papers (2024-06-07T19:25:02Z) - Almost Surely Asymptotically Constant Graph Neural Networks [7.339728196535312]
We show that the output converges to a constant function, which upper-bounds what these classifiers can uniformly express.
This strong convergence phenomenon applies to a very wide class of GNNs, including state-of-the-art models.
We empirically validate these findings, observing that the convergence phenomenon appears not only on random graphs but also on some real-world graphs.
arXiv Detail & Related papers (2024-03-06T17:40:26Z) - Limits, approximation and size transferability for GNNs on sparse graphs
via graphops [44.02161831977037]
We take the perspective of taking limits of operators derived from graphs, such as the aggregation operation that makes up GNNs.
Our results hold for dense and sparse graphs, and various notions of graph limits.
arXiv Detail & Related papers (2023-06-07T15:04:58Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - Transferability Properties of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) that graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity.
arXiv Detail & Related papers (2021-12-09T00:08:09Z) - Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon (see the sampling sketch after this list).
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z) - Distance Encoding: Design Provably More Powerful Neural Networks for
Graph Representation Learning [63.97983530843762]
Graph Neural Networks (GNNs) have achieved great success in graph representation learning.
GNNs generate identical representations for graph substructures that may in fact be very different.
More powerful GNNs, proposed recently by mimicking higher-order tests, are inefficient as they cannot exploit the sparsity of the underlying graph structure.
We propose Distance Encoding (DE) as a new class of techniques for graph representation learning.
arXiv Detail & Related papers (2020-08-31T23:15:40Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)
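For the graphon-based entries above (Increase and Conquer and Training Graph Neural Networks on Growing Stochastic Graphs), the following is a minimal, assumed sketch of the Bernoulli sampling step that produces growing graphs from a fixed graphon; the particular graphon and the names graphon and sample_graph are illustrative choices, not taken from those papers.

```python
# Minimal sketch (assumed, not from the cited papers): Bernoulli-sample
# graphs of growing size from a fixed graphon W(u, v), the step that a
# growing-graph training curriculum would repeat with a larger n each stage.
import numpy as np

rng = np.random.default_rng(0)

def graphon(u, v):
    # Illustrative smooth graphon; any symmetric [0,1]^2 -> [0,1] function works.
    return 0.5 * np.exp(-3.0 * np.abs(u[:, None] - v[None, :]))

def sample_graph(n):
    """Sample latent positions, then Bernoulli edges with probability W(u_i, u_j)."""
    u = rng.random(n)
    P = graphon(u, u)
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1); A = A + A.T                    # undirected, no self-loops
    return A

# A growing-graph curriculum would train the same GNN weights on each stage.
for n in [50, 100, 200, 400]:
    A = sample_graph(n)
    print(f"n={n:4d}  edge density={A.sum() / (n * (n - 1)):.3f}")
```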