Limits, approximation and size transferability for GNNs on sparse graphs via graphops
- URL: http://arxiv.org/abs/2306.04495v1
- Date: Wed, 7 Jun 2023 15:04:58 GMT
- Title: Limits, approximation and size transferability for GNNs on sparse graphs via graphops
- Authors: Thien Le and Stefanie Jegelka
- Abstract summary: We take the perspective of limits of operators derived from graphs, such as the aggregation operation that makes up GNNs. Our results hold for dense and sparse graphs and for various notions of graph limits.
- Score: 44.02161831977037
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Can graph neural networks generalize to graphs that are different from the
graphs they were trained on, e.g., in size? In this work, we study this
question from a theoretical perspective. While recent work established such
transferability and approximation results via graph limits, e.g., via graphons,
these only apply non-trivially to dense graphs. To include frequently
encountered sparse graphs such as bounded-degree or power law graphs, we take a
perspective of taking limits of operators derived from graphs, such as the
aggregation operation that makes up GNNs. This leads to the recently introduced
limit notion of graphops (Backhausz and Szegedy, 2022). We demonstrate how the
operator perspective allows us to develop quantitative bounds on the distance
between a finite GNN and its limit on an infinite graph, as well as the
distance between the GNN on graphs of different sizes that share structural
properties, under a regularity assumption verified for various graph sequences.
Our results hold for dense and sparse graphs, and various notions of graph
limits.
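
To make the operator perspective concrete, the following is a minimal, hypothetical sketch (not the paper's construction): it builds the degree-normalized aggregation operator of a bounded-degree graph, applies one message-passing layer, and compares outputs at corresponding nodes of two graph sizes sampled from the same smooth limit signal. The cycle-graph example, the test signal, and all function names are illustrative assumptions.

```python
# Illustrative sketch only: a GNN layer viewed as a nonlinearity applied to a
# graph-derived aggregation operator. The cycle graph and names below are
# hypothetical choices, not the construction used in the paper.
import numpy as np

def aggregation_operator(adj):
    """Degree-normalized neighborhood aggregation D^{-1} A, a linear operator
    on node signals; bounded degree keeps its operator norm bounded."""
    deg = adj.sum(axis=1, keepdims=True)
    return adj / np.maximum(deg, 1)

def gnn_layer(agg, x, weight):
    """One message-passing layer: aggregate neighbor features, mix channels,
    apply a pointwise nonlinearity."""
    return np.tanh(agg @ x @ weight)

def cycle_adjacency(n):
    """Adjacency matrix of the n-cycle, a simple bounded-degree graph sequence
    whose aggregation operators have a natural limit on [0, 1]."""
    adj = np.zeros((n, n))
    idx = np.arange(n)
    adj[idx, (idx + 1) % n] = 1
    adj[idx, (idx - 1) % n] = 1
    return adj

rng = np.random.default_rng(0)
weight = rng.standard_normal((1, 1))

# Sample the same smooth limit signal f(t) = sin(2*pi*t) on graphs of two sizes
# and compare the layer outputs at corresponding nodes: the discrepancy shrinks
# as the graphs grow, mimicking (not proving) a size-transferability bound.
outputs = {}
for n in (100, 1000):
    t = np.arange(n) / n
    x = np.sin(2 * np.pi * t)[:, None]      # node signal sampled from the limit
    agg = aggregation_operator(cycle_adjacency(n))
    outputs[n] = gnn_layer(agg, x, weight)

# Compare on the 100 node positions shared by both discretizations.
coarse = outputs[100]
fine = outputs[1000][::10]
print("max output discrepancy across sizes:", np.abs(coarse - fine).max())
```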
Related papers
- Zero-One Laws of Graph Neural Networks [7.783835522945603]
Graph neural networks (GNNs) are the de facto standard deep learning architectures for machine learning on graphs.
We offer a novel theoretical perspective on the representation and extrapolation capacity of GNNs.
We show that when we draw graphs of increasing size from the Erdős–Rényi model, the probability that such graphs are mapped to a particular output tends to either zero or to one.
arXiv Detail & Related papers (2023-01-30T17:02:23Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - A Topological characterisation of Weisfeiler-Leman equivalence classes [0.0]
Graph Neural Networks (GNNs) are learning models aimed at processing graphs and signals on graphs.
In this article, we rely on the theory of covering spaces to fully characterize the classes of graphs that GNNs cannot distinguish.
We show that the number of indistinguishable graphs in our dataset grows super-exponentially with the number of nodes.
arXiv Detail & Related papers (2022-06-23T17:28:55Z) - The Exact Class of Graph Functions Generated by Graph Neural Networks [43.25172578943894]
Given a graph function, does there exist a Graph Neural Network (GNN) whose output is identical to the graph function?
In this paper, we fully answer this question and characterize the class of graph problems that can be represented by GNNs.
We show that this condition can be efficiently verified by checking quadratically many constraints.
arXiv Detail & Related papers (2022-02-17T18:54:27Z) - Transferability Properties of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) that graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity.
arXiv Detail & Related papers (2021-12-09T00:08:09Z) - Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon (a minimal sampling sketch follows this list).
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z) - Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low pass characteristics in balanced graph cut, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z) - Graphon Neural Networks and the Transferability of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) rely on graph convolutions to extract local features from network data.
We introduce graphon NNs as limit objects of GNNs and prove a bound on the difference between the output of a GNN and its limit graphon-NN.
This result establishes a tradeoff between discriminability and transferability of GNNs.
arXiv Detail & Related papers (2020-06-05T16:41:08Z)
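
As referenced in the graphon-based entries above, here is a minimal, hypothetical sketch of Bernoulli-sampling graphs of increasing size from a graphon; the particular graphon `W` and the helper names are illustrative assumptions, not taken from any of the cited papers.

```python
# Sketch under stated assumptions: sample graphs from a graphon
# W: [0,1]^2 -> [0,1] by drawing latent node positions and Bernoulli edges,
# then grow the graph size across iterations as in a growing-graph schedule.
import numpy as np

def W(x, y):
    """An example graphon: connection probability decays with |x - y|."""
    return np.exp(-3.0 * np.abs(x - y))

def sample_graph_from_graphon(n, rng):
    """Draw n latent positions uniformly on [0, 1] and include each edge
    {i, j} independently with probability W(x_i, x_j)."""
    x = rng.uniform(size=n)
    probs = W(x[:, None], x[None, :])
    upper = rng.uniform(size=(n, n)) < probs
    adj = np.triu(upper, k=1)                 # keep i < j, no self-loops
    return (adj | adj.T).astype(float)        # symmetrize

rng = np.random.default_rng(1)
# Growing-graph schedule in miniature: successively larger graphs from the
# same graphon, each of which could be used for a round of GNN training.
for n in (50, 100, 200):
    adj = sample_graph_from_graphon(n, rng)
    print(n, "nodes, average degree", adj.sum() / n)
```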