Graph Neural Tangent Kernel: Convergence on Large Graphs
- URL: http://arxiv.org/abs/2301.10808v2
- Date: Wed, 31 May 2023 19:27:16 GMT
- Title: Graph Neural Tangent Kernel: Convergence on Large Graphs
- Authors: Sanjukta Krishnagopal, Luana Ruiz
- Abstract summary: Graph neural networks (GNNs) achieve remarkable performance in graph machine learning tasks.
We investigate the training dynamics of large-graph GNNs using graph neural tangent kernels (GNTKs) and graphons.
- Score: 7.624781434274796
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) achieve remarkable performance in graph machine
learning tasks but can be hard to train on large-graph data, where their
learning dynamics are not well understood. We investigate the training dynamics
of large-graph GNNs using graph neural tangent kernels (GNTKs) and graphons. In
the limit of large width, optimization of an overparametrized NN is equivalent
to kernel regression on the NTK. Here, we investigate how the GNTK evolves as
another independent dimension is varied: the graph size. We use graphons to
define limit objects -- graphon NNs for GNNs, and graphon NTKs for GNTKs --
and prove that, on a sequence of graphs, the GNTKs converge to the graphon NTK.
We further prove that the spectrum of the GNTK, which is related to the
directions of fastest learning and becomes relevant during early stopping,
converges to the spectrum of the graphon NTK. This implies that in the
large-graph limit, the GNTK fitted on a graph of moderate size can be used to
solve the same task on the large graph, and to infer the learning dynamics of
the large-graph GNN. These results are verified empirically on node regression
and classification tasks.
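To make the width-limit correspondence in the abstract concrete, the standard neural tangent kernel statement (given here for reference, not quoted from the paper) is that a sufficiently wide network trained by gradient flow behaves like kernel regression with the kernel of parameter-gradient inner products at initialization (ignoring the network's output at initialization):

```latex
\Theta(x, x') = \big\langle \nabla_\theta f_{\theta_0}(x),\, \nabla_\theta f_{\theta_0}(x') \big\rangle,
\qquad
\hat{f}(x) = \Theta(x, \mathbf{X})\, \Theta(\mathbf{X}, \mathbf{X})^{-1}\, \mathbf{y}.
```

For a GNN the inputs are graph nodes, so the GNTK is an n x n node kernel, and the paper's claim is that these kernels and their spectra converge along a sequence of graphs sampled from a graphon. As a rough empirical illustration of the spectral part of that claim (not the paper's construction or experiments), the sketch below uses a one-layer linear graph filter, for which the GNTK is exactly the Gram matrix of the filtered features, and tracks its leading eigenvalues as graphs of growing size are sampled from a fixed graphon; the particular graphon, node features, and 1/n normalizations are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def graphon(u, v):
    # A simple smooth graphon W(u, v) on [0, 1]^2 (illustrative choice).
    return 0.5 * np.exp(-2.0 * np.abs(u[:, None] - v[None, :]))

def sample_graph(n):
    u = rng.uniform(size=n)                       # latent node positions
    P = graphon(u, u)                             # edge probabilities
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1)
    A = A + A.T                                   # symmetric, no self-loops
    X = np.stack([np.cos(np.pi * u), u], axis=1)  # graphon signal as node features
    return A, X

def gntk_linear(A, X):
    # For the linear filter f(X; w) = (A / n) X w, the NTK is exactly
    # Theta = Phi Phi^T with Phi = (A / n) X (parameter-gradient inner products).
    Phi = (A / A.shape[0]) @ X
    return Phi @ Phi.T

for n in (100, 400, 1600):
    A, X = sample_graph(n)
    Theta = gntk_linear(A, X)
    # Rescale by 1 / n so the leading eigenvalues can stabilize as n grows,
    # mirroring convergence of the GNTK spectrum to a graphon NTK spectrum.
    top = np.sort(np.linalg.eigvalsh(Theta / n))[::-1][:2]
    print(n, np.round(top, 4))
```

If the sketch captures the intended limit, the two printed eigenvalues should change little between n = 400 and n = 1600.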
Related papers
- Scalable Implicit Graphon Learning [25.015678499211404]
We propose SIGL, a scalable method that combines implicit neural representations (INRs) and graph neural networks (GNNs) to estimate a graphon from observed graphs.
We evaluate SIGL on synthetic and real-world graphs, showing that it outperforms existing methods and scales effectively to larger graphs.
arXiv Detail & Related papers (2024-10-22T22:44:24Z)
- Spectral Greedy Coresets for Graph Neural Networks [61.24300262316091]
The ubiquity of large-scale graphs in node-classification tasks hinders the real-world applications of Graph Neural Networks (GNNs).
This paper studies graph coresets for GNNs and avoids the interdependence issue by selecting ego-graphs based on their spectral embeddings.
Our spectral greedy graph coreset (SGGC) scales to graphs with millions of nodes, obviates the need for model pre-training, and applies to low-homophily graphs.
arXiv Detail & Related papers (2024-05-27T17:52:12Z)
- Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z)
- Transferability Properties of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) graph filters have a transferability-discriminability tradeoff that, in GNNs, is alleviated by the scattering behavior of the nonlinearity.
arXiv Detail & Related papers (2021-12-09T00:08:09Z)
- Adaptive Kernel Graph Neural Network [21.863238974404474]
Graph neural networks (GNNs) have demonstrated great success in representation learning for graph-structured data.
In this paper, we propose a novel framework, the Adaptive Kernel Graph Neural Network (AKGNN).
AKGNN makes a first attempt at learning to adapt to the optimal graph kernel in a unified manner.
Experiments conducted on widely acknowledged benchmark datasets yield promising results that demonstrate the outstanding performance of the proposed AKGNN.
arXiv Detail & Related papers (2021-12-08T20:23:58Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- Graph and graphon neural network stability [122.06927400759021]
Graph neural networks (GNNs) are learning architectures that rely on knowledge of the graph structure to generate meaningful representations of network data.
We analyze GNN stability using kernel objects called graphons.
arXiv Detail & Related papers (2020-10-23T16:55:56Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.