Increase and Conquer: Training Graph Neural Networks on Growing Graphs
- URL: http://arxiv.org/abs/2106.03693v1
- Date: Mon, 7 Jun 2021 15:05:59 GMT
- Title: Increase and Conquer: Training Graph Neural Networks on Growing Graphs
- Authors: Juan Cervino, Luana Ruiz and Alejandro Ribeiro
- Abstract summary: We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
- Score: 116.03137405192356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) use graph convolutions to exploit network
invariances and learn meaningful features from network data. However, on
large-scale graphs, convolutions incur a high computational cost, leading to
scalability limitations. Leveraging the graphon -- the limit object of a graph
-- in this paper we consider the problem of learning a graphon neural network
(WNN) -- the limit object of a GNN -- by training GNNs on graphs
Bernoulli-sampled from the graphon. Under smoothness conditions, we show that: (i) the
expected distance between the learning steps on the GNN and on the WNN
decreases asymptotically with the size of the graph, and (ii) when training on
a sequence of growing graphs, gradient descent follows the learning direction
of the WNN. Inspired by these results, we propose a novel algorithm to learn
GNNs on large-scale graphs that, starting from a moderate number of nodes,
successively increases the size of the graph during training. This algorithm is
benchmarked on both a recommendation system and a decentralized control problem
where it is shown to retain performance comparable to its large-scale
counterpart at a reduced computational cost.
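To make the abstract's procedure concrete, below is a minimal sketch of the growing-graph idea: sample a graph from a graphon by Bernoulli sampling, take gradient steps on that graph, and periodically increase the number of nodes. The specific graphon, feature/label model, single-coefficient graph filter, and node-count schedule are illustrative assumptions, not the authors' exact setup.

```python
# Sketch (assumed setup): train a one-layer graph filter on graphs of growing
# size sampled from a fixed graphon, as in the "increase and conquer" idea.
import torch

def sample_graph(n, graphon):
    """Sample an n-node graph from a graphon via Bernoulli edge sampling."""
    u = torch.rand(n)                                 # latent node positions in [0, 1]
    probs = graphon(u.unsqueeze(1), u.unsqueeze(0))   # edge probabilities W(u_i, u_j)
    A = torch.bernoulli(torch.triu(probs, diagonal=1))
    A = A + A.t()                                     # symmetric adjacency, no self-loops
    return u, A / n                                   # normalize so the operator stays bounded

# Assumed smooth graphon; any symmetric W: [0,1]^2 -> [0,1] would do here.
graphon = lambda x, y: 0.8 * torch.exp(-2.0 * (x - y) ** 2)

theta = torch.nn.Parameter(torch.randn(1))            # single graph-filter coefficient
opt = torch.optim.SGD([theta], lr=0.1)

n = 50                                                 # start from a moderate number of nodes
for step in range(200):
    u, A = sample_graph(n, graphon)
    x = torch.sin(torch.pi * u)                        # node features (assumed)
    target = torch.cos(torch.pi * u)                   # regression targets (assumed)
    y = torch.relu(theta * (A @ x))                    # one-hop graph convolution + nonlinearity
    loss = torch.mean((y - target) ** 2)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 49:
        n *= 2                                         # successively grow the graph during training
```

The intuition from the paper's results is that gradient steps on the sampled graphs track the learning direction of the underlying WNN, so early training can be done cheaply on small graphs before moving to larger ones.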
Related papers
- Can Graph Reordering Speed Up Graph Neural Network Training? An Experimental Study [13.354505458409957]
Graph neural networks (GNNs) are capable of learning on graph-structured data.
The sparsity of graphs results in suboptimal memory access patterns and longer training time.
We show that graph reordering is effective in reducing training time for CPU- and GPU-based training.
arXiv Detail & Related papers (2024-09-17T12:28:02Z) - Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity [30.2972965458946]
Graph Neural Networks (GNNs) are widely applied to graph learning problems such as node classification.
When scaling up the underlying graphs of GNNs to a larger size, we are forced to either train on the complete graph or keep the full graph adjacency and node embeddings in memory.
This paper proposes a sketch-based algorithm whose training time and memory grow sublinearly with respect to graph size.
arXiv Detail & Related papers (2024-06-21T18:22:11Z) - Learning to Reweight for Graph Neural Network [63.978102332612906]
Graph Neural Networks (GNNs) show promising results for graph tasks.
Existing GNNs' generalization ability will degrade when there exist distribution shifts between testing and training graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
arXiv Detail & Related papers (2023-12-19T12:25:10Z) - Transferability of Graph Neural Networks using Graphon and Sampling Theories [0.0]
Graph neural networks (GNNs) have become powerful tools for processing graph-based information in various domains.
A desirable property of GNNs is transferability, where a network trained on one graph can be applied to a different graph without retraining while retaining its accuracy.
We contribute to the application of graphons to GNNs by presenting an explicit two-layer graphon neural network (WNN) architecture.
arXiv Detail & Related papers (2023-07-25T02:11:41Z) - Training Graph Neural Networks on Growing Stochastic Graphs [114.75710379125412]
Graph Neural Networks (GNNs) rely on graph convolutions to exploit meaningful patterns in networked data.
We propose to learn GNNs on very large graphs by leveraging the limit object of a sequence of growing graphs, the graphon.
arXiv Detail & Related papers (2022-10-27T16:00:45Z) - Learning Graph Structure from Convolutional Mixtures [119.45320143101381]
We propose a graph convolutional relationship between the observed and latent graphs, and formulate the graph learning task as a network inverse (deconvolution) problem.
In lieu of eigendecomposition-based spectral methods, we unroll and truncate proximal gradient iterations to arrive at a parameterized neural network architecture that we call a Graph Deconvolution Network (GDN).
GDNs can learn a distribution of graphs in a supervised fashion, perform link prediction or edge-weight regression tasks by adapting the loss function, and they are inherently inductive.
arXiv Detail & Related papers (2022-05-19T14:08:15Z) - Transferability Properties of Graph Neural Networks [125.71771240180654]
Graph neural networks (GNNs) are provably successful at learning representations from data supported on moderate-scale graphs.
We study the problem of training GNNs on graphs of moderate size and transferring them to large-scale graphs.
Our results show that (i) the transference error decreases with the graph size, and (ii) that graph filters have a transferability-discriminability tradeoff that in GNNs is alleviated by the scattering behavior of the nonlinearity.
arXiv Detail & Related papers (2021-12-09T00:08:09Z) - GraphTheta: A Distributed Graph Neural Network Learning System With Flexible Training Strategy [5.466414428765544]
We present a new distributed graph learning system GraphTheta.
It supports multiple training strategies and enables efficient and scalable learning on big graphs.
This work represents the largest edge-attributed GNN learning task conducted on a billion-scale network in the literature.
arXiv Detail & Related papers (2021-04-21T14:51:33Z) - XGNN: Towards Model-Level Explanations of Graph Neural Networks [113.51160387804484]
Graph neural networks (GNNs) learn node features by aggregating and combining neighbor information.
GNNs are mostly treated as black-boxes and lack human intelligible explanations.
We propose a novel approach, known as XGNN, to interpret GNNs at the model-level.
arXiv Detail & Related papers (2020-06-03T23:52:43Z)