Scalable Graph Neural Network Training: The Case for Sampling
- URL: http://arxiv.org/abs/2105.02315v1
- Date: Wed, 5 May 2021 20:44:10 GMT
- Title: Scalable Graph Neural Network Training: The Case for Sampling
- Authors: Marco Serafini, Hui Guan
- Abstract summary: Graph Neural Networks (GNNs) are a new and increasingly popular family of deep neural network architectures for learning on graphs.
Training them efficiently is challenging due to the irregular nature of graph data.
Two different approaches have emerged in the literature: whole-graph and sample-based training.
- Score: 4.9201378771958675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph Neural Networks (GNNs) are a new and increasingly popular family of deep neural network architectures for learning on graphs. Training them efficiently is challenging due to the irregular nature of graph data. The problem becomes even more challenging when scaling to large graphs that exceed the capacity of a single device. Standard approaches to distributed DNN training, such as data and model parallelism, do not directly apply to GNNs. Instead, two different approaches have emerged in the literature: whole-graph and sample-based training.
In this paper, we review and compare the two approaches. Scalability is challenging with both, but we make the case that research should focus on sample-based training, since it is the more promising approach. Finally, we review recent systems supporting sample-based training.
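To make the sample-based approach concrete, below is a minimal, self-contained sketch; this is our illustration, not code from the paper, and the synthetic graph, the single mean-aggregation layer, and the fanout of 5 are all assumptions. The key property is that each mini-batch touches only a fixed number of sampled neighbors per node, so memory stays bounded no matter how large the graph is.

```python
# Toy sketch of sample-based GNN training: NOT code from the paper.
import numpy as np

rng = np.random.default_rng(0)

n, d, fanout = 1000, 16, 5
adj = [rng.choice(n, size=int(rng.integers(1, 20))) for _ in range(n)]  # adjacency lists
X = rng.normal(size=(n, d))                  # node features
y = rng.integers(0, 2, size=n)               # binary node labels
W = rng.normal(scale=0.1, size=(2 * d, 2))   # [self || neighbor-mean] -> 2 classes

def train_step(batch, lr=0.1):
    # Sample-based training: only `fanout` neighbors per batch node are
    # touched, so the memory cost is independent of the full graph size.
    nbr_mean = np.stack(
        [X[rng.choice(adj[v], size=fanout)].mean(axis=0) for v in batch])
    h = np.concatenate([X[batch], nbr_mean], axis=1)
    logits = h @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)                             # softmax
    W[...] -= lr * h.T @ (p - np.eye(2)[y[batch]]) / len(batch)   # SGD step
    return -np.log(p[np.arange(len(batch)), y[batch]]).mean()     # CE loss

for step in range(100):
    loss = train_step(rng.choice(n, size=64, replace=False))
```

A whole-graph trainer would instead evaluate every layer over the entire adjacency structure at once, which is precisely the memory pattern that exceeds single-device capacity on large graphs.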
Related papers
- Stealing Training Graphs from Graph Neural Networks [54.52392250297907] (2024-11-17)
Graph Neural Networks (GNNs) have shown promising results for modeling graphs in various tasks.
Because neural networks can memorize their training samples, the model parameters of GNNs carry a high risk of leaking private training data.
We investigate the novel problem of stealing training graphs from trained GNNs.
- Learning to Reweight for Graph Neural Network [63.978102332612906] (2023-12-19)
Graph Neural Networks (GNNs) show promising results for graph tasks.
The generalization ability of existing GNNs degrades when there are distribution shifts between the training and testing graph data.
We propose a novel nonlinear graph decorrelation method, which can substantially improve the out-of-distribution generalization ability.
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903] (2023-10-23)
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
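The one-time message-passing plus compression recipe can be illustrated as follows; this is our sketch of the general idea, not the RpHGNN pipeline, and the single relation, row normalization, and projected width k = 16 are all assumptions.

```python
# Sketch of pre-computation for heterogeneous graphs: NOT the RpHGNN code.
import numpy as np

rng = np.random.default_rng(0)
n_src, n_dst, d, k = 500, 300, 64, 16        # k: assumed projected width

X_src = rng.normal(size=(n_src, d))          # features of one source node type
A = (rng.random((n_dst, n_src)) < 0.01).astype(float)   # one relation's edges
A /= np.maximum(A.sum(axis=1, keepdims=True), 1.0)      # row-normalize

# One-time message passing: destination nodes average their source neighbors.
H = A @ X_src                                # (n_dst, d), computed once offline

# Random projection (Johnson-Lindenstrauss style): compress d -> k so that
# concatenating results from many relations stays regular-shaped and small.
R = rng.normal(scale=1.0 / np.sqrt(k), size=(d, k))
H_small = H @ R                              # stored tensor fed to the trainer
```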
- Unlearning Graph Classifiers with Limited Data Resources [39.29148804411811] (2022-11-06)
Controlled data removal is becoming an important feature of machine learning models for data-sensitive Web applications.
It is still largely unknown how to perform efficient machine unlearning of graph neural networks (GNNs).
Our main contribution is the first known nonlinear approximate graph unlearning method, based on graph scattering transforms (GSTs).
Our second contribution is a theoretical analysis of the computational complexity of the proposed unlearning mechanism.
Our third contribution is extensive simulation results showing that, compared to completely retraining GNNs after each removal request, the new GST-based approach offers an average 10.38x speed-up.
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749] (2022-03-03)
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
- Distributionally Robust Semi-Supervised Learning Over Graphs [68.29280230284712] (2021-10-20)
Semi-supervised learning (SSL) over graph-structured data emerges in many network science applications.
To efficiently manage learning over graphs, variants of graph neural networks (GNNs) have been developed recently.
Despite their success in practice, most existing methods are unable to handle graphs with uncertain nodal attributes.
Challenges also arise due to distributional uncertainties associated with data acquired by noisy measurements.
A distributionally robust learning framework is developed, where the objective is to train models that exhibit quantifiable robustness against perturbations.
- Scalable Consistency Training for Graph Neural Networks via Self-Ensemble Self-Distillation [13.815063206114713] (2021-10-12)
We introduce a novel consistency training method to improve the accuracy of graph neural networks (GNNs).
For a target node, we generate different neighborhood expansions and distill the average of the resulting predictions back into the GNN.
Our method approximates the expected prediction over the possible neighborhood samples and, in practice, requires only a few samples.
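A toy rendering of this self-ensemble consistency idea follows, in our words rather than the authors' code; the one-layer mean-aggregation model and the KL penalty are assumptions.

```python
# Sketch of consistency training over sampled neighborhoods: NOT the paper's code.
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predict(x, nbr_feats, W):
    # Toy one-layer GNN: mean-aggregate one sampled neighborhood expansion.
    return softmax(np.concatenate([x, nbr_feats.mean(axis=0)]) @ W)

d, classes, n_samples = 8, 3, 4
W = rng.normal(size=(2 * d, classes))
x = rng.normal(size=d)                        # target node's features

# Several sampled neighborhood expansions of the same target node.
expansions = [rng.normal(size=(5, d)) for _ in range(n_samples)]
preds = np.stack([predict(x, nb, W) for nb in expansions])
teacher = preds.mean(axis=0)                  # self-ensemble average prediction

# Distillation: penalize each sample's deviation from the average, e.g. via KL.
consistency_loss = (teacher * (np.log(teacher) - np.log(preds))).sum(-1).mean()
```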
- How Neural Processes Improve Graph Link Prediction [35.652234989200956] (2021-09-30)
We propose a meta-learning approach with graph neural networks for link prediction: Neural Processes for Graph Neural Networks (NPGNN).
NPGNN can perform both transductive and inductive learning tasks and adapt to patterns in a large new graph after training with a small subgraph.
- Training Graph Neural Networks by Graphon Estimation [2.5997274006052544] (2021-09-04)
We propose to train a graph neural network via resampling from a graphon estimate obtained from the underlying network data.
We show that our approach is competitive with, and in many cases outperforms, other GNN training methods that reduce over-smoothing.
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356] (2021-06-07)
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs whose edges are Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
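This entry and the graphon-estimation entry above both hinge on drawing graphs from a graphon. A minimal sketch of that sampling step and of a growing-size training loop follows; this is our illustration, not the papers' code, and the exponential kernel and the size schedule are arbitrary choices.

```python
# Sketch of Bernoulli-sampling graphs from a graphon: NOT the papers' code.
import numpy as np

rng = np.random.default_rng(0)

def graphon(u, v):
    # Example symmetric kernel W(u, v) in (0, 1]; an arbitrary choice here.
    return np.exp(-3.0 * np.abs(u[:, None] - v[None, :]))

def sample_graph(n):
    u = rng.random(n)                           # latent node positions
    P = graphon(u, u)                           # pairwise edge probabilities
    A = (rng.random((n, n)) < P).astype(int)    # independent Bernoulli draws
    A = np.triu(A, 1)                           # drop self-loops / lower half
    return A + A.T                              # symmetric adjacency matrix

# "Increase and conquer": train on a sequence of growing sampled graphs,
# warm-starting each stage from the previous stage's parameters.
for n in [64, 128, 256, 512]:
    A = sample_graph(n)
    # ... run a few GNN training epochs on A here ...
```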
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.