Scalable Consistency Training for Graph Neural Networks via
Self-Ensemble Self-Distillation
- URL: http://arxiv.org/abs/2110.06290v1
- Date: Tue, 12 Oct 2021 19:24:42 GMT
- Title: Scalable Consistency Training for Graph Neural Networks via
Self-Ensemble Self-Distillation
- Authors: Cole Hawkins, Vassilis N. Ioannidis, Soji Adeshina, George Karypis
- Abstract summary: We introduce a novel consistency training method to improve the accuracy of graph neural networks (GNNs).
For a target node, we generate different neighborhood expansions and distill the average of the resulting predictions back into the GNN.
Our method approximates the expected prediction over the possible neighborhood samples and in practice requires only a few samples.
- Score: 13.815063206114713
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Consistency training is a popular method to improve deep learning models in
computer vision and natural language processing. Graph neural networks (GNNs)
have achieved remarkable performance in a variety of network science learning
tasks, but to date no work has studied the effect of consistency training on
large-scale graph problems. GNNs scale to large graphs by minibatch training
and subsample node neighbors to deal with high degree nodes. We utilize the
randomness inherent in the subsampling of neighbors and introduce a novel
consistency training method to improve accuracy. For a target node we generate
different neighborhood expansions, and distill the knowledge of the average of
the predictions to the GNN. Our method approximates the expected prediction of
the possible neighborhood samples and practically only requires a few samples.
We demonstrate that our training method outperforms standard GNN training in
several different settings, and yields the largest gains when label rates are
low.
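A minimal sketch of the training objective described above, assuming a toy PyTorch setup. This is illustrative only (not the authors' released code): the GNN here is a two-layer mean-aggregation model, neighbor subsampling is uniform with replacement, and all names (MeanSageLayer, ToyGNN, sample_neighbors, consistency_step) are hypothetical.

```python
import torch
import torch.nn.functional as F

class MeanSageLayer(torch.nn.Module):
    """GraphSAGE-style layer: combine a node's features with the mean of its sampled neighbors."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = torch.nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, sampled_neighbors):
        # sampled_neighbors: LongTensor [num_nodes, fanout] of neighbor indices
        neigh_mean = x[sampled_neighbors].mean(dim=1)  # [num_nodes, in_dim]
        return self.lin(torch.cat([x, neigh_mean], dim=-1))

class ToyGNN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim, num_classes):
        super().__init__()
        self.l1 = MeanSageLayer(in_dim, hid_dim)
        self.l2 = MeanSageLayer(hid_dim, num_classes)

    def forward(self, x, nbrs1, nbrs2):
        h = F.relu(self.l1(x, nbrs1))
        return self.l2(h, nbrs2)

def sample_neighbors(adj_lists, fanout):
    # Uniformly subsample `fanout` neighbors per node (with replacement); assumes
    # every node has at least one neighbor (e.g. a self-loop).
    return torch.stack([nbrs[torch.randint(len(nbrs), (fanout,))] for nbrs in adj_lists])

def consistency_step(model, x, adj_lists, labels, train_mask, k=3, fanout=5, lam=1.0):
    # 1) Run the GNN on k independent neighborhood expansions of the same nodes.
    logits_k = [model(x,
                      sample_neighbors(adj_lists, fanout),
                      sample_neighbors(adj_lists, fanout)) for _ in range(k)]
    # 2) The self-ensemble teacher is the average prediction over the k expansions.
    teacher = torch.stack([F.softmax(l, dim=-1) for l in logits_k]).mean(dim=0).detach()
    # 3) Supervised loss on labeled nodes plus distillation of the teacher into each sample.
    sup = torch.stack([F.cross_entropy(l[train_mask], labels[train_mask]) for l in logits_k]).mean()
    dist = torch.stack([F.kl_div(F.log_softmax(l, dim=-1), teacher, reduction="batchmean")
                        for l in logits_k]).mean()
    return sup + lam * dist
```

Per the abstract, averaging over only a handful of expansions (small k) already approximates the expected prediction over neighborhood samples, so the extra cost over standard minibatch training stays modest.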
Related papers
- Stealing Training Graphs from Graph Neural Networks [54.52392250297907]
Graph Neural Networks (GNNs) have shown promising results in modeling graphs in various tasks.
As neural networks can memorize the training samples, the model parameters of GNNs have a high risk of leaking private training data.
We investigate a novel problem of stealing graphs from trained GNNs.
arXiv Detail & Related papers (2024-11-17T23:15:36Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN)
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- A Local Graph Limits Perspective on Sampling-Based GNNs [7.601210044390688]
We propose a theoretical framework for training Graph Neural Networks (GNNs) on large input graphs via training on small, fixed-size sampled subgraphs.
We prove that parameters learned from training sampling-based GNNs on small samples of a large input graph are within an $\epsilon$-neighborhood of the outcome of training the same architecture on the whole graph.
arXiv Detail & Related papers (2023-10-17T02:58:49Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Distributed Graph Neural Network Training: A Survey [51.77035975191926]
Graph neural networks (GNNs) are a type of deep learning models that are trained on graphs and have been successfully applied in various domains.
Despite the effectiveness of GNNs, it is still challenging for GNNs to efficiently scale to large graphs.
As a remedy, distributed computing has become a promising solution for training large-scale GNNs.
arXiv Detail & Related papers (2022-11-01T01:57:00Z)
- Characterizing and Understanding Distributed GNN Training on GPUs [2.306379679349986]
Graph neural networks (GNNs) have been demonstrated to be powerful models in many domains owing to their effectiveness in learning over graphs.
To scale GNN training for large graphs, a widely adopted approach is distributed training which accelerates training using multiple computing nodes.
arXiv Detail & Related papers (2022-04-18T03:47:28Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Training Graph Neural Networks by Graphon Estimation [2.5997274006052544]
We propose to train a graph neural network via resampling from a graphon estimate obtained from the underlying network data.
We show that our approach is competitive with, and in many cases outperforms, other GNN training methods that reduce over-smoothing.
arXiv Detail & Related papers (2021-09-04T19:21:48Z)
- Very Deep Graph Neural Networks Via Noise Regularisation [57.450532911995516]
Graph Neural Networks (GNNs) perform learned message passing over an input graph.
We train a deep GNN with up to 100 message passing steps and achieve several state-of-the-art results.
arXiv Detail & Related papers (2021-06-15T08:50:10Z)
- Scalable Graph Neural Network Training: The Case for Sampling [4.9201378771958675]
Graph Neural Networks (GNNs) are a new and increasingly popular family of deep neural network architectures to perform learning on graphs.
Training them efficiently is challenging due to the irregular nature of graph data.
Two different approaches have emerged in the literature: whole-graph and sample-based training.
arXiv Detail & Related papers (2021-05-05T20:44:10Z)
- Hop Sampling: A Simple Regularized Graph Learning for Non-Stationary Environments [12.251253742049437]
Graph representation learning is gaining popularity in a wide range of applications, such as social networks analysis.
Applying graph neural networks (GNNs) in a real-world application is still challenging due to non-stationary environments.
We present Hop Sampling, a straightforward regularization method that can effectively prevent GNNs from overfitting.
arXiv Detail & Related papers (2020-06-26T10:22:57Z)