Ripple Walk Training: A Subgraph-based training framework for Large and Deep Graph Neural Network
- URL: http://arxiv.org/abs/2002.07206v3
- Date: Tue, 4 May 2021 16:22:20 GMT
- Title: Ripple Walk Training: A Subgraph-based training framework for Large and Deep Graph Neural Network
- Authors: Jiyang Bai, Yuxiang Ren, Jiawei Zhang
- Abstract summary: We propose a general subgraph-based training framework, namely Ripple Walk Training (RWT), for deep and large graph neural networks.
RWT samples subgraphs from the full graph to constitute a mini-batch, and the full GNN is updated based on the mini-batch gradient.
Extensive experiments on different sizes of graphs demonstrate the effectiveness and efficiency of RWT in training various GNNs.
- Score: 10.36962234388739
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have achieved outstanding performance in
learning graph-structured data across various tasks. However, many current GNNs
suffer from three common problems when facing large graphs or deeper
architectures: neighbor explosion, node dependence, and oversmoothing. These
problems stem from the structure of the graph data itself or from the design of
multi-layer GNN frameworks, and can lead to low training efficiency and high
space complexity. To deal with these problems, in this paper we propose a
general subgraph-based training framework, namely Ripple Walk Training (RWT),
for deep and large graph neural networks. RWT samples subgraphs from the full
graph to constitute a mini-batch, and the full GNN is updated based on the
mini-batch gradient. We theoretically analyze what makes a subgraph
high-quality for training GNNs. A novel sampling method, the Ripple Walk
Sampler, samples such high-quality subgraphs to constitute the mini-batch,
accounting for both the randomness and the connectivity of graph-structured
data. Extensive experiments on graphs of different sizes demonstrate the
effectiveness and efficiency of RWT in training various GNNs (GCN & GAT).
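Based only on the description above, the following is a minimal sketch of how the Ripple Walk Sampler and the subgraph mini-batch could be implemented; the expansion ratio, target subgraph size, and batch size are illustrative assumptions rather than values taken from the paper.

```python
import random

def ripple_walk_sample(adj, target_size, expansion_ratio=0.5):
    """Grow one connected subgraph (a "ripple") outward from a random seed node.

    adj: dict mapping node -> set of neighbor nodes (undirected graph).
    target_size: stop once the subgraph has at least this many nodes.
    expansion_ratio: fraction of the current frontier absorbed per step
                     (assumed parameter; the paper's exact rule may differ).
    """
    seed = random.choice(list(adj))
    subgraph = {seed}
    while len(subgraph) < target_size:
        # Frontier: neighbors of the current subgraph not yet included.
        frontier = {v for u in subgraph for v in adj[u]} - subgraph
        if not frontier:  # component exhausted before reaching target_size
            break
        k = max(1, int(expansion_ratio * len(frontier)))
        subgraph |= set(random.sample(sorted(frontier), k))
    return subgraph

def sample_minibatch(adj, num_subgraphs=4, target_size=256):
    """A mini-batch is a collection of sampled subgraphs; the full GNN is then
    updated on the gradient averaged over these subgraphs."""
    return [ripple_walk_sample(adj, target_size) for _ in range(num_subgraphs)]
```

Restricting each GCN or GAT forward pass to a subgraph's induced adjacency is what avoids the neighbor-explosion cost of full-graph message passing while the ripple expansion preserves local connectivity.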
Related papers
- Stealing Training Graphs from Graph Neural Networks [54.52392250297907]
Graph Neural Networks (GNNs) have shown promising results in modeling graphs in various tasks.
As neural networks can memorize the training samples, the model parameters of GNNs have a high risk of leaking private training data.
We investigate a novel problem of stealing graphs from trained GNNs.
arXiv Detail & Related papers (2024-11-17T23:15:36Z)
- Sketch-GNN: Scalable Graph Neural Networks with Sublinear Training Complexity [30.2972965458946]
Graph Neural Networks (GNNs) are widely applied to graph learning problems such as node classification.
When scaling up the underlying graphs of GNNs to a larger size, we are forced to either train on the complete graph or keep the full graph adjacency and node embeddings in memory.
This paper proposes a sketch-based algorithm whose training time and memory grow sublinearly with respect to graph size.
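The summary does not say which sketching primitive is used; purely as an illustration of how sketching can make per-node state sublinear, the snippet below count-sketches an n x d node-feature matrix down to c x d (the bucket count c and the choice to sketch the feature matrix are our assumptions, not Sketch-GNN's actual construction).

```python
import numpy as np

def count_sketch_rows(X, c, seed=0):
    """Compress an (n x d) node-feature matrix to (c x d) with a count sketch.

    Each of the n rows is hashed to one of c buckets with a random sign, so
    memory is O(c * d) instead of O(n * d). Illustrative only; Sketch-GNN's
    actual sketches of the graph and activations are more involved.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    buckets = rng.integers(0, c, size=n)      # hash h: [n] -> [c]
    signs = rng.choice([-1.0, 1.0], size=n)   # sign s: [n] -> {-1, +1}
    sketch = np.zeros((c, d))
    np.add.at(sketch, buckets, signs[:, None] * X)
    return sketch
```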
arXiv Detail & Related papers (2024-06-21T18:22:11Z)
- A Local Graph Limits Perspective on Sampling-Based GNNs [7.601210044390688]
We propose a theoretical framework for training Graph Neural Networks (GNNs) on large input graphs via training on small, fixed-size sampled subgraphs.
We prove that parameters learned from training sampling-based GNNs on small samples of a large input graph are within an $\epsilon$-neighborhood of the outcome of training the same architecture on the whole graph.
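In symbols (our paraphrase, not the paper's notation), with $\theta_{\text{sample}}$ the parameters learned on small sampled subgraphs and $\theta_{\text{full}}$ those learned on the whole graph, the guarantee is roughly

$$\| \theta_{\text{sample}} - \theta_{\text{full}} \| \le \epsilon.$$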
arXiv Detail & Related papers (2023-10-17T02:58:49Z)
- Label Deconvolution for Node Representation Learning on Large-scale Attributed Graphs against Learning Bias [75.44877675117749]
We propose an efficient label regularization technique, namely Label Deconvolution (LD), to alleviate the learning bias by a novel and highly scalable approximation to the inverse mapping of GNNs.
Experiments demonstrate that LD significantly outperforms state-of-the-art methods on the Open Graph Benchmark datasets.
arXiv Detail & Related papers (2023-09-26T13:09:43Z)
- Fast and Effective GNN Training with Linearized Random Spanning Trees [20.73637495151938]
We present a new effective and scalable framework for training GNNs in node classification tasks.
Our approach progressively refines the GNN weights on an extensive sequence of random spanning trees, each linearized into a path graph.
The sparse nature of these path graphs substantially lightens the computational burden of GNN training.
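The summary does not spell out how the spanning trees are drawn or linearized; the sketch below is purely illustrative, assuming an Aldous-Broder random-walk spanning tree and a DFS visit order as the linearization into a path graph (both are our assumptions, not necessarily the paper's procedure).

```python
import random

def random_spanning_tree(adj):
    """Aldous-Broder: random-walk a connected graph until every node is visited;
    the edge that first discovers each node yields a uniform random spanning tree."""
    nodes = list(adj)
    current = random.choice(nodes)
    visited = {current}
    tree = {v: set() for v in nodes}
    while len(visited) < len(nodes):
        nxt = random.choice(list(adj[current]))
        if nxt not in visited:
            visited.add(nxt)
            tree[current].add(nxt)
            tree[nxt].add(current)
        current = nxt
    return tree

def linearize(tree, root):
    """One possible linearization: the DFS visit order, read as a path graph."""
    order, stack, seen = [], [root], {root}
    while stack:
        u = stack.pop()
        order.append(u)
        for v in tree[u]:
            if v not in seen:
                seen.add(v)
                stack.append(v)
    return order
```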
arXiv Detail & Related papers (2023-06-07T23:12:42Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
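No pruning schedule is given in the summary; as a generic illustration of gradual pruning, the snippet below uses the common cubic sparsity ramp and a magnitude criterion (standard defaults from the pruning literature, not necessarily CGP's actual schedule or criteria).

```python
import numpy as np

def gradual_sparsity(step, total_steps, final_sparsity=0.9, initial_sparsity=0.0):
    """Cubic sparsity ramp often used for gradual magnitude pruning.

    Returns the target fraction of weights (or edges) pruned at `step`; CGP
    prunes both graph connections and parameters during training and may use a
    different schedule."""
    t = min(max(step / total_steps, 0.0), 1.0)
    return final_sparsity + (initial_sparsity - final_sparsity) * (1.0 - t) ** 3

def magnitude_mask(weights, sparsity):
    """Boolean mask keeping the largest-magnitude entries of `weights`."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return np.ones(weights.shape, dtype=bool)
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.abs(weights) > threshold
```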
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- Measuring and Sampling: A Metric-guided Subgraph Learning Framework for Graph Neural Network [11.017348743924426]
We propose a Metric-Guided (MeGuide) subgraph learning framework for graph neural networks (GNNs).
MeGuide employs two novel metrics, Feature Smoothness and Connection Failure Distance, to guide subgraph sampling and mini-batch-based training.
We demonstrate the effectiveness and efficiency of MeGuide in training various GNNs on multiple datasets.
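The two metrics are not defined in the summary; as a rough illustration, a feature-smoothness score for a (sub)graph can be taken as the average feature distance across its edges (this formulation is our assumption, not MeGuide's exact definition of Feature Smoothness).

```python
import numpy as np

def feature_smoothness(features, edges):
    """Average L2 distance between endpoint features over the given edges.

    features: (n x d) array of node features; edges: iterable of (u, v) pairs.
    Lower values indicate that connected nodes carry similar features; a sampler
    could prefer subgraphs whose score falls in a desired range."""
    dists = [np.linalg.norm(features[u] - features[v]) for u, v in edges]
    return float(np.mean(dists)) if dists else 0.0
```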
arXiv Detail & Related papers (2021-12-30T11:00:00Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
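As a minimal illustration of Bernoulli sampling from a graphon and of growing the training graph over successive stages (the specific graphon W and the size schedule below are illustrative assumptions):

```python
import numpy as np

def sample_graph_from_graphon(W, n, seed=0):
    """Draw an n-node graph: latent u_i ~ Uniform(0,1), edge (i,j) ~ Bernoulli(W(u_i, u_j))."""
    rng = np.random.default_rng(seed)
    u = rng.uniform(size=n)
    probs = W(u[:, None], u[None, :])
    upper = np.triu(rng.random((n, n)) < probs, k=1)  # keep strict upper triangle
    return upper | upper.T                            # symmetric, no self-loops

# Illustrative graphon and growing-size training schedule.
W = lambda x, y: 0.8 * np.exp(-3.0 * np.abs(x - y))
for n in (64, 128, 256, 512):
    adj = sample_graph_from_graphon(W, n, seed=n)
    # ... train / fine-tune the GNN on this larger sampled graph ...
```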
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information shown and is not responsible for any consequences arising from its use.