BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks
with Boundary Node Sampling
- URL: http://arxiv.org/abs/2203.10983v1
- Date: Mon, 21 Mar 2022 13:44:37 GMT
- Title: BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks
with Boundary Node Sampling
- Authors: Cheng Wan, Youjie Li, Ang Li, Nam Sung Kim, Yingyan Lin
- Abstract summary: We propose a simple yet effective method dubbed BNS-GCN that adopts random Boundary-Node-Sampling to enable efficient and scalable distributed GCN training.
Experiments and ablation studies consistently validate the effectiveness of BNS-GCN, boosting the throughput by up to 16.2x and reducing the memory usage by up to 58%, while maintaining full-graph accuracy.
- Score: 25.32242812045678
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art
method for graph-based learning tasks. However, training GCNs at scale is still
challenging, hindering both the exploration of more sophisticated GCN
architectures and their applications to real-world large graphs. While it might
be natural to consider graph partition and distributed training for tackling
this challenge, this direction has only scratched the surface in previous works
due to the limitations of existing designs. In this work, we
first analyze why distributed GCN training is ineffective and identify the
underlying cause to be the excessive number of boundary nodes of each
partitioned subgraph, which easily explodes the memory and communication costs
for GCN training. Furthermore, we propose a simple yet effective method dubbed
BNS-GCN that adopts random Boundary-Node-Sampling to enable efficient and
scalable distributed GCN training. Experiments and ablation studies
consistently validate the effectiveness of BNS-GCN, e.g., boosting the
throughput by up to 16.2x and reducing the memory usage by up to 58%, while
maintaining full-graph accuracy. Furthermore, both theoretical and empirical
analyses show that BNS-GCN enjoys better convergence than existing
sampling-based methods. We believe that our BNS-GCN has opened up a new
paradigm for enabling GCN training at scale. The code is available at
https://github.com/RICE-EIC/BNS-GCN.
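To make the sampling idea concrete, below is a minimal illustrative sketch of random boundary-node sampling on a single partition, written in PyTorch; the function and variable names are hypothetical and are not taken from the released code.

import torch

def sample_boundary_nodes(boundary_nodes, keep_prob):
    # boundary_nodes: 1-D tensor of node IDs that are owned by other partitions but
    # are neighbors of local nodes, so their features would otherwise have to be
    # communicated and stored in full every iteration.
    # keep_prob: fraction of boundary nodes to keep in this training iteration.
    mask = torch.rand(boundary_nodes.numel()) < keep_prob
    return boundary_nodes[mask]

boundary = torch.arange(1000)  # hypothetical boundary-node IDs of one partition
kept = sample_boundary_nodes(boundary, keep_prob=0.1)

Drawing a fresh sample every iteration and exchanging only the sampled nodes' features (forward) and feature gradients (backward) is what shrinks both the communication volume and the memory held for remote features, matching the cause analysis in the abstract.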
Related papers
- Fast and Effective GNN Training with Linearized Random Spanning Trees [20.73637495151938]
We present a new effective and scalable framework for training GNNs in node classification tasks.
Our approach progressively refines the GNN weights on an extensive sequence of random spanning trees.
The sparse nature of these path graphs substantially lightens the computational burden of GNN training.
arXiv Detail & Related papers (2023-06-07T23:12:42Z) - A Comprehensive Study on Large-Scale Graph Training: Benchmarking and
Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z) - Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z) - PipeGCN: Efficient Full-Graph Training of Graph Convolutional Networks
with Pipelined Feature Communication [24.05916878277873]
Graph Convolutional Networks (GCNs) are the state-of-the-art method for learning graph-structured data.
However, distributed GCN training incurs prohibitive overhead from communicating node features and feature gradients among partitions.
We propose PipeGCN, a scheme that hides the communication overhead by pipelining inter-partition communication.
arXiv Detail & Related papers (2022-03-20T02:08:03Z) - GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm
and Accelerator Co-Design [27.311994997480745]
Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art graph learning model.
It can be notoriously challenging to run GCN inference over large graph datasets.
This paper proposes a GCN algorithm and accelerator co-design framework dubbed GCoD which can largely alleviate the irregularity of GCN computation.
arXiv Detail & Related papers (2021-12-22T00:30:50Z) - Bi-GCN: Binary Graph Convolutional Network [57.733849700089955]
We propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and the input node features (a generic binarization sketch appears after this list).
Our Bi-GCN can reduce memory consumption by an average of 30x for both the network parameters and input data, and accelerate inference by an average of 47x.
arXiv Detail & Related papers (2020-10-15T07:26:23Z) - Investigating and Mitigating Degree-Related Biases in Graph
Convolutional Networks [62.8504260693664]
Graph Convolutional Networks (GCNs) show promising results for semi-supervised learning tasks on graphs.
In this paper, we analyze GCNs in regard to the node degree distribution.
We develop a novel Self-Supervised Degree-Specific GCN (SL-DSGC) that mitigates the degree biases of GCNs.
arXiv Detail & Related papers (2020-06-28T16:26:47Z) - DeeperGCN: All You Need to Train Deeper GCNs [66.64739331859226]
Graph Convolutional Networks (GCNs) have been drawing significant attention with the power of representation learning on graphs.
Unlike Convolutional Neural Networks (CNNs), which are able to take advantage of stacking very deep layers, GCNs suffer from vanishing gradient, over-smoothing and over-fitting issues when going deeper.
This paper proposes DeeperGCN that is capable of successfully and reliably training very deep GCNs.
arXiv Detail & Related papers (2020-06-13T23:00:22Z) - L$^2$-GCN: Layer-Wise and Learned Efficient Training of Graph
Convolutional Networks [118.37805042816784]
Graph convolution networks (GCN) are increasingly popular in many applications, yet remain notoriously hard to train over large graph datasets.
We propose a novel, efficient layer-wise training framework for GCN (L-GCN) that disentangles feature aggregation and feature transformation during training.
Experiments show that L-GCN is faster than the state of the art by at least an order of magnitude, with consistent memory usage that does not depend on dataset size.
arXiv Detail & Related papers (2020-03-30T16:37:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.