L$^2$-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks
- URL: http://arxiv.org/abs/2003.13606v11
- Date: Sat, 4 Jul 2020 21:55:06 GMT
- Title: L$^2$-GCN: Layer-Wise and Learned Efficient Training of Graph Convolutional Networks
- Authors: Yuning You, Tianlong Chen, Zhangyang Wang, Yang Shen
- Abstract summary: Graph convolution networks (GCN) are increasingly popular in many applications, yet remain notoriously hard to train over large graph datasets.
We propose a novel efficient layer-wise training framework for GCN (L-GCN), that disentangles feature aggregation and feature transformation during training.
Experiments show that L-GCN is faster than the state of the art by at least an order of magnitude, with consistent memory usage independent of dataset size.
- Score: 118.37805042816784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph convolution networks (GCN) are increasingly popular in many
applications, yet remain notoriously hard to train over large graph datasets.
They need to compute node representations recursively from their neighbors.
Current GCN training algorithms suffer from either high computational costs
that grow exponentially with the number of layers, or high memory usage for
loading the entire graph and node embeddings. In this paper, we propose a novel
efficient layer-wise training framework for GCN (L-GCN), that disentangles
feature aggregation and feature transformation during training, hence greatly
reducing time and memory complexities. We present a theoretical analysis of
L-GCN under the graph isomorphism framework, showing that, under mild
conditions, L-GCN leads to GCNs as powerful as those trained by the more
costly conventional algorithm. We further propose L$^2$-GCN, which learns a
controller for each layer that can automatically adjust the training epochs
per layer in L-GCN.
Experiments show that L-GCN is faster than the state of the art by at least
an order of magnitude, with consistent memory usage independent of dataset
size, while maintaining comparable prediction performance. With the learned
controller, L$^2$-GCN can further cut the training time in half. Our code is
available at https://github.com/Shen-Lab/L2-GCN.
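As a rough illustration of the disentangled layer-wise scheme, the sketch
below (an illustrative reconstruction, not the authors' released code; the
auxiliary per-layer classifier, hyperparameters, and all names are
assumptions) precomputes the aggregation step for each layer once and then
trains that layer's transformation in isolation:

```python
import torch
import torch.nn.functional as F

def train_layerwise_gcn(adj_norm, features, labels, train_mask,
                        hidden_dims=(128, 128), epochs_per_layer=200, lr=0.01):
    """adj_norm: normalized (N x N) adjacency; features: (N x F) node matrix."""
    x = features
    layers = []
    num_classes = int(labels.max().item()) + 1
    for dim in hidden_dims:
        # (1) Feature aggregation: one adjacency product per layer,
        #     computed once, outside any gradient computation.
        with torch.no_grad():
            agg = adj_norm @ x
        # (2) Feature transformation: train this layer alone, supervised
        #     through an auxiliary linear classifier on the training nodes.
        layer = torch.nn.Linear(agg.shape[1], dim)
        clf = torch.nn.Linear(dim, num_classes)
        opt = torch.optim.Adam(
            list(layer.parameters()) + list(clf.parameters()), lr=lr)
        for _ in range(epochs_per_layer):
            opt.zero_grad()
            h = F.relu(layer(agg))
            F.cross_entropy(clf(h)[train_mask], labels[train_mask]).backward()
            opt.step()
        # Freeze the trained layer; its output feeds the next layer.
        with torch.no_grad():
            x = F.relu(layer(agg))
        layers.append(layer)
    return layers, x
```

Because only one layer's weights and aggregated features are live at any
time, peak memory scales with the feature matrix rather than with a
multi-layer computation graph, consistent with the constant-memory behavior
claimed above.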
Related papers
- T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages the transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- Cached Operator Reordering: A Unified View for Fast GNN Training [24.917363701638607]
Graph Neural Networks (GNNs) are a powerful tool for handling structured graph data and addressing tasks such as node classification, graph classification, and clustering.
However, the sparse nature of GNN computation poses new challenges for performance optimization compared to traditional deep neural networks.
We address these challenges by providing a unified view of GNN computation, I/O, and memory.
arXiv Detail & Related papers (2023-08-23T12:27:55Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensemble-style training scheme, named EnGCN, to address these issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- BNS-GCN: Efficient Full-Graph Training of Graph Convolutional Networks with Boundary Node Sampling [25.32242812045678]
We propose a simple yet effective method dubbed BNS-GCN that adopts random Boundary-Node-Sampling to enable efficient and scalable distributed GCN training.
Experiments and ablation studies consistently validate the effectiveness of BNS-GCN, boosting throughput by up to 16.2x and reducing memory usage by up to 58% while maintaining full-graph accuracy.
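The summary names the mechanism without details; as a loose sketch under an
assumed data layout (graph partitioning, feature exchange, and adjacency
renormalization omitted; all names hypothetical), the per-round
boundary-node sampling could look like:

```python
import torch

def sample_boundary_nodes(inner_nodes, boundary_nodes, keep_prob):
    """Keep each of this partition's boundary nodes independently with
    probability keep_prob for one training round; edges to dropped
    boundary nodes are ignored until the next round resamples."""
    keep = boundary_nodes[torch.rand(len(boundary_nodes)) < keep_prob]
    # This round's subgraph: all local (inner) nodes plus the surviving
    # boundary nodes replicated from other partitions.
    return torch.cat([inner_nodes, keep])
```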
arXiv Detail & Related papers (2022-03-21T13:44:37Z)
- GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design [27.311994997480745]
Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art graph learning model.
Running GCN inference over large graph datasets can be notoriously challenging.
This paper proposes a GCN algorithm and accelerator Co-Design framework dubbed GCoD which can largely alleviate the aforementioned GCN irregularity.
arXiv Detail & Related papers (2021-12-22T00:30:50Z)
- GCNear: A Hybrid Architecture for Efficient GCN Training with Near-Memory Processing [8.130391367247793]
Graph Convolutional Networks (GCNs) have become state-of-the-art algorithms for analyzing non-Euclidean graph data.
It is challenging to realize efficient GCN training, especially on large graphs.
This paper presents GCNear, a hybrid architecture to tackle these challenges.
arXiv Detail & Related papers (2021-11-01T03:47:07Z)
- Towards Efficient Graph Convolutional Networks for Point Cloud Handling [181.59146413326056]
We aim at improving the computational efficiency of graph convolutional networks (GCNs) for learning on point clouds.
A series of experiments show that optimized networks have reduced computational complexity, decreased memory consumption, and accelerated inference speed.
arXiv Detail & Related papers (2021-04-12T17:59:16Z)
- Bi-GCN: Binary Graph Convolutional Network [57.733849700089955]
We propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and input node features.
Our Bi-GCN can reduce the memory consumption by an average of 30x for both the network parameters and input data, and accelerate the inference speed by an average of 47x.
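As an illustration of binarizing both features and weights, here is an
XNOR-Net-style sketch (the scaling scheme is an assumption; see the paper
for Bi-GCN's exact binarization and back-propagation rules):

```python
import torch

def binarize(t):
    # Approximate t by alpha * sign(t) with a per-tensor scale
    # alpha = mean(|t|); note torch.sign maps exact zeros to 0.
    alpha = t.abs().mean()
    return alpha, torch.sign(t)

def binary_gcn_layer(adj_norm, x, weight):
    # Binarize the input node features and the layer weights; the float
    # matmuls below stand in for the XNOR/popcount kernels that yield
    # the actual memory and speed savings on real hardware.
    a_x, xb = binarize(x)
    a_w, wb = binarize(weight)
    return (a_x * a_w) * (adj_norm @ xb @ wb)
```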
arXiv Detail & Related papers (2020-10-15T07:26:23Z)
- DeeperGCN: All You Need to Train Deeper GCNs [66.64739331859226]
Graph Convolutional Networks (GCNs) have been drawing significant attention with the power of representation learning on graphs.
Unlike Convolutional Neural Networks (CNNs), which are able to take advantage of stacking very deep layers, GCNs suffer from vanishing gradient, over-smoothing and over-fitting issues when going deeper.
This paper proposes DeeperGCN that is capable of successfully and reliably training very deep GCNs.
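One standard ingredient for keeping very deep GCNs trainable is a
pre-activation residual block; the sketch below is a simplified stand-in
(DeeperGCN's actual contributions include learnable generalized aggregation
functions and message normalization, which are not reproduced here):

```python
import torch
import torch.nn.functional as F

class ResGCNBlock(torch.nn.Module):
    """Pre-activation residual block: h + W(A_norm @ relu(norm(h))).
    Mean aggregation via a normalized adjacency is a simplification of
    DeeperGCN's learnable generalized aggregators."""
    def __init__(self, dim):
        super().__init__()
        self.norm = torch.nn.LayerNorm(dim)
        self.lin = torch.nn.Linear(dim, dim)

    def forward(self, adj_norm, h):
        u = F.relu(self.norm(h))   # pre-activation: norm then ReLU
        u = adj_norm @ u           # neighborhood aggregation
        return h + self.lin(u)     # residual connection eases depth
```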
arXiv Detail & Related papers (2020-06-13T23:00:22Z)