Bi-GCN: Binary Graph Convolutional Network
- URL: http://arxiv.org/abs/2010.07565v2
- Date: Thu, 8 Apr 2021 12:51:30 GMT
- Title: Bi-GCN: Binary Graph Convolutional Network
- Authors: Junfu Wang, Yunhong Wang, Zhen Yang, Liang Yang, Yuanfang Guo
- Abstract summary: We propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and input node features.
Our Bi-GCN can reduce the memory consumption by an average of 30x for both the network parameters and input data, and accelerate the inference speed by an average of 47x.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have achieved tremendous success in graph
representation learning. Unfortunately, current GNNs usually rely on loading
the entire attributed graph into the network for processing. This implicit
assumption may not hold with limited memory resources, especially when the
attributed graph is large. In this paper, we are the first to propose a Binary
Graph Convolutional Network (Bi-GCN), which binarizes both the network
parameters and the input node features. In addition, the original matrix
multiplications are revised to binary operations for acceleration. According
to our theoretical analysis, Bi-GCN can reduce the memory consumption by an
average of ~30x for both the network parameters and the input data, and
accelerate the inference speed by an average of ~47x, on the citation networks.
Meanwhile, we design a new gradient-approximation-based back-propagation method
to train Bi-GCN effectively. Extensive experiments demonstrate that Bi-GCN
achieves performance comparable to the full-precision baselines. Moreover, our
binarization approach can be easily applied to other GNNs, which has been
verified in the experiments.
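The binarization described in the abstract (sign-binarizing both weights and node features, with a scaling factor to preserve magnitudes) can be sketched as follows. This is a generic illustration in the style of XNOR-Net-like binary networks, not the paper's exact algorithm; the function names and the use of a single mean-absolute-value scale per tensor are assumptions for the example.

```python
import numpy as np

def binarize(x):
    """Sign-binarize a tensor to {-1, +1} with a scalar scale alpha.

    alpha (the mean absolute value) approximately preserves the magnitude
    of the original tensor; zeros are mapped to +1 to keep values binary.
    """
    alpha = np.abs(x).mean()
    return np.sign(np.where(x == 0, 1.0, x)), alpha

def binary_gcn_layer(A_hat, X, W):
    """One hypothetical binarized graph-convolution step.

    A_hat: normalized adjacency (kept full precision, as it encodes graph
    structure); X: node features; W: layer weights. Because X_b and W_b are
    binary, the product X_b @ W_b could be computed with XNOR/popcount
    operations on suitable hardware instead of floating-point multiplies.
    """
    X_b, alpha_x = binarize(X)
    W_b, alpha_w = binarize(W)
    return A_hat @ (X_b @ W_b) * (alpha_x * alpha_w)

# toy example: 3 nodes, 4 input features, 2 output dimensions
rng = np.random.default_rng(0)
A_hat = np.eye(3) + np.full((3, 3), 1 / 3)  # stand-in normalized adjacency
X = rng.standard_normal((3, 4))
W = rng.standard_normal((4, 2))
out = binary_gcn_layer(A_hat, X, W)
```

During training, the non-differentiable sign function is typically handled with a gradient approximation such as a straight-through estimator, which matches the paper's mention of a gradient-approximation-based back-propagation method.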
Related papers
- Cached Operator Reordering: A Unified View for Fast GNN Training [24.917363701638607]
Graph Neural Networks (GNNs) are a powerful tool for handling structured graph data and addressing tasks such as node classification, graph classification, and clustering.
However, the sparse nature of GNN computation poses new challenges for performance optimization compared to traditional deep neural networks.
We address these challenges by providing a unified view of GNN computation, I/O, and memory.
arXiv Detail & Related papers (2023-08-23T12:27:55Z)
- BitGNN: Unleashing the Performance Potential of Binary Graph Neural Networks on GPUs [19.254040098787893]
Recent studies have shown that Binary Graph Neural Networks (GNNs) are promising for reducing GNN computation through binarized tensors.
This work redesigns the binary GNN inference from the efficiency perspective.
Results on real-world graphs with GCNs, GraphSAGE, and GraphSAINT show that the proposed techniques outperform state-of-the-art binary GNN implementations by 8-22X with the same accuracy maintained.
arXiv Detail & Related papers (2023-05-04T03:20:25Z)
- Binary Graph Convolutional Network with Capacity Exploration [58.99478502486377]
We propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and input node attributes.
Our Bi-GCN can reduce the memory consumption by an average of 31x for both the network parameters and input data, and accelerate the inference speed by an average of 51x.
arXiv Detail & Related papers (2022-10-24T12:05:17Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Binary Graph Neural Networks [69.51765073772226]
Graph Neural Networks (GNNs) have emerged as a powerful and flexible framework for representation learning on irregular data.
In this paper, we present and evaluate different strategies for the binarization of graph neural networks.
We show that through careful design of the models, and control of the training process, binary graph neural networks can be trained at only a moderate cost in accuracy on challenging benchmarks.
arXiv Detail & Related papers (2020-12-31T18:48:58Z)
- Scaling Graph Neural Networks with Approximate PageRank [64.92311737049054]
We present the PPRGo model which utilizes an efficient approximation of information diffusion in GNNs.
In addition to being faster, PPRGo is inherently scalable, and can be trivially parallelized for large datasets like those found in industry settings.
We show that training PPRGo and predicting labels for all nodes in this graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph.
arXiv Detail & Related papers (2020-07-03T09:30:07Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
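Several of the papers above (Bi-GCN, BitGNN, Binarized Graph Neural Network) rely on replacing floating-point dot products over {-1, +1} vectors with bitwise operations. A minimal sketch of that trick, assuming vectors are packed into integers with bit 1 encoding +1 and bit 0 encoding -1 (an illustration, not any paper's actual GPU kernel):

```python
def popcount(x: int) -> int:
    """Count set bits in a non-negative integer."""
    return bin(x).count("1")

def binary_dot(a_bits: int, b_bits: int, n: int) -> int:
    """Dot product of two length-n {-1, +1} vectors packed as integers.

    Equal bits contribute +1 and differing bits contribute -1, so
    dot = (n - popcount(a XOR b)) - popcount(a XOR b)
        = n - 2 * popcount(a XOR b).
    """
    mask = (1 << n) - 1
    return n - 2 * popcount((a_bits ^ b_bits) & mask)

# [+1, +1, -1, +1] (bits 0..3 of 0b1011) vs [+1, -1, +1, +1] (0b1101):
# 2 agreements and 2 disagreements, so the dot product is 0.
result = binary_dot(0b1011, 0b1101, 4)  # -> 0
```

This is why binarizing both operands, as Bi-GCN does for features and weights, enables the reported inference speedups: a length-n dot product collapses to one XOR and one popcount.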
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.