GROW: A Row-Stationary Sparse-Dense GEMM Accelerator for
Memory-Efficient Graph Convolutional Neural Networks
- URL: http://arxiv.org/abs/2203.00158v2
- Date: Wed, 2 Mar 2022 06:18:19 GMT
- Title: GROW: A Row-Stationary Sparse-Dense GEMM Accelerator for
Memory-Efficient Graph Convolutional Neural Networks
- Authors: Minhoo Kang, Ranggi Hwang, Jiwon Lee, Dongyun Kam, Youngjoo Lee,
Minsoo Rhu
- Abstract summary: A unique property of graph convolutional neural networks (GCNs) is that their two primary execution stages, aggregation and combination, exhibit drastically different dataflows.
We present GROW, a GCN accelerator based on Gustavson's algorithm to architect a row-wise product based sparse-dense GEMM accelerator.
- Score: 4.669338722185048
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Graph convolutional neural networks (GCNs) have emerged as a key technology
in various application domains where the input data is relational. A unique
property of GCNs is that their two primary execution stages, aggregation and
combination, exhibit drastically different dataflows. Consequently, prior GCN
accelerators tackle this research space by casting the aggregation and
combination stages as a series of sparse-dense matrix multiplications.
However, prior work frequently suffers from inefficient data movement, leaving
significant performance on the table. We present GROW, a GCN accelerator
based on Gustavson's algorithm, architecting a row-wise-product-based
sparse-dense GEMM accelerator. GROW co-designs software and hardware to
strike a balance between locality and parallelism for GCNs, achieving
significant energy-efficiency improvements over state-of-the-art GCN
accelerators.
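To make the row-stationary dataflow concrete, here is a minimal software sketch of Gustavson's row-wise product for sparse-dense GEMM, the computation pattern the abstract describes. The function name and the pure-Python loop are illustrative reference code, not GROW's hardware implementation.

```python
import numpy as np
from scipy.sparse import random as sparse_random

def row_wise_spmm(A_csr, B):
    """Sparse-dense GEMM C = A @ B via Gustavson's row-wise product."""
    C = np.zeros((A_csr.shape[0], B.shape[1]), dtype=B.dtype)
    indptr, indices, data = A_csr.indptr, A_csr.indices, A_csr.data
    for i in range(A_csr.shape[0]):                # output row i stays stationary
        for p in range(indptr[i], indptr[i + 1]):  # nonzeros in row i of A
            C[i] += data[p] * B[indices[p]]        # scale-and-accumulate a row of B
    return C

A = sparse_random(128, 128, density=0.05, format="csr", random_state=0)
B = np.random.rand(128, 16)
assert np.allclose(row_wise_spmm(A, B), A @ B)
```

Because each output row depends only on one row of the sparse matrix and the dense rows it selects, rows can be processed independently, which is what makes this dataflow amenable to parallel processing elements.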
Related papers
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN); a toy sketch of the pre-computation idea follows below.
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
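The two ingredients named in the summary, one-time message passing done as a pre-computation and a random projection to keep the resulting tensors small, can be sketched in a few lines. This toy version uses a homogeneous graph and a plain Gaussian projection, which are our simplifications, not RpHGNN's actual hybrid scheme.

```python
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(0)
n_nodes, in_dim, proj_dim = 1000, 256, 32

A = sparse_random(n_nodes, n_nodes, density=0.01, format="csr", random_state=0)
X = rng.standard_normal((n_nodes, in_dim))

# One-time message passing: aggregate neighbor features once, offline.
H = A @ X                                          # (n_nodes, in_dim)

# Random projection (Johnson-Lindenstrauss style) compresses the result.
R = rng.standard_normal((in_dim, proj_dim)) / np.sqrt(proj_dim)
H_small = H @ R                                    # (n_nodes, proj_dim)
```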
- Accel-GCN: High-Performance GPU Accelerator Design for Graph Convolution Networks [12.181052673940465]
Graph Convolutional Networks (GCNs) are pivotal in extracting latent information from graph data across various domains.
We present Accel-GCN, a GPU accelerator architecture for GCNs.
Evaluation of Accel-GCN across 18 benchmark graphs shows that it outperforms cuSPARSE, GNNAdvisor, and graph-BLAST by factors of 1.17x, 1.86x, and 2.94x, respectively.
arXiv Detail & Related papers (2023-08-22T23:12:17Z)
- SGCN: Exploiting Compressed-Sparse Features in Deep Graph Convolutional Network Accelerators [6.582242235154822]
Graph convolutional networks (GCNs) are becoming increasingly popular as they overcome the limited applicability of prior neural networks.
In this paper, we propose SGCN, a fast and energy-efficient GCN accelerator.
We show that SGCN achieves a 1.71x speedup and 43.9% higher energy efficiency compared to existing accelerators (a compressed-sparse-feature illustration follows below).
arXiv Detail & Related papers (2023-01-25T02:34:01Z)
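The premise behind compressed-sparse features can be illustrated in software: after a ReLU, intermediate GCN feature matrices are largely zero, so storing them compressed lets the following multiplication skip the zeros. The sizes and the CSR choice below are illustrative; SGCN's actual on-chip format is not shown.

```python
import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(1)
# ReLU output of a hypothetical hidden layer: mostly zeros by construction.
H = np.maximum(rng.standard_normal((2048, 128)) - 1.0, 0.0)

H_csr = csr_matrix(H)                                # compressed-sparse feature storage
print(f"feature density: {H_csr.nnz / H.size:.1%}")  # roughly 16% nonzeros

W = rng.standard_normal((128, 64))
out = H_csr @ W                                      # multiplication touches only nonzeros
```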
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensemble-based training method, named EnGCN, to address the existing issues.
Our proposed method achieves new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- H-GCN: A Graph Convolutional Network Accelerator on Versal ACAP Architecture [13.149863422504332]
H-GCN partitions each graph into three subgraphs based on its inherent heterogeneity and processes them using the PL and AIE engines (a degree-based partitioning sketch follows below).
Compared with state-of-the-art GNN accelerators, H-GCN achieves speedups of 1.1-2.3X on average.
arXiv Detail & Related papers (2022-06-28T03:37:31Z)
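As a hedged illustration of partitioning a graph by its structural heterogeneity, the sketch below buckets adjacency rows into three groups by degree; the thresholds and any mapping onto Versal PL/AIE engines are our assumptions, not H-GCN's published method.

```python
import numpy as np
from scipy.sparse import random as sparse_random

A = sparse_random(4096, 4096, density=0.005, format="csr", random_state=2)
row_nnz = np.diff(A.indptr)            # nonzeros per row = (out-)degree per node

high   = np.where(row_nnz >= 40)[0]    # dense, high-degree region
medium = np.where((row_nnz >= 10) & (row_nnz < 40))[0]
low    = np.where(row_nnz < 10)[0]     # long tail of low-degree nodes
# Each bucket would then be dispatched to the compute engine that suits it best.
print(len(high), len(medium), len(low))
```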
- COIN: Communication-Aware In-Memory Acceleration for Graph Convolutional Networks [2.620532065450903]
Graph convolutional networks (GCNs) have shown remarkable learning capabilities when processing graph-structured data.
This paper presents a communication-aware in-memory computing architecture (COIN) for GCN hardware acceleration.
arXiv Detail & Related papers (2022-05-15T15:29:42Z)
- GCoD: Graph Convolutional Network Acceleration via Dedicated Algorithm and Accelerator Co-Design [27.311994997480745]
Graph Convolutional Networks (GCNs) have emerged as the state-of-the-art graph learning model.
Inference with GCNs over large graph datasets can be notoriously challenging.
This paper proposes a GCN algorithm and accelerator co-design framework, dubbed GCoD, which largely alleviates the irregularity of GCN workloads.
arXiv Detail & Related papers (2021-12-22T00:30:50Z)
- Towards Efficient Graph Convolutional Networks for Point Cloud Handling [181.59146413326056]
We aim to improve the computational efficiency of graph convolutional networks (GCNs) for learning on point clouds.
A series of experiments shows that the optimized networks have reduced computational complexity, decreased memory consumption, and accelerated inference speed.
arXiv Detail & Related papers (2021-04-12T17:59:16Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, defining a graph lottery ticket (GLT) as a pair of a core sub-dataset and a sparse sub-network (a magnitude-pruning sketch follows below).
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
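A minimal sketch of the joint pruning the UGS summary describes, using one-shot magnitude pruning as a stand-in for UGS's learned differentiable masks; `magnitude_prune` and the sparsity level are our illustrative choices.

```python
import numpy as np

def magnitude_prune(M, frac):
    """Zero out the smallest-magnitude `frac` of M's nonzero entries."""
    vals = np.abs(M[M != 0])
    thresh = np.quantile(vals, frac)
    return np.where(np.abs(M) >= thresh, M, 0.0)

rng = np.random.default_rng(3)
A = rng.random((512, 512)) * (rng.random((512, 512)) < 0.05)  # weighted adjacency
W = rng.standard_normal((64, 32))                             # GCN layer weights

A_pruned = magnitude_prune(A, 0.20)   # drop the 20% weakest edges
W_pruned = magnitude_prune(W, 0.20)   # drop the 20% smallest weights
```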
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space (a sign-binarization sketch follows below).
arXiv Detail & Related papers (2020-04-19T09:43:14Z)
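A toy version of the binarization idea: sign-binarize node representations and weights so each value carries one bit, and the matrix multiply could, on suitable hardware, become XNOR/popcount. Straight-through training estimators and BGN's actual layer design are omitted; this only illustrates the storage and compute saving.

```python
import numpy as np

rng = np.random.default_rng(4)
H = rng.standard_normal((1000, 64))       # real-valued node embeddings
W = rng.standard_normal((64, 64))         # real-valued layer weights

H_bin = np.sign(H).astype(np.int8)        # {-1, +1}: one bit of information each
W_bin = np.sign(W).astype(np.int8)

# Integer matmul stands in for the XNOR/popcount kernels binary hardware uses.
out = H_bin.astype(np.int32) @ W_bin.astype(np.int32)
```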
- Graph Highway Networks [77.38665506495553]
Graph Convolutional Networks (GCNs) are widely used in learning graph representations due to their effectiveness and efficiency.
They suffer from the notorious over-smoothing problem, in which the learned representations converge to similar vectors when many layers are stacked.
We propose Graph Highway Networks (GHNet), which use gating units to balance the trade-off between homogeneity and heterogeneity in the GCN learning process (a gating sketch follows below).
arXiv Detail & Related papers (2020-04-09T16:26:43Z)
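A hedged sketch of a highway-style gate of the kind the GHNet summary describes: a learned sigmoid gate mixes each node's own ("homogeneous") features with the neighborhood-aggregated ones, damping over-smoothing as depth grows. The exact parameterization below is an assumption, not GHNet's published formulation.

```python
import numpy as np

def gated_layer(H_self, H_agg, W_gate, b_gate):
    """Highway-style blend of self and neighborhood-aggregated features."""
    g = 1.0 / (1.0 + np.exp(-(H_self @ W_gate + b_gate)))  # per-feature gate in (0, 1)
    return g * H_self + (1.0 - g) * H_agg

rng = np.random.default_rng(5)
n, d = 100, 16
H_self = rng.standard_normal((n, d))       # a node's own representation
H_agg  = rng.standard_normal((n, d))       # neighborhood aggregation, e.g. A_hat @ H
out = gated_layer(H_self, H_agg, 0.1 * rng.standard_normal((d, d)), np.zeros(d))
```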