VersaGNN: a Versatile accelerator for Graph neural networks
- URL: http://arxiv.org/abs/2105.01280v1
- Date: Tue, 4 May 2021 04:10:48 GMT
- Title: VersaGNN: a Versatile accelerator for Graph neural networks
- Authors: Feng Shi, Ahren Yiqiao Jin, Song-Chun Zhu
- Abstract summary: We propose \textit{VersaGNN}, an ultra-efficient, systolic-array-based versatile hardware accelerator.
\textit{VersaGNN} achieves on average 3712$\times$ speedup with 1301.25$\times$ energy reduction on CPU, and 35.4$\times$ speedup with 17.66$\times$ energy reduction on GPU.
- Score: 81.1667080640009
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: \textit{Graph Neural Network} (GNN) is a promising approach for analyzing
graph-structured data that tactfully captures their dependency information via
node-level message passing. It has achieved state-of-the-art performance in
many tasks, such as node classification, graph matching, clustering, and graph
generation. As GNNs operate on non-Euclidean data, their irregular data access
patterns cause considerable computational costs and overhead on conventional
architectures, such as GPUs and CPUs. Our analysis shows that GNNs adopt a hybrid
computing model. The \textit{Aggregation} (or \textit{Message Passing}) phase
performs vector additions where vectors are fetched with irregular strides. The
\textit{Transformation} (or \textit{Node Embedding}) phase can be either dense
or sparse-dense matrix multiplication. In this work, we propose
\textit{VersaGNN}, an ultra-efficient, systolic-array-based versatile hardware
accelerator that unifies dense and sparse matrix multiplication. By applying
this single optimized systolic array to both aggregation and transformation
phases, we have significantly reduced chip sizes and energy consumption. We
then divide the computing engine into blocked systolic arrays to support the
\textit{Strassen}'s algorithm for dense matrix multiplication, dramatically
scaling down the number of multiplications and enabling high-throughput
computation of GNNs. To balance the workload of sparse-dense matrix
multiplication, we also introduce a greedy algorithm to combine sparse
sub-matrices of compressed format into condensed ones to reduce computational
cycles. Compared with current state-of-the-art GNN software frameworks,
\textit{VersaGNN} achieves on average 3712$\times$ speedup with 1301.25$\times$
energy reduction on CPU, and 35.4$\times$ speedup with 17.66$\times$ energy
reduction on GPU.
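To make the hybrid computing model concrete, here is a minimal NumPy/SciPy sketch (not the paper's implementation; the graph size, feature widths, and sparsity are illustrative assumptions) of one GNN layer as the two products described above: a sparse-dense multiplication with irregular row accesses for aggregation, and a dense multiplication for transformation.

    import numpy as np
    from scipy.sparse import random as sparse_random

    # Illustrative sizes only (assumed, not from the paper).
    num_nodes, in_feat, out_feat = 1000, 64, 32

    # Adjacency matrix in CSR form; its rows are fetched with irregular
    # strides, which is what makes aggregation costly on CPUs and GPUs.
    A = sparse_random(num_nodes, num_nodes, density=0.01, format="csr")

    H = np.random.rand(num_nodes, in_feat)   # node feature matrix
    W = np.random.rand(in_feat, out_feat)    # layer weight matrix

    # Aggregation (message passing): sparse-dense matrix multiplication,
    # i.e. each output row sums the feature vectors of a node's neighbors.
    aggregated = A @ H

    # Transformation (node embedding): dense matrix multiplication, the
    # phase a systolic array (with or without Strassen blocking) targets.
    H_next = aggregated @ W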
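The abstract also cites Strassen's algorithm for the dense phase. As a reminder of the arithmetic saving it provides, the short recursive sketch below (assuming square power-of-two matrices, illustrative only) replaces the eight block multiplications of a 2x2 partition with seven.

    import numpy as np

    def strassen(A, B, cutoff=64):
        # Strassen multiplication for square power-of-two matrices; falls
        # back to the ordinary product below the cutoff. Illustrative only.
        n = A.shape[0]
        if n <= cutoff:
            return A @ B
        h = n // 2
        A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
        B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]

        # Seven recursive products instead of eight.
        M1 = strassen(A11 + A22, B11 + B22, cutoff)
        M2 = strassen(A21 + A22, B11, cutoff)
        M3 = strassen(A11, B12 - B22, cutoff)
        M4 = strassen(A22, B21 - B11, cutoff)
        M5 = strassen(A11 + A12, B22, cutoff)
        M6 = strassen(A21 - A11, B11 + B12, cutoff)
        M7 = strassen(A12 - A22, B21 + B22, cutoff)

        C = np.empty_like(A)
        C[:h, :h] = M1 + M4 - M5 + M7
        C[:h, h:] = M3 + M5
        C[h:, :h] = M2 + M4
        C[h:, h:] = M1 - M2 + M3 + M6
        return C

    # Quick check against NumPy's dense product.
    X, Y = np.random.rand(128, 128), np.random.rand(128, 128)
    assert np.allclose(strassen(X, Y), X @ Y)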
Related papers
- CiliaGraph: Enabling Expression-enhanced Hyper-Dimensional Computation in Ultra-Lightweight and One-Shot Graph Classification on Edge [1.8726646412385333]
CiliaGraph is an enhanced expressive yet ultra-lightweight HDC model for graph classification.
CiliaGraph reduces memory usage and accelerates training speed by an average of 292 times.
arXiv Detail & Related papers (2024-05-29T12:22:59Z)
- Graph neural networks with configuration cross-attention for tensor compilers [0.157286095422595]
We propose TGraph, a neural graph architecture that allows screening for fast configurations of the target computational graph.
We estimate the potential CO$_2$ emission reduction associated with our work to be equivalent to over 50% of the total household emissions in areas hosting AI-oriented data centers.
arXiv Detail & Related papers (2024-05-26T16:39:19Z) - T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z) - Batch-efficient EigenDecomposition for Small and Medium Matrices [65.67315418971688]
EigenDecomposition (ED) is at the heart of many computer vision algorithms and applications.
We propose a QR-based ED method dedicated to the application scenarios of computer vision.
arXiv Detail & Related papers (2022-07-09T09:14:12Z) - Nimble GNN Embedding with Tensor-Train Decomposition [10.726368002799765]
This paper describes a new method for representing embedding tables of graph neural networks (GNNs) more compactly via tensor-train (TT) decomposition.
In some cases, our model without explicit node features on input can even match the accuracy of models that use node features.
arXiv Detail & Related papers (2022-06-21T17:57:35Z) - Neighbor2Seq: Deep Learning on Massive Graphs by Transforming Neighbors
to Sequences [55.329402218608365]
We propose the Neighbor2Seq to transform the hierarchical neighborhood of each node into a sequence.
We evaluate our method on a massive graph with more than 111 million nodes and 1.6 billion edges.
Results show that our proposed method is scalable to massive graphs and achieves superior performance across massive and medium-scale graphs.
arXiv Detail & Related papers (2022-02-07T16:38:36Z) - Instant Neural Graphics Primitives with a Multiresolution Hash Encoding [67.33850633281803]
We present a versatile new input encoding that permits the use of a smaller network without sacrificing quality.
A small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through gradient descent.
We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds.
arXiv Detail & Related papers (2022-01-16T07:22:47Z) - DistGNN: Scalable Distributed Training for Large-Scale Graph Neural
Networks [58.48833325238537]
Full-batch training on Graph Neural Networks (GNN) to learn the structure of large graphs is a critical problem that needs to scale to hundreds of compute nodes to be feasible.
In this paper, we present DistGNN, which optimizes the well-known Deep Graph Library (DGL) for full-batch training on CPU clusters.
Our results on four common GNN benchmark datasets show up to 3.7x speed-up using a single CPU socket and up to 97x speed-up using 128 CPU sockets.
arXiv Detail & Related papers (2021-04-14T08:46:35Z) - Accelerating Sparse DNN Models without Hardware-Support via Tile-Wise
Sparsity [12.643043455369297]
We propose an algorithm-software co-designed pruning method that achieves latency speedups on existing dense architectures.
We implement and evaluate the sparsity pattern on GPU tensor core, achieving a 1.95x speedup over the dense model.
arXiv Detail & Related papers (2020-08-29T16:27:41Z) - Reducing Communication in Graph Neural Network Training [0.0]
Graph Neural Networks (GNNs) are powerful and flexible neural networks that use the naturally sparse connectivity information of the data.
We introduce a family of parallel algorithms for training GNNs and show that they can asymptotically reduce communication compared to previous parallel GNN training methods.