DGNN-Booster: A Generic FPGA Accelerator Framework For Dynamic Graph
Neural Network Inference
- URL: http://arxiv.org/abs/2304.06831v1
- Date: Thu, 13 Apr 2023 21:50:23 GMT
- Title: DGNN-Booster: A Generic FPGA Accelerator Framework For Dynamic Graph
Neural Network Inference
- Authors: Hanqiu Chen and Cong Hao
- Abstract summary: We propose DGNN-Booster, which is a novel Field-Programmable Gate Array (FPGA) accelerator framework for real-time DGNN inference.
We show that DGNN-Booster can achieve a speedup of up to 5.6x compared to the CPU baseline (6226R), 8.4x compared to the GPU baseline (A6000) and 2.1x compared to the FPGA baseline.
- Score: 2.2721856484014373
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Dynamic Graph Neural Networks (DGNNs) are becoming increasingly popular due
to their effectiveness in analyzing and predicting the evolution of complex
interconnected graph-based systems. However, hardware deployment of DGNNs
remains a challenge. First, DGNNs do not fully utilize hardware resources
because temporal data dependencies cause low hardware parallelism.
Additionally, there is currently a lack of generic DGNN hardware accelerator
frameworks, and existing GNN accelerator frameworks have limited ability to
handle dynamic graphs with changing topologies and node features. To address
the aforementioned challenges, in this paper, we propose DGNN-Booster, which is
a novel Field-Programmable Gate Array (FPGA) accelerator framework for
real-time DGNN inference using High-Level Synthesis (HLS). It includes two
different FPGA accelerator designs with different dataflows that can support
the most widely used DGNNs. We showcase the effectiveness of our designs by
implementing and evaluating two representative DGNN models on the ZCU102 board
and measuring end-to-end performance. The experimental results demonstrate that
DGNN-Booster achieves a speedup of up to 5.6x over the CPU baseline (6226R),
8.4x over the GPU baseline (A6000), and 2.1x over the FPGA baseline without the
optimizations proposed in this paper. Moreover, DGNN-Booster achieves over 100x
and over 1000x higher runtime energy efficiency than the CPU and GPU baselines,
respectively. Our implementation code and
on-board measurements are publicly available at
https://github.com/sharc-lab/DGNN-Booster.
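For context, a minimal Python sketch of the snapshot-based DGNN inference pattern the abstract targets, assuming a GCN-style spatial layer followed by a GRU-style temporal update; all shapes, weights, and the identity adjacency are illustrative placeholders, not DGNN-Booster's actual kernels. The loop-carried dependence on the hidden state H is the temporal data dependency that limits hardware parallelism.

```python
import numpy as np

def gcn_layer(A_hat, X, W):
    """One graph convolution: normalized adjacency x features x weights, then ReLU."""
    return np.maximum(A_hat @ X @ W, 0.0)

def gru_update(H_prev, Z, Wz, Wr, Wh):
    """Simplified GRU-style update mixing the new spatial embedding Z
    with the hidden state carried over from the previous snapshot."""
    u = 1.0 / (1.0 + np.exp(-(Z @ Wz[0] + H_prev @ Wz[1])))  # update gate
    r = 1.0 / (1.0 + np.exp(-(Z @ Wr[0] + H_prev @ Wr[1])))  # reset gate
    h_tilde = np.tanh(Z @ Wh[0] + (r * H_prev) @ Wh[1])
    return (1 - u) * H_prev + u * h_tilde

rng = np.random.default_rng(0)
n, f, d, T = 8, 16, 16, 4                  # nodes, feature dim, hidden dim, snapshots
W = rng.standard_normal((f, d)) * 0.1
gates = [(rng.standard_normal((d, d)) * 0.1,
          rng.standard_normal((d, d)) * 0.1) for _ in range(3)]

H = np.zeros((n, d))
for t in range(T):                          # snapshots must execute in order
    A_hat = np.eye(n)                       # placeholder normalized adjacency at time t
    X_t = rng.standard_normal((n, f))       # node features at time t
    Z = gcn_layer(A_hat, X_t, W)            # spatial part: parallelism-friendly
    H = gru_update(H, Z, *gates)            # temporal part: serial dependency on H
```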
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs), which combine
spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
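A hedged numpy sketch of the combination idea, assuming one hop of spatial message passing plus a spectral filter acting on the k lowest Laplacian eigenpairs; the exponential spectral response and all shapes are illustrative, not the paper's parametrization.

```python
import numpy as np

def s2_layer(A, X, W_spatial, theta, k=4):
    """Combine a spatial filter (one hop of message passing) with a
    spectral filter applied on the k lowest Laplacian eigenpairs."""
    L = np.diag(A.sum(axis=1)) - A           # unnormalized graph Laplacian
    lam, U = np.linalg.eigh(L)
    Uk, lam_k = U[:, :k], lam[:k]            # low-frequency eigenpairs
    spatial = A @ X @ W_spatial              # spatially parametrized part
    g = np.exp(-theta * lam_k)               # illustrative spectral response g(lambda)
    spectral = Uk @ (g[:, None] * (Uk.T @ X))
    return spatial + spectral

rng = np.random.default_rng(1)
A = (rng.random((10, 10)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T               # symmetric adjacency, no self-loops
X = rng.standard_normal((10, 5))
out = s2_layer(A, X, rng.standard_normal((5, 5)) * 0.1, theta=1.0)
```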
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- GNNHLS: Evaluating Graph Neural Network Inference via High-Level Synthesis [8.036399595635034]
We propose GNNHLS, an open-source framework to comprehensively evaluate GNN inference acceleration on FPGAs via HLS.
We evaluate GNNHLS on 4 graph datasets with distinct topologies and scales.
GNNHLS achieves up to 50.8x speedup and 423x energy reduction relative to the CPU baselines.
arXiv Detail & Related papers (2023-09-27T20:58:33Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling-based training scheme, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Bottleneck Analysis of Dynamic Graph Neural Network Inference on CPU and GPU [3.4214598355901638]
Dynamic graph neural networks (DGNNs) are becoming increasingly popular because of their widespread use in capturing dynamic features in the real world.
However, deploying DGNNs on hardware presents additional challenges due to model complexity, diversity, and the nature of their time dependency.
We select eight prevailing DGNNs with different characteristics and profile them on both CPU and GPU.
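A minimal sketch of the kind of stage-level wall-clock profiling such a study performs, assuming a DGNN split into a message-passing stage and a recurrent-update stage; the stages, sizes, and timing harness are illustrative, not the paper's methodology.

```python
import time
import numpy as np

def profile_stage(fn, repeats=10):
    """Wall-clock a stage over several repeats and report the mean seconds."""
    t0 = time.perf_counter()
    for _ in range(repeats):
        fn()
    return (time.perf_counter() - t0) / repeats

rng = np.random.default_rng(2)
n, d = 2000, 64
A = (rng.random((n, n)) < 0.01).astype(float)     # sparse-ish adjacency, dense storage
X, H = rng.standard_normal((n, d)), rng.standard_normal((n, d))
W, U = rng.standard_normal((d, d)), rng.standard_normal((d, d))

spatial_ms = profile_stage(lambda: np.maximum(A @ X @ W, 0.0)) * 1e3
temporal_ms = profile_stage(lambda: np.tanh(X @ W + H @ U)) * 1e3
print(f"message passing: {spatial_ms:.2f} ms, recurrent update: {temporal_ms:.2f} ms")
```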
arXiv Detail & Related papers (2022-10-08T03:41:50Z)
- FlowGNN: A Dataflow Architecture for Universal Graph Neural Network Inference via Multi-Queue Streaming [1.566528527065232]
Graph neural networks (GNNs) have recently exploded in popularity thanks to their broad applicability to graph-related problems.
Meeting the demand for novel GNN models and fast inference simultaneously is challenging because of the gap between developing efficient accelerators and the rapid creation of new GNN models.
We propose a generic dataflow architecture for GNN acceleration, named FlowGNN, which can flexibly support the majority of message-passing GNNs.
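A hedged software analogue of queue-based message passing: a producer thread streams (destination, message) pairs while a consumer scatter-accumulates them, loosely mimicking a multi-queue dataflow. FlowGNN itself is a hardware architecture, so this only illustrates the streaming idea.

```python
import queue
import threading
import numpy as np

def gather_scatter_stream(edges, X, num_nodes):
    """Stream edge messages through a queue: one thread produces
    (dst, message) pairs, another accumulates them per node."""
    q = queue.Queue(maxsize=64)
    out = np.zeros((num_nodes, X.shape[1]))

    def producer():
        for src, dst in edges:
            q.put((dst, X[src]))      # message = source node features
        q.put(None)                   # end-of-stream marker

    def consumer():
        while (item := q.get()) is not None:
            dst, msg = item
            out[dst] += msg           # scatter-accumulate per destination

    t = threading.Thread(target=producer)
    t.start()
    consumer()
    t.join()
    return out

rng = np.random.default_rng(3)
X = rng.standard_normal((6, 4))
edges = [(0, 1), (1, 2), (2, 0), (3, 4), (4, 5)]
agg = gather_scatter_stream(edges, X, num_nodes=6)
```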
arXiv Detail & Related papers (2022-04-27T17:59:25Z)
- GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration [1.460161657933122]
We propose a generic GNN acceleration framework using High-Level Synthesis (HLS), named GenGNN.
We aim to deliver ultra-fast GNN inference without any graph pre-processing for real-time requirements.
We verify our implementation on-board on the Xilinx Alveo U50 FPGA and observe a speed-up of up to 25x against the CPU (6226R) baseline and 13x against the GPU (A6000) baseline.
arXiv Detail & Related papers (2022-01-20T22:30:59Z)
- BlockGNN: Towards Efficient GNN Acceleration Using Block-Circulant Weight Matrices [9.406007544032848]
Graph Neural Networks (GNNs) are state-of-the-art algorithms for analyzing non-Euclidean graph data.
How to perform GNN inference in real time has become a challenging problem for resource-limited edge-computing platforms.
We propose BlockGNN, a software-hardware co-design approach to realize efficient GNN acceleration.
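A hedged sketch of why block-circulant weights help: each b-by-b circulant block is stored as just its first column and multiplied via FFT in O(b log b) instead of O(b^2); block size and shapes are illustrative.

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix with first column c by vector x,
    via FFT-domain elementwise multiply (circular convolution)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def block_circulant_matvec(first_cols, x, b):
    """The weight matrix is a grid of b-by-b circulant blocks, each stored
    as its first column only (b values instead of b*b)."""
    rows, cols = first_cols.shape[:2]
    y = np.zeros(rows * b)
    for i in range(rows):
        for j in range(cols):
            y[i*b:(i+1)*b] += circulant_matvec(first_cols[i, j], x[j*b:(j+1)*b])
    return y

rng = np.random.default_rng(4)
b, rows, cols = 4, 2, 3                      # block size and block grid
first_cols = rng.standard_normal((rows, cols, b))
x = rng.standard_normal(cols * b)
y = block_circulant_matvec(first_cols, x, b)
```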
arXiv Detail & Related papers (2021-04-13T14:09:22Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of a core sub-dataset and a sparse sub-network.
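A hedged sketch of the joint-sparsification idea, substituting simple magnitude thresholding for UGS's learned masks: binary masks are applied to both the adjacency matrix and the weights, and the surviving pair is a graph-lottery-ticket candidate.

```python
import numpy as np

def magnitude_mask(M, sparsity):
    """Binary mask keeping the largest-magnitude entries of M."""
    k = int(M.size * (1 - sparsity))
    thresh = np.sort(np.abs(M).ravel())[-k] if k > 0 else np.inf
    return (np.abs(M) >= thresh).astype(float)

rng = np.random.default_rng(5)
A = rng.random((20, 20)) * (rng.random((20, 20)) < 0.4)   # weighted adjacency
W = rng.standard_normal((16, 16))                         # GNN layer weights

m_g = magnitude_mask(A, sparsity=0.5)    # graph mask: prune half the entries
m_w = magnitude_mask(W, sparsity=0.8)    # weight mask: prune 80% of weights
A_sparse, W_sparse = A * m_g, W * m_w    # a "graph lottery ticket" candidate pair
```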
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
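A hedged sketch of the two generative pre-training signals the framework is built around, attribute generation and edge generation, scored here with a fixed linear decoder and dot-product edge logits; the losses, decoder, and masked sets are illustrative, not GPT-GNN's factorized objective.

```python
import numpy as np

rng = np.random.default_rng(6)
n, d, f = 10, 8, 5
H = rng.standard_normal((n, d))            # node embeddings from some GNN encoder
X = rng.standard_normal((n, f))            # ground-truth node attributes
W_dec = rng.standard_normal((d, f)) * 0.1  # illustrative linear attribute decoder

masked = np.zeros(n, dtype=bool)
masked[:3] = True                          # nodes whose attributes were held out
attr_loss = np.mean((H[masked] @ W_dec - X[masked]) ** 2)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

pos = [(0, 1), (2, 3)]                     # held-out true edges (illustrative)
neg = [(0, 4), (2, 7)]                     # sampled non-edges (illustrative)
edge_loss = -np.mean([np.log(sigmoid(H[u] @ H[v])) for u, v in pos] +
                     [np.log(1.0 - sigmoid(H[u] @ H[v])) for u, v in neg])
pretrain_loss = attr_loss + edge_loss      # joint generative pre-training signal
```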
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
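A hedged sketch of the plug-in idea, assuming the module simply concatenates the leading eigenvectors of the adjacency matrix onto the node features so any downstream GNN sees explicit structural coordinates; the choice of k and of the raw (rather than normalized) adjacency are illustrative.

```python
import numpy as np

def eigen_plugin(A, X, k=4):
    """Concatenate the k eigenvectors of largest |eigenvalue| of the
    symmetric adjacency matrix onto the node features."""
    lam, U = np.linalg.eigh(A)
    topk = U[:, np.argsort(-np.abs(lam))[:k]]
    return np.concatenate([X, topk], axis=1)

rng = np.random.default_rng(7)
A = (rng.random((12, 12)) < 0.3).astype(float)
A = np.triu(A, 1); A = A + A.T                  # symmetric adjacency
X = rng.standard_normal((12, 6))
X_aug = eigen_plugin(A, X)                      # shape (12, 6 + 4)
```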
arXiv Detail & Related papers (2020-06-08T02:47:38Z)
- Binarized Graph Neural Network [65.20589262811677]
We develop a binarized graph neural network to learn the binary representations of the nodes with binary network parameters.
Our proposed method can be seamlessly integrated into the existing GNN-based embedding approaches.
Experiments indicate that the proposed binarized graph neural network, namely BGN, is orders of magnitude more efficient in terms of both time and space.
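A hedged sketch of sign-binarization with a per-matrix scale, applied to both the weights and the node representations; BGN's actual training procedure (e.g., gradient estimators for the non-differentiable sign) is not shown.

```python
import numpy as np

def binarize(M):
    """Sign-binarize M, keeping one float scale so magnitudes survive."""
    alpha = np.mean(np.abs(M))
    return alpha * np.sign(M), alpha

rng = np.random.default_rng(8)
A_hat = np.eye(6)                          # placeholder normalized adjacency
X = rng.standard_normal((6, 8))
W = rng.standard_normal((8, 4))

Wb, _ = binarize(W)                        # 1-bit weights (plus one float scale)
H = A_hat @ X @ Wb                         # aggregation with binarized weights
Hb, _ = binarize(np.maximum(H, 0.0))       # binarized node representations
```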
arXiv Detail & Related papers (2020-04-19T09:43:14Z)