GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration
- URL: http://arxiv.org/abs/2201.08475v1
- Date: Thu, 20 Jan 2022 22:30:59 GMT
- Title: GenGNN: A Generic FPGA Framework for Graph Neural Network Acceleration
- Authors: Stefan Abi-Karam, Yuqi He, Rishov Sarkar, Lakshmi Sathidevi, Zihang Qiao, Cong Hao
- Abstract summary: We propose a generic GNN acceleration framework using High-Level Synthesis (HLS), named GenGNN.
We aim to deliver ultra-fast GNN inference without any graph pre-processing for real-time requirements.
We verify our implementation on-board on the Xilinx Alveo U50 FPGA and observe a speed-up of up to 25x against the CPU (6226R) baseline and 13x against the GPU (A6000) baseline.
- Score: 1.460161657933122
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph neural networks (GNNs) have recently exploded in popularity thanks to
their broad applicability to ubiquitous graph-related problems such as quantum
chemistry, drug discovery, and high energy physics. However, meeting demand for
novel GNN models and fast inference simultaneously is challenging because of
the gap between the difficulty in developing efficient FPGA accelerators and
the rapid pace of creation of new GNN models. Prior art focuses on the
acceleration of specific classes of GNNs but lacks the generality to work
across existing models or to extend to new and emerging GNN models. In this
work, we propose a generic GNN acceleration framework using High-Level
Synthesis (HLS), named GenGNN, with two-fold goals. First, we aim to deliver
ultra-fast GNN inference without any graph pre-processing for real-time
requirements. Second, we aim to support a diverse set of GNN models with the
extensibility to flexibly adapt to new models. The framework features an
optimized message-passing structure applicable to all models, combined with a
rich library of model-specific components. We verify our implementation
on-board on the Xilinx Alveo U50 FPGA and observe a speed-up of up to 25x
against the CPU (6226R) baseline and 13x against the GPU (A6000) baseline. Our
HLS code will be open-sourced on GitHub upon acceptance.
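The "optimized message-passing structure applicable to all models" follows the standard gather-apply pattern. As a rough illustration only (GenGNN itself is written in C++ for HLS; this NumPy sketch shows the computation pattern, and all names in it are invented here):

```python
# Minimal gather-apply sketch of generic message passing (illustrative only;
# the actual GenGNN kernels are C++ for HLS, not Python).
import numpy as np

def message_passing_layer(x, edge_index, weight):
    """One generic message-passing layer.

    x          -- node features, shape (num_nodes, in_dim)
    edge_index -- array of (src, dst) pairs, shape (num_edges, 2)
    weight     -- dense transform, shape (in_dim, out_dim)
    """
    aggregated = np.zeros_like(x)
    # Gather: each edge carries the source node's features to the destination.
    for src, dst in edge_index:
        aggregated[dst] += x[src]  # sum aggregation; mean/max are also common
    # Apply: a model-specific update, here a simple linear transform + ReLU.
    return np.maximum(aggregated @ weight, 0.0)

# Toy usage: a 3-node path graph 0-1-2 with 4-dim features.
x = np.random.rand(3, 4).astype(np.float32)
edges = np.array([[0, 1], [1, 0], [1, 2], [2, 1]])
w = np.random.rand(4, 4).astype(np.float32)
print(message_passing_layer(x, edges, w).shape)  # (3, 4)
```

The split between the per-edge gather loop and the apply stage is what a model-agnostic skeleton exploits: the gather loop stays fixed while model-specific components from the library plug into the apply stage.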
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
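As a schematic reading of that combination (not the authors' implementation; every name here is invented for illustration), a layer could sum a message-passing branch and a Laplacian-eigenbasis filtering branch:

```python
# Schematic spatial + spectral graph filter, read off the abstract only.
import numpy as np

def s2_layer(x, adj, w_spatial, w_spectral, spectral_gain):
    deg = adj.sum(axis=1)
    # Spatial branch: one step of degree-normalized message passing.
    a_hat = adj / np.maximum(deg[:, None], 1.0)
    spatial = a_hat @ x @ w_spatial
    # Spectral branch: filter in the eigenbasis of the graph Laplacian.
    lap = np.diag(deg) - adj
    eigvals, eigvecs = np.linalg.eigh(lap)
    filtered = eigvecs @ np.diag(spectral_gain(eigvals)) @ eigvecs.T @ x
    return spatial + filtered @ w_spectral  # combine both parametrizations

# Toy usage on a 4-node cycle with a low-pass spectral gain exp(-lambda).
adj = np.array([[0,1,0,1],[1,0,1,0],[0,1,0,1],[1,0,1,0]], dtype=np.float32)
x = np.random.rand(4, 3).astype(np.float32)
w1 = np.random.rand(3, 3).astype(np.float32)
w2 = np.random.rand(3, 3).astype(np.float32)
print(s2_layer(x, adj, w1, w2, lambda lam: np.exp(-lam)).shape)  # (4, 3)
```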
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- Accelerating Generic Graph Neural Networks via Architecture, Compiler, Partition Method Co-Design [15.500725014235412]
Graph neural networks (GNNs) have shown significant accuracy improvements in a variety of graph learning domains.
It is essential to develop high-performance and efficient hardware acceleration for GNN models.
Designers face two fundamental challenges: the high bandwidth requirement of GNN models and the diversity of GNN models.
arXiv Detail & Related papers (2023-08-16T07:05:47Z)
- Graph Ladling: Shockingly Simple Parallel GNN Training without Intermediate Communication [100.51884192970499]
GNNs are a powerful family of neural networks for learning over graphs.
Scaling GNNs either by deepening or widening suffers from prevalent issues of unhealthy gradients, over-smoothing, and information squashing.
We propose not to deepen or widen current GNNs, but instead present a data-centric perspective of model soups tailored for GNNs.
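The model-soup idea referenced here originates outside graph learning: train several models independently and average their weights. A minimal uniform-soup sketch for GNNs (the paper's actual recipe may differ, e.g. greedy or data-partition-aware soups):

```python
# Minimal "model soup": average the parameters of independently trained GNNs
# instead of making a single GNN deeper or wider (simplified illustration).
import copy
import torch

def uniform_soup(models):
    """Return a new model whose parameters are the mean of the given models'."""
    soup = copy.deepcopy(models[0])
    with torch.no_grad():
        for name, param in soup.named_parameters():
            stacked = torch.stack(
                [dict(m.named_parameters())[name] for m in models])
            param.copy_(stacked.mean(dim=0))
    return soup
```

Uniform soups assume identical architectures, and in practice work best when the candidate models share an initialization, which is part of why the paper frames this as a parallel training strategy without intermediate communication.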
arXiv Detail & Related papers (2023-06-18T03:33:46Z)
- DGNN-Booster: A Generic FPGA Accelerator Framework For Dynamic Graph Neural Network Inference [2.2721856484014373]
We propose DGNN-Booster, a novel Field-Programmable Gate Array (FPGA) accelerator framework for real-time inference of dynamic graph neural networks (DGNNs).
We show that DGNN-Booster can achieve a speedup of up to 5.6x compared to the CPU baseline (6226R), 8.4x compared to the GPU baseline (A6000) and 2.1x compared to the FPGA baseline.
arXiv Detail & Related papers (2023-04-13T21:50:23Z)
- GNNBuilder: An Automated Framework for Generic Graph Neural Network Accelerator Generation, Simulation, and Optimization [2.2721856484014373]
We propose GNNBuilder, the first automated, generic, end-to-end GNN accelerator generation framework.
It features four advantages: (1) GNNBuilder can automatically generate GNN accelerators for a wide range of GNN models arbitrarily defined by users; (2) GNNBuilder takes the standard PyTorch programming interface, introducing zero overhead for algorithm developers; (3) GNNBuilder supports end-to-end code generation, simulation, accelerator optimization, and hardware deployment; (4) GNNBuilder is equipped with accurate performance models of its generated accelerators.
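Advantage (2) means the input to such a framework is an ordinary PyTorch model. For concreteness, the kind of model a PyTorch-facing generator would consume looks like the following (generic PyTorch Geometric code, not GNNBuilder's own API):

```python
# A standard PyTorch Geometric model definition of the kind a PyTorch-facing
# accelerator generator advertises taking as input (generic PyG, not the
# GNNBuilder API itself).
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv, global_mean_pool

class SimpleGNN(torch.nn.Module):
    def __init__(self, in_dim, hidden_dim, num_classes):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, hidden_dim)
        self.head = torch.nn.Linear(hidden_dim, num_classes)

    def forward(self, x, edge_index, batch):
        x = F.relu(self.conv1(x, edge_index))
        x = F.relu(self.conv2(x, edge_index))
        x = global_mean_pool(x, batch)  # graph-level readout
        return self.head(x)
```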
arXiv Detail & Related papers (2023-03-29T05:08:21Z)
- LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation [51.552170474958736]
We propose to capture long-distance dependencies in graphs with shallower models instead of deeper ones, which leads to a much more efficient model, LazyGNN, for graph representation learning.
LazyGNN is compatible with existing scalable approaches (such as sampling methods) for further acceleration through the development of mini-batch LazyGNN.
Comprehensive experiments demonstrate its superior prediction performance and scalability on large-scale benchmarks.
arXiv Detail & Related papers (2023-02-03T02:33:07Z)
- FlowGNN: A Dataflow Architecture for Universal Graph Neural Network Inference via Multi-Queue Streaming [1.566528527065232]
Graph neural networks (GNNs) have recently exploded in popularity thanks to their broad applicability to graph-related problems.
Meeting demand for novel GNN models and fast inference simultaneously is challenging because of the gap between developing efficient accelerators and the rapid creation of new GNN models.
We propose a generic dataflow architecture for GNN acceleration, named FlowGNN, which can flexibly support the majority of message-passing GNNs.
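A software analogy of multi-queue streaming (a schematic reading of the abstract; the real FlowGNN is an on-chip dataflow architecture, and the queue-routing scheme here is invented for illustration):

```python
# Software sketch of multi-queue message streaming: edges stream in and each
# message is routed to a queue by destination, then queues are drained during
# the node-update stage. Illustrative only; not the FlowGNN hardware design.
from collections import deque
import numpy as np

def streaming_layer(x, edge_stream, num_queues=4):
    queues = [deque() for _ in range(num_queues)]
    # Stage 1: stream edges, routing each message to a queue by destination.
    for src, dst in edge_stream:
        queues[dst % num_queues].append((dst, x[src]))
    # Stage 2: drain queues, accumulating messages at each destination node.
    out = np.zeros_like(x)
    for q in queues:
        while q:
            dst, msg = q.popleft()
            out[dst] += msg
    return out

# Toy usage on a 3-node directed cycle.
x = np.eye(3, dtype=np.float32)
print(streaming_layer(x, [(0, 1), (1, 2), (2, 0)]))
```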
arXiv Detail & Related papers (2022-04-27T17:59:25Z)
- BlockGNN: Towards Efficient GNN Acceleration Using Block-Circulant Weight Matrices [9.406007544032848]
Graph Neural Networks (GNNs) are state-of-the-art algorithms for analyzing non-Euclidean graph data.
Performing GNN inference in real time has become a challenging problem for resource-limited edge-computing platforms.
We propose BlockGNN, a software-hardware co-design approach to realize efficient GNN acceleration.
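The compression rests on block-circulant weight matrices: each b-by-b block is defined by a single length-b vector, so a block-vector product reduces to a circular convolution computable with FFTs. A sketch of that general technique (shapes and names here are illustrative, not BlockGNN's code):

```python
# Block-circulant matrix-vector product via FFT: a circulant matrix with
# first column c acts on v as the circular convolution of c and v.
import numpy as np

def circulant_matvec(c, v):
    """Multiply the circulant matrix defined by first column c with vector v."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(v)))

def block_circulant_matvec(blocks, x, b):
    """blocks[i][j] is the defining vector of the (i, j)-th b x b circulant block."""
    rows, cols = len(blocks), len(blocks[0])
    y = np.zeros(rows * b)
    for i in range(rows):
        for j in range(cols):
            y[i*b:(i+1)*b] += circulant_matvec(blocks[i][j], x[j*b:(j+1)*b])
    return y

# Toy usage: a 2x2 grid of 4x4 circulant blocks, i.e. an 8x8 weight matrix
# stored as just 16 parameters instead of 64.
b = 4
blocks = [[np.random.randn(b) for _ in range(2)] for _ in range(2)]
x = np.random.randn(2 * b)
print(block_circulant_matvec(blocks, x, b).shape)  # (8,)
```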
arXiv Detail & Related papers (2021-04-13T14:09:22Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of a core sub-dataset and a sparse sub-network.
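A simplified sketch of the joint-pruning idea (UGS learns differentiable masks end-to-end; plain magnitude pruning is substituted here for illustration):

```python
# Joint sparsification in the spirit of UGS: prune both the graph (adjacency
# entries) and the model weights. Magnitude pruning stands in for the paper's
# learned masks; this is a simplification, not the authors' method.
import numpy as np

def magnitude_prune(a, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of entries."""
    flat = np.abs(a).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return a.copy()
    threshold = np.partition(flat, k - 1)[k - 1]
    return np.where(np.abs(a) > threshold, a, 0.0)

adj = np.random.rand(5, 5)
weights = np.random.randn(16, 16)
adj_sparse = magnitude_prune(adj, 0.2)    # graph lottery ticket: sparse sub-graph...
w_sparse = magnitude_prune(weights, 0.5)  # ...paired with a sparse sub-network
```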
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- GPT-GNN: Generative Pre-Training of Graph Neural Networks [93.35945182085948]
Graph neural networks (GNNs) have been demonstrated to be powerful in modeling graph-structured data.
We present the GPT-GNN framework to initialize GNNs by generative pre-training.
We show that GPT-GNN significantly outperforms state-of-the-art GNN models without pre-training by up to 9.1% across various downstream tasks.
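A heavily simplified sketch of generative pre-training as an initializer: corrupt the graph, train the GNN to regenerate what was hidden, then fine-tune on the downstream task. GPT-GNN factorizes generation into attribute generation and edge generation; only an attribute-reconstruction step is shown here, and all function signatures are assumptions:

```python
# Simplified attribute-generation pre-training step (the edge-generation half
# of GPT-GNN is omitted; `gnn` and `decoder` signatures are assumed).
import torch
import torch.nn.functional as F

def pretrain_step(gnn, decoder, optimizer, x, edge_index, mask_rate=0.15):
    mask = torch.rand(x.size(0)) < mask_rate
    x_masked = x.clone()
    x_masked[mask] = 0.0                          # hide selected node attributes
    h = gnn(x_masked, edge_index)                 # encode the corrupted graph
    loss = F.mse_loss(decoder(h[mask]), x[mask])  # regenerate what was hidden
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```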
arXiv Detail & Related papers (2020-06-27T20:12:33Z)
- Eigen-GNN: A Graph Structure Preserving Plug-in for GNNs [95.63153473559865]
Graph Neural Networks (GNNs) are emerging machine learning models on graphs.
Most existing GNN models in practice are shallow and essentially feature-centric.
We show empirically and analytically that the existing shallow GNNs cannot preserve graph structures well.
We propose Eigen-GNN, a plug-in module to boost GNNs' ability to preserve graph structures.
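One natural reading of such a plug-in, consistent with the name though the paper's exact construction may differ: compute leading eigenvectors of the graph matrix and append them to the node features before any GNN runs, so the input itself carries structural information.

```python
# Illustrative eigen-feature plug-in: augment node features with the top
# eigenvectors of the graph structure (a reading of the abstract, not
# necessarily the authors' exact construction).
import numpy as np

def eigen_features(adj, x, d=8):
    """Concatenate the top-d adjacency eigenvectors onto node features."""
    eigvals, eigvecs = np.linalg.eigh(adj)    # symmetric adjacency assumed
    order = np.argsort(-np.abs(eigvals))[:d]  # largest-magnitude eigenpairs
    structure = eigvecs[:, order]
    return np.concatenate([x, structure], axis=1)  # feed this to any GNN
```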
arXiv Detail & Related papers (2020-06-08T02:47:38Z)