Low Latency Edge Classification GNN for Particle Trajectory Tracking on
FPGAs
- URL: http://arxiv.org/abs/2306.11330v2
- Date: Tue, 27 Jun 2023 16:21:32 GMT
- Title: Low Latency Edge Classification GNN for Particle Trajectory Tracking on
FPGAs
- Authors: Shi-Yu Huang, Yun-Chen Yang, Yu-Ru Su, Bo-Cheng Lai, Javier Duarte,
Scott Hauck, Shih-Chieh Hsu, Jin-Xuan Hu, Mark S. Neubauer
- Abstract summary: This paper introduces a resource-efficient GNN architecture on FPGAs for low latency particle tracking.
Our results on Xilinx UltraScale+ VU9P demonstrate 1625x and 1574x performance improvements over CPU and GPU baselines, respectively.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In-time particle trajectory reconstruction in the Large Hadron Collider is
challenging due to the high collision rate and the large number of particle hits. Using
graph neural networks (GNNs) on FPGAs has enabled superior accuracy with flexible
trajectory classification. However, existing GNN architectures have inefficient
resource usage and insufficient parallelism for edge classification. This paper
introduces a resource-efficient GNN architecture on FPGAs for low latency
particle tracking. The modular architecture facilitates design scalability to
support large graphs. Leveraging the geometric properties of hit detectors
further reduces graph complexity and resource usage. Our results on Xilinx
UltraScale+ VU9P demonstrate 1625x and 1574x performance improvements over CPU
and GPU baselines, respectively.
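The edge-classification task the abstract describes can be sketched as a small interaction-network-style scorer: each edge is a candidate track segment joining two detector hits, and an edge network scores whether it belongs to a real trajectory. This is a minimal NumPy sketch under assumed shapes and random weights, not the paper's actual FPGA architecture; the feature layout, MLP sizes, and single message-passing round are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, b1, w2, b2):
    """Two-layer perceptron with ReLU, standing in for a trained edge network."""
    h = np.maximum(x @ w1 + b1, 0.0)
    return h @ w2 + b2

# Toy hit graph: 4 detector hits (nodes), 3 candidate track segments (edges).
x = rng.normal(size=(4, 3))                 # per-hit features, e.g. (r, phi, z)
edges = np.array([[0, 1], [1, 2], [2, 3]])  # (src, dst) hit-index pairs

# Randomly initialised edge-network weights; in practice these come from training.
w1, b1 = rng.normal(size=(6, 8)), np.zeros(8)
w2, b2 = rng.normal(size=(8, 1)), np.zeros(1)

def edge_scores(x, edges):
    """Score each candidate segment: concatenate endpoint features, apply the edge MLP."""
    pairs = np.concatenate([x[edges[:, 0]], x[edges[:, 1]]], axis=1)
    logits = mlp(pairs, w1, b1, w2, b2).ravel()
    return 1.0 / (1.0 + np.exp(-logits))    # sigmoid: probability edge is a real segment

def propagate(x, edges, scores):
    """One message-passing round: add score-weighted source-hit features to each destination."""
    agg = np.zeros_like(x)
    np.add.at(agg, edges[:, 1], scores[:, None] * x[edges[:, 0]])
    return x + agg

# Two-pass inference: score edges, refine node embeddings, re-score.
s0 = edge_scores(x, edges)
s1 = edge_scores(propagate(x, edges, s0), edges)
```

On an FPGA, the per-edge and per-node computations above are independent, which is what makes the pipeline amenable to the kind of parallel, low-latency implementation the paper targets.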
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs), which combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs overcome over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- GNNHLS: Evaluating Graph Neural Network Inference via High-Level Synthesis [8.036399595635034]
We propose GNNHLS, an open-source framework to comprehensively evaluate GNN inference acceleration on FPGAs via HLS.
We evaluate GNNHLS on 4 graph datasets with distinct topologies and scales.
GNNHLS achieves up to 50.8x speedup and 423x energy reduction relative to the CPU baselines.
arXiv Detail & Related papers (2023-09-27T20:58:33Z)
- DGNN-Booster: A Generic FPGA Accelerator Framework For Dynamic Graph Neural Network Inference [2.2721856484014373]
We propose DGNN-Booster, a novel Field-Programmable Gate Array (FPGA) accelerator framework for real-time DGNN inference.
We show that DGNN-Booster can achieve a speedup of up to 5.6x compared to the CPU baseline (6226R), 8.4x compared to the GPU baseline (A6000) and 2.1x compared to the FPGA baseline.
arXiv Detail & Related papers (2023-04-13T21:50:23Z)
- LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation [51.552170474958736]
We propose to capture long-distance dependency in graphs by shallower models instead of deeper models, which leads to a much more efficient model, LazyGNN, for graph representation learning.
LazyGNN is compatible with existing scalable approaches (such as sampling methods) for further accelerations through the development of mini-batch LazyGNN.
Comprehensive experiments demonstrate its superior prediction performance and scalability on large-scale benchmarks.
arXiv Detail & Related papers (2023-02-03T02:33:07Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training manner, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- LL-GNN: Low Latency Graph Neural Networks on FPGAs for High Energy Physics [45.666822327616046]
This work presents a novel reconfigurable architecture for Low Latency Graph Neural Network (LL-GNN) designs for particle detectors.
The LL-GNN design advances the next generation of trigger systems by enabling sophisticated algorithms to process experimental data efficiently.
arXiv Detail & Related papers (2022-09-28T12:55:35Z)
- Graph Neural Networks for Charged Particle Tracking on FPGAs [2.6402980149746913]
The determination of charged particle trajectories in collisions at the CERN Large Hadron Collider (LHC) is an important but challenging problem.
Graph neural networks (GNNs) are a type of geometric deep learning algorithm that has successfully been applied to this task.
We introduce an automated translation workflow, integrated into a broader tool called hls4ml, for converting GNNs into firmware for field-programmable gate arrays (FPGAs).
arXiv Detail & Related papers (2021-12-03T17:56:10Z)
- BlockGNN: Towards Efficient GNN Acceleration Using Block-Circulant Weight Matrices [9.406007544032848]
Graph Neural Networks (GNNs) are state-of-the-art algorithms for analyzing non-Euclidean graph data.
Performing GNN inference in real time is a challenging problem on resource-limited edge-computing platforms.
We propose BlockGNN, a software-hardware co-design approach to realize efficient GNN acceleration.
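The block-circulant idea behind BlockGNN can be illustrated in a few lines: each b×b block of a weight matrix is constrained to be circulant, so it is stored as a single length-b vector and applied via the FFT. This NumPy sketch shows only the generic compression/multiplication scheme, not BlockGNN's hardware pipeline; the block size and layout are illustrative.

```python
import numpy as np

def circulant_matvec(c, x):
    """Multiply the circulant matrix whose first column is c by vector x.
    Uses the FFT identity C @ x == ifft(fft(c) * fft(x)): O(b log b) vs. O(b^2)."""
    return np.real(np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)))

def block_circulant_matvec(blocks, x, b):
    """Apply a (p*b) x (q*b) block-circulant weight matrix to x.
    blocks[i][j] holds the first column (length b) of the (i, j) circulant block,
    so storage drops from p*q*b*b weights to p*q*b."""
    p, q = len(blocks), len(blocks[0])
    xs = x.reshape(q, b)
    out = np.zeros((p, b))
    for i in range(p):
        for j in range(q):
            out[i] += circulant_matvec(blocks[i][j], xs[j])
    return out.ravel()
```

The same structure is what makes a hardware implementation attractive: the multiply reduces to fixed-size FFTs and element-wise products, which map well onto DSP blocks.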
arXiv Detail & Related papers (2021-04-13T14:09:22Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
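A one-shot magnitude-pruning helper gives the flavour of the masks involved; note this is only an illustration, since UGS learns its masks during training and applies them jointly to the adjacency matrix and the weights rather than thresholding once.

```python
import numpy as np

def magnitude_prune(m, sparsity):
    """Zero the smallest-magnitude entries of m so roughly `sparsity` of them are removed.
    Illustrative one-shot threshold pruning, not UGS's trained joint masking."""
    k = int(round(m.size * sparsity))
    if k == 0:
        return m.copy()
    thresh = np.partition(np.abs(m).ravel(), k - 1)[k - 1]
    return np.where(np.abs(m) > thresh, m, 0.0)
```

Applied to both a weight matrix and a (dense) adjacency matrix, masks of this kind define the sparse sub-network and core sub-dataset that together form a graph lottery ticket.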
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Fast Graph Attention Networks Using Effective Resistance Based Graph Sparsification [70.50751397870972]
FastGAT is a method to make attention-based GNNs lightweight by using spectral sparsification to generate an optimal pruning of the input graph.
We experimentally evaluate FastGAT on several large real world graph datasets for node classification tasks.
arXiv Detail & Related papers (2020-06-15T22:07:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.