Graph Neural Networks for Charged Particle Tracking on FPGAs
- URL: http://arxiv.org/abs/2112.02048v1
- Date: Fri, 3 Dec 2021 17:56:10 GMT
- Title: Graph Neural Networks for Charged Particle Tracking on FPGAs
- Authors: Abdelrahman Elabd and Vesal Razavimaleki and Shi-Yu Huang and Javier
Duarte and Markus Atkinson and Gage DeZoort and Peter Elmer and Jin-Xuan Hu
and Shih-Chieh Hsu and Bo-Cheng Lai and Mark Neubauer and Isobel Ojalvo and
Savannah Thais
- Abstract summary: The determination of charged particle trajectories in collisions at the CERN Large Hadron Collider (LHC) is an important but challenging problem.
Graph neural networks (GNNs) are a type of geometric deep learning algorithm that has successfully been applied to this task.
We introduce an automated translation workflow, integrated into a broader tool called $\texttt{hls4ml}$, for converting GNNs into firmware for field-programmable gate arrays (FPGAs).
- Score: 2.6402980149746913
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The determination of charged particle trajectories in collisions at the CERN
Large Hadron Collider (LHC) is an important but challenging problem, especially
in the high interaction density conditions expected during the future
high-luminosity phase of the LHC (HL-LHC). Graph neural networks (GNNs) are a
type of geometric deep learning algorithm that has successfully been applied to
this task by embedding tracker data as a graph -- nodes represent hits, while
edges represent possible track segments -- and classifying the edges as true or
fake track segments. However, their study in hardware- or software-based
trigger applications has been limited due to their large computational cost. In
this paper, we introduce an automated translation workflow, integrated into a
broader tool called $\texttt{hls4ml}$, for converting GNNs into firmware for
field-programmable gate arrays (FPGAs). We use this translation tool to
implement GNNs for charged particle tracking, trained using the TrackML
challenge dataset, on FPGAs with designs targeting different graph sizes, task
complexities, and latency/throughput requirements. This work could enable the
inclusion of charged particle tracking GNNs at the trigger level for HL-LHC
experiments.
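The edge-classification scheme the abstract describes (hits as graph nodes, candidate track segments as edges, a true/fake score per edge) can be sketched in plain NumPy. This is an illustrative, untrained interaction-network-style pass, not the paper's actual architecture or its hls4ml-generated firmware; all sizes and weights below are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy hit graph: nodes are detector hits, edges are candidate track segments.
n_hits, n_edges, d = 6, 8, 4
hits = rng.normal(size=(n_hits, d))          # node (hit) features
src = rng.integers(0, n_hits, size=n_edges)  # segment start hit
dst = rng.integers(0, n_hits, size=n_edges)  # segment end hit

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Random weights stand in for trained parameters (hypothetical values).
W_edge = rng.normal(size=(2 * d, d)) * 0.1   # edge (relational) network
W_node = rng.normal(size=(2 * d, d)) * 0.1   # node (object) network
w_cls = rng.normal(size=(2 * d,)) * 0.1      # final edge classifier

# 1) Edge network: embed each candidate segment from its two endpoint hits.
e = relu(np.concatenate([hits[src], hits[dst]], axis=1) @ W_edge)

# 2) Node network: sum incoming edge messages, then update each hit.
agg = np.zeros((n_hits, d))
np.add.at(agg, dst, e)  # unbuffered scatter-add of messages per receiver
h = relu(np.concatenate([hits, agg], axis=1) @ W_node)

# 3) Edge classifier: score each segment as true/fake from updated endpoints.
scores = sigmoid(np.concatenate([h[src], h[dst]], axis=1) @ w_cls)

print(scores.shape)  # → (8,)
```

On an FPGA, each of the three stages maps naturally to a fixed-size pipelined matrix-multiply block, which is why graph size and task complexity drive the resource/latency trade-offs the paper studies.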
Related papers
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs), which combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- Low Latency Edge Classification GNN for Particle Trajectory Tracking on FPGAs [10.146819379097249]
This paper introduces a resource-efficient GNN architecture on FPGAs for low latency particle tracking.
Our results on Xilinx UltraScale+ VU9P demonstrate 1625x and 1574x performance improvements over CPU and GPU, respectively.
arXiv Detail & Related papers (2023-06-20T06:57:24Z)
- End-to-end codesign of Hessian-aware quantized neural networks for FPGAs and ASICs [49.358119307844035]
We develop an end-to-end workflow for the training and implementation of co-designed neural networks (NNs).
This makes efficient NN implementations in hardware accessible to non-experts in a single open-sourced workflow.
We demonstrate the workflow in a particle physics application involving trigger decisions that must operate at the 40 MHz collision rate of the Large Hadron Collider (LHC).
We implement an optimized mixed-precision NN for high-momentum particle jets in simulated LHC proton-proton collisions.
arXiv Detail & Related papers (2023-04-13T18:00:01Z)
- Charged particle tracking via edge-classifying interaction networks [0.0]
In this work, we adapt the physics-motivated interaction network (IN) GNN to the problem of charged-particle tracking in the high-pileup conditions expected at the HL-LHC.
We demonstrate the IN's excellent edge-classification accuracy and tracking efficiency through a suite of measurements at each stage of GNN-based tracking.
The proposed IN architecture is substantially smaller than previously studied GNN tracking architectures, a reduction in size critical for enabling GNN-based tracking in constrained computing environments.
arXiv Detail & Related papers (2021-03-30T21:58:52Z)
- Instance Segmentation GNNs for One-Shot Conformal Tracking at the LHC [0.0]
Graph Neural Networks (GNNs) have shown promising performance on standard instance segmentation tasks.
We re-imagine the traditional Cartesian space approach to track-finding and instead work in a conformal geometry that allows the GNN to identify tracks and extract parameters in a single shot.
arXiv Detail & Related papers (2021-03-11T07:15:55Z)
- A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z)
- Learning to Drop: Robust Graph Neural Network via Topological Denoising [50.81722989898142]
We propose PTDNet, a parameterized topological denoising network, to improve the robustness and generalization performance of Graph Neural Networks (GNNs).
PTDNet prunes task-irrelevant edges by penalizing the number of edges in the sparsified graph with parameterized networks.
We show that PTDNet can improve the performance of GNNs significantly and the performance gain becomes larger for more noisy datasets.
arXiv Detail & Related papers (2020-11-13T18:53:21Z)
- Learning to Execute Programs with Instruction Pointer Attention Graph Neural Networks [55.98291376393561]
Graph neural networks (GNNs) have emerged as a powerful tool for learning software engineering tasks.
Recurrent neural networks (RNNs) are well-suited to long sequential chains of reasoning, but they do not naturally incorporate program structure.
We introduce a novel GNN architecture, the Instruction Pointer Attention Graph Neural Network (IPA-GNN), which improves systematic generalization on the task of learning to execute programs.
arXiv Detail & Related papers (2020-10-23T19:12:30Z)
- Distance-Weighted Graph Neural Networks on FPGAs for Real-Time Particle Reconstruction in High Energy Physics [11.125632758828266]
We discuss how to design distance-weighted graph networks that can be executed with a latency of less than 1~$\mu\mathrm{s}$ on an FPGA.
We consider a representative task associated to particle reconstruction and identification in a next-generation calorimeter operating at a particle collider.
We convert the compressed models into firmware to be implemented on an FPGA.
arXiv Detail & Related papers (2020-08-08T21:26:31Z)
- Track Seeding and Labelling with Embedded-space Graph Neural Networks [3.5236955190576693]
The Exa.TrkX project is investigating machine learning approaches to particle track reconstruction.
The most promising of these solutions, graph neural networks (GNNs), process the event as a graph that connects track measurements.
We report updates on the state-of-the-art architectures for this task.
arXiv Detail & Related papers (2020-06-30T23:43:28Z)
- Graph Neural Networks for Motion Planning [108.51253840181677]
We present two techniques, GNNs over dense fixed graphs for low-dimensional problems and sampling-based GNNs for high-dimensional problems.
We examine the ability of a GNN to tackle planning problems such as identifying critical nodes or learning the sampling distribution in Rapidly-exploring Random Trees (RRT).
Experiments with critical sampling, a pendulum and a six DoF robot arm show GNNs improve on traditional analytic methods as well as learning approaches using fully-connected or convolutional neural networks.
arXiv Detail & Related papers (2020-06-11T08:19:06Z)
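Several entries above (UGS, PTDNet) attack GNN cost by sparsifying both the graph and the model, which is also what makes large GNNs tractable on resource-constrained hardware such as FPGAs. The following is a minimal magnitude-pruning sketch of that joint-sparsification idea; it is not the trained differentiable masking those papers actually learn, and all matrices and sparsity values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy adjacency matrix (graph structure) and weight matrix (model parameters).
A = rng.random((5, 5))       # hypothetical dense adjacency
W = rng.normal(size=(8, 8))  # hypothetical layer weights

def magnitude_prune(M, sparsity):
    """Zero out the smallest-magnitude fraction `sparsity` of entries."""
    k = int(sparsity * M.size)
    if k == 0:
        return M.copy()
    # k-th smallest absolute value becomes the pruning threshold.
    thresh = np.partition(np.abs(M).ravel(), k - 1)[k - 1]
    return np.where(np.abs(M) > thresh, M, 0.0)

A_sparse = magnitude_prune(A, 0.5)  # prune graph connectivity
W_sparse = magnitude_prune(W, 0.5)  # prune model weights

print((A_sparse == 0).mean(), (W_sparse == 0).mean())  # fraction pruned ≈ sparsity
```

The lottery-ticket framing in UGS corresponds to keeping such a (sub-graph, sub-network) pair and retraining it from its original initialization; here the masks are simply magnitude-based for illustration.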
This list is automatically generated from the titles and abstracts of the papers in this site.