Scaling Graph Neural Networks for Particle Track Reconstruction
- URL: http://arxiv.org/abs/2504.04670v1
- Date: Mon, 07 Apr 2025 01:44:32 GMT
- Title: Scaling Graph Neural Networks for Particle Track Reconstruction
- Authors: Alok Tripathy, Alina Lazar, Xiangyang Ju, Paolo Calafiura, Katherine Yelick, Aydin Buluc
- Abstract summary: We introduce improvements to the Exa.TrkX pipeline to train on samples of input particle graphs. We adapt performance optimizations, introduced for GNN training, to fit our augmented Exa.TrkX pipeline.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Particle track reconstruction is an important problem in high-energy physics (HEP), necessary to study properties of subatomic particles. Traditional track reconstruction algorithms scale poorly with the number of particles within the accelerator. The Exa.TrkX project, to alleviate this computational burden, introduces a pipeline that reduces particle track reconstruction to edge classification on a graph, and uses graph neural networks (GNNs) to produce particle tracks. However, this GNN-based approach is memory-prohibitive and skips graphs that would exceed GPU memory. We introduce improvements to the Exa.TrkX pipeline to train on samples of input particle graphs, and show that these improvements generalize to higher precision and recall. In addition, we adapt performance optimizations, introduced for GNN training, to fit our augmented Exa.TrkX pipeline. These optimizations provide a $2\times$ speedup over our baseline implementation in PyTorch Geometric.
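The abstract names the technique (edge classification on hit graphs with a GNN, implemented in PyTorch Geometric) but not its code. Below is a minimal sketch of that general pattern, not the Exa.TrkX implementation: a small GNN embeds hits, and an MLP scores each candidate edge from its endpoint embeddings. The class name, layer choices, and toy data are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class EdgeClassifier(nn.Module):
    """Sketch: score candidate edges of a hit graph. Two GCN layers embed
    hits (e.g. cylindrical coordinates r, phi, z); an MLP then scores each
    candidate edge from the concatenated embeddings of its endpoints."""
    def __init__(self, in_dim=3, hidden=64):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden)
        self.conv2 = GCNConv(hidden, hidden)
        self.edge_mlp = nn.Sequential(
            nn.Linear(2 * hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

    def forward(self, x, edge_index):
        h = self.conv1(x, edge_index).relu()
        h = self.conv2(h, edge_index).relu()
        src, dst = edge_index                       # endpoints of candidate edges
        return self.edge_mlp(torch.cat([h[src], h[dst]], dim=-1)).squeeze(-1)

# Toy usage: 5 hits, 4 candidate edges, binary true/false edge labels.
x = torch.randn(5, 3)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
y = torch.tensor([1.0, 1.0, 0.0, 1.0])
loss = nn.functional.binary_cross_entropy_with_logits(
    EdgeClassifier()(x, edge_index), y)
loss.backward()
```

In a pipeline like this, edges scoring above a threshold would then be linked into track candidates; that step is omitted here.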
Related papers
- EggNet: An Evolving Graph-based Graph Attention Network for Particle Track Reconstruction [0.0]
We consider a one-shot object condensation (OC) approach that reconstructs particle tracks directly from a set of hits.
This approach iteratively updates the graphs and can better facilitate message passing across each graph.
Preliminary studies on the TrackML dataset show better track performance compared to methods that require a fixed input graph.
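EggNet's exact architecture is not reproduced here; the following is a minimal sketch of the evolving-graph idea under stated assumptions: hits are embedded, a kNN graph is built in the embedding space, messages are passed, and the graph is rebuilt from the updated embeddings each round. The names `EvolvingGraphNet` and `knn_edges` and all sizes are hypothetical.

```python
import torch
import torch.nn as nn

def knn_edges(emb, k=3):
    """Rebuild a directed kNN graph in the current embedding space
    (graph construction itself is treated as non-differentiable)."""
    d = torch.cdist(emb.detach(), emb.detach())
    d.fill_diagonal_(float("inf"))                 # exclude self-loops
    nbrs = d.topk(k, largest=False).indices        # (N, k) nearest neighbours
    src = torch.arange(emb.size(0)).repeat_interleave(k)
    return torch.stack([src, nbrs.reshape(-1)])    # (2, N*k) edge index

class EvolvingGraphNet(nn.Module):
    """Sketch of the evolving-graph idea: embed hits, build a kNN graph,
    message-pass, then rebuild the graph from the updated embeddings."""
    def __init__(self, in_dim=3, hidden=32, rounds=3):
        super().__init__()
        self.encode = nn.Linear(in_dim, hidden)
        self.update = nn.Linear(2 * hidden, hidden)
        self.rounds = rounds

    def forward(self, hits):
        h = self.encode(hits).relu()
        for _ in range(self.rounds):
            src, dst = knn_edges(h)                # the graph evolves with h
            agg = torch.zeros_like(h).index_add_(0, dst, h[src])  # sum messages
            h = self.update(torch.cat([h, agg], dim=-1)).relu()
        return h

emb = EvolvingGraphNet()(torch.randn(10, 3))       # 10 hits -> 10 embeddings
```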
arXiv Detail & Related papers (2024-07-18T22:29:24Z)
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
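The paper's exact parametrization is not given here, so the sketch below shows one plausible way to combine a spatial and a spectral graph filter in a single layer: a one-hop aggregation plus a learned filter g(lambda) applied in the eigenbasis of the normalized Laplacian. The class name and widths are assumptions, and the dense eigendecomposition is only for illustration on small graphs.

```python
import torch
import torch.nn as nn

class SpatioSpectralLayer(nn.Module):
    """Sketch: sum a spatial branch (one hop of normalized-adjacency
    message passing) and a spectral branch (a learned filter g(lambda)
    applied in the eigenbasis of L = I - D^{-1/2} A D^{-1/2})."""
    def __init__(self, dim):
        super().__init__()
        self.w_spatial = nn.Linear(dim, dim)
        self.w_spectral = nn.Linear(dim, dim)
        self.g = nn.Sequential(nn.Linear(1, 16), nn.ReLU(), nn.Linear(16, 1))

    def forward(self, x, adj):
        deg = adj.sum(-1).clamp(min=1.0)
        d = deg.pow(-0.5)
        a_norm = d[:, None] * adj * d[None, :]
        lap = torch.eye(adj.size(0)) - a_norm
        evals, evecs = torch.linalg.eigh(lap)       # dense; small graphs only
        coeff = self.g(evals[:, None]).squeeze(-1)  # learned g(lambda) per mode
        x_spec = evecs @ (coeff[:, None] * (evecs.T @ x))
        x_spat = a_norm @ x
        return (self.w_spatial(x_spat) + self.w_spectral(x_spec)).relu()

adj = (torch.rand(8, 8) > 0.6).float()
adj = ((adj + adj.T) > 0).float()                   # symmetrize the toy graph
out = SpatioSpectralLayer(4)(torch.randn(8, 4), adj)
print(out.shape)                                    # torch.Size([8, 4])
```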
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
- Spatiotemporal Forecasting Meets Efficiency: Causal Graph Process Neural Networks [5.703629317205571]
Causal Graph Processes (CGPs) offer an alternative, using graph filters instead of relational field layers to reduce parameters and minimize memory consumption.
This paper introduces a non-linear model combining CGPs and GNNs for spatiotemporal forecasting. CGProNet employs higher-order graph filters, optimizing the model with fewer parameters, reducing memory usage, and improving runtime efficiency.
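As a worked illustration of what a higher-order graph filter is (not CGProNet's specific model), the sketch below implements the classic polynomial filter y = sum_k h_k S^k x, whose parameter count is just the K+1 filter taps.

```python
import torch
import torch.nn as nn

class HigherOrderGraphFilter(nn.Module):
    """Sketch of a higher-order polynomial graph filter,
    y = sum_{k=0}^{K} h_k S^k x, where S is a graph shift operator
    (here the adjacency matrix). The only learned parameters are the
    K+1 scalar taps h_k, which is why such filters stay small."""
    def __init__(self, order=3):
        super().__init__()
        self.taps = nn.Parameter(0.1 * torch.randn(order + 1))

    def forward(self, x, shift):
        out = self.taps[0] * x
        z = x
        for h_k in self.taps[1:]:
            z = shift @ z                          # one more application of S
            out = out + h_k * z
        return out

S = torch.rand(6, 6)
S = (S + S.T) / 2                                  # symmetric toy shift operator
y = HigherOrderGraphFilter(order=3)(torch.randn(6, 2), S)
```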
arXiv Detail & Related papers (2024-05-29T08:37:48Z)
- Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN, named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
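RpHGNN's internals are not spelled out here; the sketch below shows the generic ingredient its name points to, under stated assumptions: one-time (pre-computed) message passing over a relation of a heterogeneous graph, followed by a Gaussian random projection that compresses the propagated features to a fixed width.

```python
import torch

def random_project(features, out_dim, seed=0):
    """Compress features with a Gaussian random projection
    (Johnson-Lindenstrauss style), keeping pre-computed per-relation
    feature tensors at a fixed, small width."""
    torch.manual_seed(seed)
    R = torch.randn(features.size(1), out_dim) / out_dim ** 0.5
    return features @ R

# Toy pre-computation: propagate author features to papers over one
# relation of a heterogeneous graph, then project to width 32.
paper_author = (torch.rand(50, 30) > 0.9).float()  # papers x authors incidence
author_feats = torch.randn(30, 256)
propagated = paper_author @ author_feats           # one-time message passing
print(random_project(propagated, 32).shape)        # torch.Size([50, 32])
```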
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
- Low Latency Edge Classification GNN for Particle Trajectory Tracking on FPGAs [10.146819379097249]
This paper introduces a resource-efficient GNN architecture on FPGAs for low latency particle tracking.
Our results on Xilinx UltraScale+ VU9P demonstrate 1625x and 1574x performance improvements over CPU and GPU, respectively.
arXiv Detail & Related papers (2023-06-20T06:57:24Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike LTH-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
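Whether CGP prunes edges, weights, or both is not detailed in this summary, so the sketch below shows a generic gradual magnitude-pruning schedule (a cubic ramp, a common recipe): the pruned fraction grows smoothly during training, with no separate re-training phase. The function name and schedule are assumptions.

```python
import torch

def gradual_prune_mask(weight, step, total_steps, final_sparsity=0.8):
    """Gradual magnitude pruning: the pruned fraction ramps smoothly
    (cubically) from 0 to final_sparsity over training, so no separate
    re-training phase is needed."""
    frac = final_sparsity * (1 - (1 - step / total_steps) ** 3)
    k = int(frac * weight.numel())
    if k == 0:
        return torch.ones_like(weight, dtype=torch.bool)
    thresh = weight.abs().flatten().kthvalue(k).values
    return weight.abs() > thresh                   # keep weights above threshold

w = torch.randn(64, 64)
for step in (0, 500, 1000):
    mask = gradual_prune_mask(w, step, total_steps=1000)
    print(step, f"kept {mask.float().mean().item():.2f}")  # shrinking keep-ratio
```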
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- BGL: GPU-Efficient GNN Training by Optimizing Graph Data I/O and Preprocessing [0.0]
Graph neural networks (GNNs) have extended the success of deep neural networks (DNNs) to non-Euclidean graph data.
Existing systems are inefficient at training large graphs with billions of nodes and edges on GPUs.
This paper proposes BGL, a distributed GNN training system designed to address the bottlenecks with a few key ideas.
arXiv Detail & Related papers (2021-12-16T00:37:37Z)
- Increase and Conquer: Training Graph Neural Networks on Growing Graphs [116.03137405192356]
We consider the problem of learning a graphon neural network (WNN) by training GNNs on graphs Bernoulli-sampled from the graphon.
Inspired by these results, we propose an algorithm to learn GNNs on large-scale graphs that, starting from a moderate number of nodes, successively increases the size of the graph during training.
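As a concrete example of the sampling step (the notation here is an assumption consistent with standard graphon models, not code from the paper): draw latent positions u_i uniformly on [0, 1] and connect each pair (i, j) independently with probability W(u_i, u_j), growing n over the course of training.

```python
import numpy as np

def sample_from_graphon(W, n, rng):
    """Bernoulli-sample an n-node simple graph from a graphon
    W: [0,1]^2 -> [0,1]. Node i draws u_i ~ Uniform(0,1); pair (i,j)
    is connected independently with probability W(u_i, u_j)."""
    u = rng.uniform(size=n)
    P = W(u[:, None], u[None, :])                  # edge-probability matrix
    A = np.triu(rng.uniform(size=(n, n)) < P, k=1).astype(float)
    return A + A.T                                 # symmetric, no self-loops

W = lambda x, y: 0.8 * np.exp(-3.0 * np.abs(x - y))  # toy smooth graphon
rng = np.random.default_rng(0)
for n in (50, 100, 200):                           # grow graphs during training
    A = sample_from_graphon(W, n, rng)
    print(n, int(A.sum()) // 2)                    # node count, edge count
```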
arXiv Detail & Related papers (2021-06-07T15:05:59Z)
- Unrolling of Deep Graph Total Variation for Image Denoising [106.93258903150702]
In this paper, we combine classical graph signal filtering with deep feature learning into a competitive hybrid design.
We employ interpretable analytical low-pass graph filters and use 80% fewer network parameters than the state-of-the-art DL denoising scheme DnCNN.
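The unrolled architecture itself is not reproduced here; as an example of the kind of interpretable analytical low-pass graph filter the summary refers to, the sketch below applies graph Tikhonov smoothing, x_hat = (I + lam L)^{-1} y, on a toy path graph. The function name and lam value are assumptions.

```python
import numpy as np

def lowpass_denoise(y, A, lam=2.0):
    """Analytical low-pass graph filter (Tikhonov smoothing):
    x_hat = argmin_x ||x - y||^2 + lam * x^T L x = (I + lam L)^{-1} y,
    with L the combinatorial graph Laplacian. Larger lam smooths more
    strongly along graph edges."""
    L = np.diag(A.sum(1)) - A
    return np.linalg.solve(np.eye(len(y)) + lam * L, y)

# Toy: path graph over 6 pixels, noisy piecewise-constant signal.
A = np.zeros((6, 6))
for i in range(5):
    A[i, i + 1] = A[i + 1, i] = 1.0
rng = np.random.default_rng(0)
y = np.array([1.0, 1.0, 1.0, 0.0, 0.0, 0.0]) + 0.3 * rng.normal(size=6)
print(lowpass_denoise(y, A).round(2))
```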
arXiv Detail & Related papers (2020-10-21T20:04:22Z)
- Distance-Weighted Graph Neural Networks on FPGAs for Real-Time Particle Reconstruction in High Energy Physics [11.125632758828266]
We discuss how to design distance-weighted graph networks that can be executed with a latency of less than $1\,\mu\mathrm{s}$ on an FPGA.
We consider a representative task associated with particle reconstruction and identification in a next-generation calorimeter operating at a particle collider.
We convert the compressed models into firmware to be implemented on an FPGA.
arXiv Detail & Related papers (2020-08-08T21:26:31Z)
- Fast Graph Attention Networks Using Effective Resistance Based Graph Sparsification [70.50751397870972]
FastGAT is a method to make attention-based GNNs lightweight by using spectral sparsification to generate an optimal pruning of the input graph.
We experimentally evaluate FastGAT on several large real world graph datasets for node classification tasks.
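FastGAT's fast approximation is not shown here; the sketch below illustrates the underlying idea, spectral sparsification by effective resistance, in its textbook exact form: compute R_uv from the Laplacian pseudoinverse and keep edges sampled with probability proportional to w_uv * R_uv. Names and the `keep` budget are assumptions, and the exact pseudoinverse is only viable for small graphs.

```python
import numpy as np

def resistance_sparsify(A, keep, rng):
    """Sketch of spectral sparsification by effective resistance:
    R_uv = (e_u - e_v)^T L^+ (e_u - e_v); edges are sampled with
    probability proportional to w_uv * R_uv. Scalable methods
    approximate these resistances instead of computing L^+ exactly."""
    Lp = np.linalg.pinv(np.diag(A.sum(1)) - A)     # Laplacian pseudoinverse
    edges = [(u, v) for u in range(len(A)) for v in range(u + 1, len(A)) if A[u, v]]
    r = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v] for u, v in edges])
    w = np.array([A[u, v] for u, v in edges])
    p = np.clip(w * r, 0, None)                    # guard tiny numerical noise
    p = p / p.sum()
    S = np.zeros_like(A)
    for i in rng.choice(len(edges), size=min(keep, len(edges)), replace=False, p=p):
        u, v = edges[i]
        S[u, v] = S[v, u] = A[u, v]
    return S

rng = np.random.default_rng(0)
A = np.triu((rng.random((10, 10)) > 0.5).astype(float), 1)
A = A + A.T
print(int((A > 0).sum()) // 2, "->",
      int((resistance_sparsify(A, 8, rng) > 0).sum()) // 2)
```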
arXiv Detail & Related papers (2020-06-15T22:07:54Z)