Hierarchical Graph Neural Networks for Particle Track Reconstruction
- URL: http://arxiv.org/abs/2303.01640v1
- Date: Fri, 3 Mar 2023 00:14:32 GMT
- Title: Hierarchical Graph Neural Networks for Particle Track Reconstruction
- Authors: Ryan Liu, Paolo Calafiura, Steven Farrell, Xiangyang Ju, Daniel Thomas Murnane, Tuan Minh Pham
- Abstract summary: We introduce a novel variant of GNN for particle tracking called the Hierarchical Graph Neural Network (HGNN).
The architecture creates a set of higher-level representations that correspond to tracks and assigns spacepoints to these tracks, allowing disconnected spacepoints to be assigned to the same track and multiple tracks to share the same spacepoint.
We show that, compared with previous ML-based tracking algorithms, the HGNN has better tracking efficiency and better robustness against inefficient input graphs, and that it converges better than traditional GNNs.
- Score: 0.6524460254566905
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce a novel variant of GNN for particle tracking called the
Hierarchical Graph Neural Network (HGNN). The architecture creates a set of
higher-level representations that correspond to tracks and assigns spacepoints
to these tracks, allowing disconnected spacepoints to be assigned to the same
track and multiple tracks to share the same spacepoint. We propose a novel
learnable pooling algorithm, GMPool, to generate these higher-level
representations, called "super-nodes", together with a new loss function
designed specifically for tracking problems and the HGNN. On a standard
tracking problem, we show that, compared with previous ML-based tracking
algorithms, the HGNN achieves better tracking efficiency and greater
robustness against inefficient input graphs, and that it converges better
than traditional GNNs.
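No code accompanies this listing, so below is a minimal PyTorch sketch of the pooling interface the abstract describes: a variable number of "super-node" representations with soft, many-to-many assignments between spacepoints and super-nodes. It is not the paper's GMPool; the greedy center selection, similarity threshold, and softmax temperature are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToySuperNodePool(nn.Module):
    """Illustrative pooling: build 'super-nodes' from spacepoint embeddings.

    NOT the paper's GMPool; it only mimics the interface in the abstract:
    a variable number of higher-level representations, with soft
    (many-to-many) assignments between spacepoints and super-nodes.
    """

    def __init__(self, dim: int, sim_threshold: float = 0.9, temperature: float = 0.1):
        super().__init__()
        self.embed = nn.Linear(dim, dim)
        self.sim_threshold = sim_threshold
        self.temperature = temperature

    def forward(self, x: torch.Tensor):
        # x: (num_spacepoints, dim) node features from an upstream GNN.
        z = F.normalize(self.embed(x), dim=-1)       # unit-norm embeddings
        centers = []                                  # greedy center picking
        for i in range(z.size(0)):
            if not centers or (z[i] @ torch.stack(centers).T).max() < self.sim_threshold:
                centers.append(z[i])
        c = torch.stack(centers)                      # (num_super_nodes, dim)
        # Soft assignment lets one spacepoint contribute to several tracks.
        assign = F.softmax(z @ c.T / self.temperature, dim=-1)
        super_x = assign.T @ x / assign.sum(0).unsqueeze(-1).clamp(min=1e-6)
        return super_x, assign

pool = ToySuperNodePool(dim=8)
super_x, assign = pool(torch.randn(20, 8))  # assign: (20, num_super_nodes)
```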
Related papers
- Tackling Oversmoothing in GNN via Graph Sparsification: A Truss-based Approach [1.4854797901022863]
We propose a novel, flexible truss-based graph sparsification model that prunes edges from dense regions of the graph.
We then plug our sparsification model into state-of-the-art baseline GNN and pooling models such as GIN, SAGPool, GMT, DiffPool, MinCutPool, HGP-SL, DMoNPool, and AdamGNN.
arXiv Detail & Related papers (2024-07-16T17:21:36Z)
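Truss-based sparsification keys off triangle support: edges inside dense regions participate in many triangles and are the safest to prune. A toy, non-learned version of that idea follows; the paper's model is flexible and learned, and `keep_ratio` is an assumed knob.

```python
def truss_sparsify(edges, keep_ratio=0.7):
    """Prune edges with the highest triangle support (densest regions first).

    edges: list of (u, v) tuples for an undirected graph.
    Returns a reduced edge list.
    """
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    # Support = number of triangles the edge (u, v) participates in.
    support = {(u, v): len(adj[u] & adj[v]) for u, v in edges}
    ranked = sorted(edges, key=lambda e: support[e])  # sparse regions first
    return ranked[: int(len(ranked) * keep_ratio)]

# Drops one edge of the dense triangle, keeps the bridge (2, 3).
print(truss_sparsify([(0, 1), (1, 2), (0, 2), (2, 3)], keep_ratio=0.75))
```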
- Spatio-Spectral Graph Neural Networks [50.277959544420455]
We propose Spatio-Spectral Graph Neural Networks (S$^2$GNNs).
S$^2$GNNs combine spatially and spectrally parametrized graph filters.
We show that S$^2$GNNs vanquish over-squashing and yield strictly tighter approximation-theoretic error bounds than MPGNNs.
arXiv Detail & Related papers (2024-05-29T14:28:08Z)
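A rough sketch of the spatial-plus-spectral combination, assuming a small dense graph so the Laplacian eigendecomposition is affordable; real S$^2$GNNs use far more careful filter parametrizations than the per-eigenvector gain used here.

```python
import torch
import torch.nn as nn

class ToySpatioSpectralLayer(nn.Module):
    """Sum of a spatial (message-passing) filter and a spectral filter."""

    def __init__(self, dim: int, num_nodes: int):
        super().__init__()
        self.w_spatial = nn.Linear(dim, dim)
        self.spectral_gain = nn.Parameter(torch.ones(num_nodes))

    def forward(self, x, adj):
        # Spatial part: one hop of mean aggregation.
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        spatial = self.w_spatial(adj @ x / deg)
        # Spectral part: learned gain applied in the Laplacian eigenbasis.
        lap = torch.diag(adj.sum(-1)) - adj
        evals, evecs = torch.linalg.eigh(lap)
        spectral = evecs @ torch.diag(self.spectral_gain) @ evecs.T @ x
        return torch.relu(spatial + spectral)
```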
arXiv Detail & Related papers (2024-05-29T14:28:08Z) - Efficient Heterogeneous Graph Learning via Random Projection [58.4138636866903]
Heterogeneous Graph Neural Networks (HGNNs) are powerful tools for deep learning on heterogeneous graphs.
Recent pre-computation-based HGNNs use one-time message passing to transform a heterogeneous graph into regular-shaped tensors.
We propose a hybrid pre-computation-based HGNN named Random Projection Heterogeneous Graph Neural Network (RpHGNN).
arXiv Detail & Related papers (2023-10-23T01:25:44Z)
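A toy flavor of the pre-computation step, assuming every relation points at the same single target node type; the frozen Gaussian projection stands in for the paper's random-projection compression, and all names below are illustrative.

```python
import torch

def precompute_rp_features(x_by_type, edges_by_rel, out_dim=8, seed=0):
    """One-time message passing + fixed random projection per relation.

    x_by_type: {node_type: (num_nodes, dim) float tensor}
    edges_by_rel: {(src_type, dst_type): (src_idx, dst_idx) LongTensors},
                  with one common dst_type across all relations.
    """
    torch.manual_seed(seed)
    parts = []
    for (src_t, dst_t), (src, dst) in edges_by_rel.items():
        num_dst = x_by_type[dst_t].size(0)
        msgs = torch.zeros(num_dst, x_by_type[src_t].size(1))
        msgs.index_add_(0, dst, x_by_type[src_t][src])   # sum-aggregate once
        proj = torch.randn(msgs.size(1), out_dim) / out_dim ** 0.5
        parts.append(msgs @ proj)                        # random projection
    return torch.cat(parts, dim=-1)                      # regular tensor

x = {"author": torch.randn(4, 16), "paper": torch.randn(3, 16)}
rels = {("author", "paper"): (torch.tensor([0, 1, 2]), torch.tensor([0, 0, 2]))}
feats = precompute_rp_features(x, rels)  # (3, 8): one compressed row per paper
```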
arXiv Detail & Related papers (2023-10-23T01:25:44Z) - T-GAE: Transferable Graph Autoencoder for Network Alignment [79.89704126746204]
T-GAE is a graph autoencoder framework that leverages the transferability and stability of GNNs to achieve efficient network alignment without retraining.
Our experiments demonstrate that T-GAE outperforms the state-of-the-art optimization method and the best GNN approach by up to 38.7% and 50.8%, respectively.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
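The final alignment step of such a pipeline can be sketched in a few lines: embed both networks with the same transferred encoder, then match nodes by cosine similarity. This is a simplification; T-GAE's actual matching procedure may differ.

```python
import torch
import torch.nn.functional as F

def align_networks(emb_a, emb_b):
    """For each node of graph B, return its nearest neighbor in graph A.

    emb_a, emb_b: node embeddings produced by the same (transferred) GNN
    encoder, with no retraining on graph B.
    """
    a = F.normalize(emb_a, dim=-1)
    b = F.normalize(emb_b, dim=-1)
    return (b @ a.T).argmax(dim=-1)

matches = align_networks(torch.randn(5, 8), torch.randn(7, 8))  # (7,) indices into A
```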
arXiv Detail & Related papers (2023-10-05T02:58:29Z) - Cached Operator Reordering: A Unified View for Fast GNN Training [24.917363701638607]
Graph Neural Networks (GNNs) are a powerful tool for handling structured graph data and addressing tasks such as node classification, graph classification, and clustering.
However, the sparse nature of GNN computation poses new challenges for performance optimization compared to traditional deep neural networks.
We address these challenges by providing a unified view of GNN computation, I/O, and memory.
arXiv Detail & Related papers (2023-08-23T12:27:55Z)
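One concrete example of why operator ordering matters in GNNs: a layer computes A·X·W, and the cheaper association order depends on the feature widths and the sparsity of A. The sketch below uses a toy dense-FLOP cost model, not the paper's cost model.

```python
import torch

def gnn_layer_reordered(adj, x, w):
    """Pick the cheaper evaluation order for A @ X @ W.

    If W shrinks the feature dimension, A @ (X @ W) moves the sparse matmul
    onto the narrower matrix. Real systems also model sparsity patterns,
    I/O, and cached intermediates.
    """
    n, d_in = x.shape
    d_out = w.shape[1]
    nnz = adj._nnz() if adj.is_sparse else int((adj != 0).sum())
    cost_ax_first = nnz * d_in + n * d_in * d_out   # (A @ X) @ W
    cost_xw_first = n * d_in * d_out + nnz * d_out  # A @ (X @ W)
    if cost_xw_first <= cost_ax_first:
        xw = x @ w
        return torch.sparse.mm(adj, xw) if adj.is_sparse else adj @ xw
    ax = torch.sparse.mm(adj, x) if adj.is_sparse else adj @ x
    return ax @ w
```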
- LazyGNN: Large-Scale Graph Neural Networks via Lazy Propagation [51.552170474958736]
We propose to capture long-distance dependencies in graphs with shallower models instead of deeper ones, which leads to a much more efficient model, LazyGNN, for graph representation learning.
LazyGNN is compatible with existing scalable approaches (such as sampling methods) for further acceleration through the development of mini-batch LazyGNN.
Comprehensive experiments demonstrate its superior prediction performance and scalability on large-scale benchmarks.
arXiv Detail & Related papers (2023-02-03T02:33:07Z)
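A minimal sketch of the lazy-propagation idea: keep a cached propagation over the whole graph and refresh it only partially from each mini-batch. The mixing rate `beta` and the one-hop mean aggregation are illustrative choices, not the paper's exact update rule.

```python
import torch

class LazyPropagationCache:
    """Reuse stale propagated features across mini-batches."""

    def __init__(self, num_nodes: int, dim: int, beta: float = 0.5):
        self.cache = torch.zeros(num_nodes, dim)
        self.beta = beta  # how eagerly the stale cache is refreshed

    def propagate(self, adj, x_batch, batch_idx):
        # Inject fresh features for the current batch into the stale cache.
        merged = self.cache.clone()
        merged[batch_idx] = x_batch
        # One cheap hop over the merged features instead of full propagation.
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        fresh = adj @ merged / deg
        self.cache = (1 - self.beta) * self.cache + self.beta * fresh
        return self.cache[batch_idx]
```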
- VQ-GNN: A Universal Framework to Scale up Graph Neural Networks using Vector Quantization [70.8567058758375]
VQ-GNN is a universal framework to scale up any convolution-based GNNs using Vector Quantization (VQ) without compromising the performance.
Our framework avoids the "neighbor explosion" problem of GNNs using quantized representations combined with a low-rank version of the graph convolution matrix.
arXiv Detail & Related papers (2021-10-27T11:48:50Z)
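The core VQ building block can be sketched as nearest-codeword assignment with a straight-through gradient; how VQ-GNN wires the codewords into the low-rank convolution is omitted here.

```python
import torch

def vector_quantize(x, codebook):
    """Assign each node feature to its nearest codeword (straight-through).

    Out-of-batch neighbors can then be represented by their few codewords
    instead of full features, sidestepping neighbor explosion.
    """
    dists = torch.cdist(x, codebook)          # (num_nodes, num_codes)
    codes = dists.argmin(dim=-1)
    quantized = codebook[codes]
    # Straight-through estimator: values from codewords, gradients to x.
    return x + (quantized - x).detach(), codes

x = torch.randn(10, 16, requires_grad=True)
codebook = torch.randn(4, 16)
xq, codes = vector_quantize(x, codebook)
```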
- Customizing Graph Neural Networks using Path Reweighting [23.698877985105312]
We propose a novel GNN solution, namely Customized Graph Neural Network with Path Reweighting (CustomGNN for short).
Specifically, CustomGNN can automatically learn the high-level semantics of specific downstream tasks, highlighting semantically relevant paths and filtering out task-irrelevant noise in a graph.
In experiments on the node classification task, CustomGNN achieves state-of-the-art accuracy on three standard graph datasets and four large graph datasets.
arXiv Detail & Related papers (2021-06-21T05:38:26Z)
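A greatly simplified flavor of path reweighting: group paths by length and learn which hop distances matter for the task. The actual CustomGNN weights individual sampled paths with task-specific semantics, which this sketch does not attempt.

```python
import torch
import torch.nn as nn

class ToyPathReweighting(nn.Module):
    """Learn a softmax weighting over multi-hop propagated features."""

    def __init__(self, num_hops: int = 3):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_hops + 1))

    def forward(self, adj, x):
        deg = adj.sum(-1, keepdim=True).clamp(min=1)
        feats, h = [x], x
        for _ in range(len(self.logits) - 1):
            h = adj @ h / deg              # one more hop along all paths
            feats.append(h)
        w = torch.softmax(self.logits, dim=0)
        return sum(wi * f for wi, f in zip(w, feats))
```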
- Charged particle tracking via edge-classifying interaction networks [0.0]
In this work, we adapt the physics-motivated interaction network (IN) GNN to the problem of charged-particle tracking in the high-pileup conditions expected at the HL-LHC.
We demonstrate the IN's excellent edge-classification accuracy and tracking efficiency through a suite of measurements at each stage of GNN-based tracking.
The proposed IN architecture is substantially smaller than previously studied GNN tracking architectures, a size reduction that is critical for enabling GNN-based tracking in constrained computing environments.
arXiv Detail & Related papers (2021-03-30T21:58:52Z)
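An interaction network for edge classification can be sketched as an edge block, a node block, and an edge scorer. The hidden sizes and the single message-passing step below are arbitrary choices, not the paper's configuration.

```python
import torch
import torch.nn as nn

class ToyEdgeClassifierIN(nn.Module):
    """Interaction-network-style edge classifier for tracking graphs."""

    def __init__(self, dim: int = 16):
        super().__init__()
        self.edge_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.node_mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.scorer = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, x, src, dst):
        # Edge block: message from each spacepoint pair.
        e = self.edge_mlp(torch.cat([x[src], x[dst]], dim=-1))
        # Node block: aggregate incoming edge messages.
        agg = torch.zeros_like(x).index_add_(0, dst, e)
        x = self.node_mlp(torch.cat([x, agg], dim=-1))
        # Score each edge: does it connect hits of the same track?
        logits = self.scorer(torch.cat([x[src], x[dst]], dim=-1))
        return torch.sigmoid(logits).squeeze(-1)
```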
- Track Seeding and Labelling with Embedded-space Graph Neural Networks [3.5236955190576693]
The Exa.TrkX project is investigating machine learning approaches to particle track reconstruction.
The most promising of these solutions, graph neural networks (GNNs), process the event as a graph that connects track measurements.
We report updates on the state-of-the-art architectures for this task.
arXiv Detail & Related papers (2020-06-30T23:43:28Z)
- Towards Deeper Graph Neural Networks with Differentiable Group Normalization [61.20639338417576]
Graph neural networks (GNNs) learn the representation of a node by aggregating its neighbors.
Over-smoothing is one of the key issues limiting the performance of GNNs as the number of layers increases.
We introduce two over-smoothing metrics and a novel technique, differentiable group normalization (DGN).
arXiv Detail & Related papers (2020-06-12T07:18:02Z)
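A simplified sketch of differentiable group normalization: softly assign nodes to groups, normalize within each group, and add the result back residually. The learnable scale and other details of the paper are omitted.

```python
import torch
import torch.nn as nn

class ToyDGN(nn.Module):
    """Group-wise normalization keeps clusters apart, fighting over-smoothing."""

    def __init__(self, dim: int, num_groups: int = 4, eps: float = 1e-5):
        super().__init__()
        self.assign = nn.Linear(dim, num_groups)
        self.eps = eps

    def forward(self, x):
        s = torch.softmax(self.assign(x), dim=-1)      # (N, G) soft groups
        out = torch.zeros_like(x)
        for g in range(s.size(1)):
            w = s[:, g:g + 1]                          # (N, 1) membership
            mean = (w * x).sum(0) / w.sum().clamp(min=self.eps)
            var = (w * (x - mean) ** 2).sum(0) / w.sum().clamp(min=self.eps)
            out = out + w * (x - mean) / (var + self.eps).sqrt()
        return x + out                                  # residual connection
```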
This list is automatically generated from the titles and abstracts of the papers on this site.