Jet tagging in the Lund plane with graph networks
- URL: http://arxiv.org/abs/2012.08526v2
- Date: Thu, 11 Feb 2021 12:08:25 GMT
- Title: Jet tagging in the Lund plane with graph networks
- Authors: Frédéric A. Dreyer and Huilin Qu
- Abstract summary: LundNet is a novel jet tagging method which relies on graph neural networks and an efficient description of the radiation patterns within a jet.
We show significantly improved performance for top tagging compared to existing state-of-the-art algorithms.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The identification of boosted heavy particles such as top quarks or vector
bosons is one of the key problems arising in experimental studies at the Large
Hadron Collider. In this article, we introduce LundNet, a novel jet tagging
method which relies on graph neural networks and an efficient description of
the radiation patterns within a jet to optimally disentangle signatures of
boosted objects from background events. We apply this framework to a number of
different benchmarks, showing significantly improved performance for top
tagging compared to existing state-of-the-art algorithms. We study the
robustness of the LundNet taggers to non-perturbative and detector effects, and
show how kinematic cuts in the Lund plane can mitigate overfitting of the
neural network to model-dependent contributions. Finally, we consider the
computational complexity of this method and its scaling as a function of
kinematic Lund plane cuts, showing an order of magnitude improvement in speed
over previous graph-based taggers.
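The Lund plane underlying the tagger is built by reclustering the jet with the Cambridge/Aachen algorithm and declustering it step by step; each splitting of a (sub)jet into a harder and a softer branch maps to a point (ln 1/Δ, ln kt), and a cut on kt removes the soft region of the plane that is most sensitive to non-perturbative and detector effects while also shrinking the graph the network has to process. The following is a minimal sketch of those coordinates and of such a cut, assuming the primary declustering sequence is already available as (pt, rapidity, phi) triplets for the harder and softer branch at each step; the function names are illustrative and do not correspond to the LundNet code.

```python
# Minimal sketch (not the LundNet implementation): Lund-plane coordinates for
# one declustering and a kt cut over the whole primary sequence. Inputs are
# assumed to be (pt, rapidity, phi) triplets for the harder and softer branch.
import numpy as np

def lund_coordinates(harder, softer):
    """Map one C/A declustering (harder, softer branch) to Lund-plane variables."""
    pt_h, y_h, phi_h = harder
    pt_s, y_s, phi_s = softer
    dphi = np.arctan2(np.sin(phi_h - phi_s), np.cos(phi_h - phi_s))  # wrap to (-pi, pi]
    delta = np.hypot(y_h - y_s, dphi)      # angular separation of the two branches
    kt = pt_s * delta                      # transverse momentum of the softer emission
    z = pt_s / (pt_h + pt_s)               # momentum fraction of the softer branch
    return np.log(1.0 / delta), np.log(kt), z

def lund_nodes(declusterings, kt_cut=None):
    """Turn a primary declustering sequence into Lund-plane nodes.

    A kt cut of order 1 GeV drops the soft, non-perturbative region, which both
    reduces model dependence and shrinks the graph passed to the network.
    """
    nodes = []
    for harder, softer in declusterings:
        ln_inv_delta, ln_kt, z = lund_coordinates(harder, softer)
        if kt_cut is not None and ln_kt < np.log(kt_cut):
            continue  # emission below the cut: excluded from the graph
        nodes.append((ln_inv_delta, ln_kt, z))
    return nodes
```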
Related papers
- Faster Inference Time for GNNs using coarsening [1.323700980948722]
Coarsening-based methods reduce the graph to a smaller one, resulting in faster computation.
No previous research has tackled the cost incurred during inference.
This paper presents a novel approach to improve the scalability of GNNs through subgraph-based techniques.
arXiv Detail & Related papers (2024-10-19T06:27:24Z)
- Graph Transformers for Large Graphs [57.19338459218758]
This work advances representation learning on single large-scale graphs with a focus on identifying model characteristics and critical design constraints.
A key innovation of this work lies in the creation of a fast neighborhood sampling technique coupled with a local attention mechanism.
We report a 3x speedup and 16.8% performance gain on ogbn-products and snap-patents, while we also scale LargeGT on ogbn-100M with a 5.9% performance improvement.
arXiv Detail & Related papers (2023-12-18T11:19:23Z)
- PCN: A Deep Learning Approach to Jet Tagging Utilizing Novel Graph Construction Methods and Chebyshev Graph Convolutions [0.0]
Jet tagging is a classification problem in high-energy physics experiments.
Current approaches use deep learning to uncover hidden patterns in complex collision data.
We propose a graph-based representation of a jet that encodes as much information as possible.
arXiv Detail & Related papers (2023-09-12T23:20:19Z)
- Fast and Effective GNN Training with Linearized Random Spanning Trees [20.73637495151938]
We present a new effective and scalable framework for training GNNs in node classification tasks.
Our approach progressively refines the GNN weights on an extensive sequence of random spanning trees.
The sparse nature of these path graphs substantially lightens the computational burden of GNN training.
arXiv Detail & Related papers (2023-06-07T23:12:42Z)
- Hyper-GST: Predict Metro Passenger Flow Incorporating GraphSAGE, Hypergraph, Social-meaningful Edge Weights and Temporal Exploitation [4.698632626407558]
Graph-based deep learning algorithms can exploit the graph structure, but doing so raises a few challenges.
This study proposes a GraphSAGE-based model with a learned edge-weighting module.
Hypergraph and temporal exploitation modules are also constructed as add-ons for better performance.
arXiv Detail & Related papers (2022-11-09T16:04:45Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensemble-based training scheme, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Towards Deeper Graph Neural Networks [63.46470695525957]
Graph convolutions perform neighborhood aggregation and represent one of the most important graph operations.
Several recent studies attribute the performance deterioration observed when stacking many such layers to the over-smoothing issue.
We propose Deep Adaptive Graph Neural Network (DAGNN) to adaptively incorporate information from large receptive fields.
arXiv Detail & Related papers (2020-07-18T01:11:14Z)
- Fast Graph Attention Networks Using Effective Resistance Based Graph Sparsification [70.50751397870972]
FastGAT is a method to make attention-based GNNs lightweight by using spectral sparsification to generate an optimal pruning of the input graph.
We experimentally evaluate FastGAT on several large real world graph datasets for node classification tasks.
arXiv Detail & Related papers (2020-06-15T22:07:54Z)
- Structural Temporal Graph Neural Networks for Anomaly Detection in Dynamic Graphs [54.13919050090926]
We propose an end-to-end structural temporal Graph Neural Network model for detecting anomalous edges in dynamic graphs.
In particular, we first extract the $h$-hop enclosing subgraph centered on the target edge and propose a node labeling function to identify the role of each node in the subgraph.
Based on the extracted features, we utilize gated recurrent units (GRUs) to capture the temporal information for anomaly detection.
arXiv Detail & Related papers (2020-05-15T09:17:08Z)
- Towards an Efficient and General Framework of Robust Training for Graph Neural Networks [96.93500886136532]
Graph Neural Networks (GNNs) have made significant advances on several fundamental inference tasks.
Despite GNNs' impressive performance, it has been observed that carefully crafted perturbations on graph structures lead them to make wrong predictions.
We propose a general framework which leverages the greedy search algorithms and zeroth-order methods to obtain robust GNNs.
arXiv Detail & Related papers (2020-02-25T15:17:58Z)
- Ripple Walk Training: A Subgraph-based training framework for Large and Deep Graph Neural Network [10.36962234388739]
We propose a general subgraph-based training framework, namely Ripple Walk Training (RWT), for deep and large graph neural networks.
RWT samples subgraphs from the full graph to constitute a mini-batch, and the full GNN is updated based on the mini-batch gradient; a minimal sketch of this subgraph mini-batch idea appears after the list below.
Extensive experiments on different sizes of graphs demonstrate the effectiveness and efficiency of RWT in training various GNNs.
arXiv Detail & Related papers (2020-02-17T19:07:41Z)
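The Ripple Walk Training entry above describes updating the full GNN from gradients computed on sampled subgraph mini-batches. As referenced there, here is a minimal sketch of that idea, assuming an adjacency-list graph and a node-classification model; the sampler and the model interface are illustrative stand-ins, not the authors' implementation.

```python
# Hedged sketch of subgraph mini-batch training in the spirit of Ripple Walk
# Training (RWT): grow a connected node set by absorbing a random fraction of
# the frontier's neighbours, then update the full model from the gradient
# computed on the induced subgraph. All names are illustrative assumptions.
import random
import torch
import torch.nn.functional as F

def sample_ripple_subgraph(adj, start, target_size, expansion=0.5):
    """Grow a 'ripple' of nodes around `start` until roughly target_size nodes."""
    nodes, frontier = {start}, {start}
    while len(nodes) < target_size and frontier:
        candidates = {v for u in frontier for v in adj[u] if v not in nodes}
        if not candidates:
            break
        k = max(1, int(expansion * len(candidates)))
        frontier = set(random.sample(sorted(candidates), min(k, len(candidates))))
        nodes |= frontier
    return sorted(nodes)

def train_step(model, optimiser, features, labels, adj, batch_nodes):
    """One update of the full GNN from a single subgraph mini-batch."""
    local = {n: i for i, n in enumerate(batch_nodes)}
    # Induced subgraph in local indices: keep edges with both endpoints in the batch.
    edges = [(local[u], local[v]) for u in batch_nodes for v in adj[u] if v in local]
    logits = model(features[batch_nodes], edges)   # assumed model takes (features, edge list)
    loss = F.cross_entropy(logits, labels[batch_nodes])
    optimiser.zero_grad()
    loss.backward()
    optimiser.step()
    return loss.item()
```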
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.