A case study of sending graph neural networks back to the test bench for
applications in high-energy particle physics
- URL: http://arxiv.org/abs/2402.17386v1
- Date: Tue, 27 Feb 2024 10:26:25 GMT
- Title: A case study of sending graph neural networks back to the test bench for
applications in high-energy particle physics
- Authors: Emanuel Pfeffer and Michael Waßmer and Yee-Ying Cung and Roger
Wolf and Ulrich Husemann
- Abstract summary: In high-energy particle collisions the primary collision products usually decay further, resulting in tree-like, hierarchical structures with a priori unknown multiplicity.
The analogy to mathematical graphs gives rise to the idea that graph neural networks (GNNs) should be best-suited to address many tasks related to high-energy particle physics.
We describe a benchmark test of a typical GNN against neural networks of the well-established deep fully-connected feed-forward architecture.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In high-energy particle collisions, the primary collision products usually
decay further, resulting in tree-like, hierarchical structures with a priori
unknown multiplicity. At the stable-particle level, all decay products of a
collision form permutation-invariant sets of final-state objects. The analogy
to mathematical graphs gives rise to the idea that graph neural networks
(GNNs), which naturally resemble these properties, should be best-suited to
address many tasks related to high-energy particle physics. In this paper we
describe a benchmark test of a typical GNN against neural networks of the
well-established deep fully-connected feed-forward architecture. We aim to
perform this comparison in a way that is maximally unbiased in terms of the
numbers of nodes, hidden layers, and trainable parameters of the neural
networks under study. As the physics case, we
use the classification of the final state X produced in association with top
quark-antiquark pairs in proton-proton collisions at the Large Hadron Collider
at CERN, where X stands for a bottom quark-antiquark pair produced either
non-resonantly or through the decay of an intermediately produced Z or Higgs
boson.
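An unbiased benchmark of this kind hinges on giving both architectures the same trainable-parameter budget. The sketch below illustrates one way such budgets can be matched; the dimensions, the toy message-passing block, and all helper names are illustrative assumptions, not the paper's actual setup:

```python
# Hypothetical sketch: matching the trainable-parameter budget of a toy
# message-passing GNN with a single-hidden-layer fully-connected network.

def mlp_params(layer_sizes):
    """Trainable parameters of a fully-connected feed-forward net
    (one weight matrix plus one bias vector per layer)."""
    return sum(n_in * n_out + n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

def gnn_params(feat_dim, hidden_dim, n_blocks):
    """Parameters of a toy message-passing GNN: each block has a message
    network (2*feat_dim -> hidden_dim) and an update network
    (feat_dim + hidden_dim -> feat_dim), both single linear layers."""
    msg = 2 * feat_dim * hidden_dim + hidden_dim
    upd = (feat_dim + hidden_dim) * feat_dim + feat_dim
    return n_blocks * (msg + upd)

# Fix the GNN, then pick the MLP hidden width whose parameter count
# comes closest to the same budget.
budget = gnn_params(feat_dim=16, hidden_dim=64, n_blocks=3)
d_in, n_classes = 28, 3  # illustrative input features / output classes
best_w = min(range(1, 512),
             key=lambda w: abs(mlp_params([d_in, w, n_classes]) - budget))
```

With these illustrative numbers the GNN budget is 10224 parameters, and the closest single-hidden-layer MLP width is 319; in practice the same matching would be repeated across depths and widths of both architectures.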
Related papers
- Generalization Bounds in Hybrid Quantum-Classical Machine Learning Models [0.0]
We develop a unified mathematical framework for analyzing generalization in hybrid models.
We apply our results to the quantum-classical convolutional neural network (QCCNN).
arXiv Detail & Related papers (2025-04-11T11:35:03Z)
- Generalization of Graph Neural Networks is Robust to Model Mismatch [84.01980526069075]
Graph neural networks (GNNs) have demonstrated their effectiveness in various tasks supported by their generalization capabilities.
In this paper, we examine GNNs that operate on geometric graphs generated from manifold models.
Our analysis reveals the robustness of the GNN generalization in the presence of such model mismatch.
arXiv Detail & Related papers (2024-08-25T16:00:44Z)
- A Comparison Between Invariant and Equivariant Classical and Quantum Graph Neural Networks [3.350407101925898]
Deep geometric methods, such as graph neural networks (GNNs), have been leveraged for various data analysis tasks in high-energy physics.
One typical task is jet tagging, where jets are viewed as point clouds with distinct features and edge connections between their constituent particles.
In this paper, we perform a fair and comprehensive comparison between classical graph neural networks (GNNs) and their quantum counterparts.
arXiv Detail & Related papers (2023-11-30T16:19:13Z)
- Wide Neural Networks as Gaussian Processes: Lessons from Deep Equilibrium Models [16.07760622196666]
We study the deep equilibrium model (DEQ), an infinite-depth neural network with shared weight matrices across layers.
Our analysis reveals that as the width of DEQ layers approaches infinity, it converges to a Gaussian process.
Remarkably, this convergence holds even when the limits of depth and width are interchanged.
arXiv Detail & Related papers (2023-10-16T19:00:43Z)
- Neural network approach to quasiparticle dispersions in doped antiferromagnets [0.0]
We study the ability of neural quantum states to represent the bosonic and fermionic $t-J$ model on different 1D and 2D lattices.
We present a method to calculate dispersion relations from the neural network state representation.
arXiv Detail & Related papers (2023-10-12T17:59:33Z)
- QuanGCN: Noise-Adaptive Training for Robust Quantum Graph Convolutional Networks [124.7972093110732]
We propose quantum graph convolutional networks (QuanGCN), which learns the local message passing among nodes with the sequence of crossing-gate quantum operations.
To mitigate the inherent noises from modern quantum devices, we apply sparse constraint to sparsify the nodes' connections.
Our QuanGCN is functionally comparable to, or even superior to, classical algorithms on several benchmark graph datasets.
arXiv Detail & Related papers (2022-11-09T21:43:16Z)
- Transformer with Implicit Edges for Particle-based Physics Simulation [135.77656965678196]
Transformer with Implicit Edges (TIE) captures the rich semantics of particle interactions in an edge-free manner.
We evaluate our model on diverse domains of varying complexity and materials.
arXiv Detail & Related papers (2022-07-22T03:45:29Z)
- Hybrid Quantum Classical Graph Neural Networks for Particle Track Reconstruction [0.0]
The Large Hadron Collider (LHC) will be upgraded to further increase the instantaneous rate of particle collisions (luminosity).
The HL-LHC will yield many more detector hits, which will pose a challenge for the reconstruction algorithms that determine particle trajectories from those hits.
This work explores the possibility of converting a novel Graph Neural Network model to a Hybrid Quantum-Classical Graph Neural Network.
arXiv Detail & Related papers (2021-09-26T15:47:31Z)
- The Separation Capacity of Random Neural Networks [78.25060223808936]
We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can separate two classes of data with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs because gradients vanish at a rate exponential in the number of input qubits.
We propose QNNs with tree-tensor and step-controlled structures for binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
- Variational Monte Carlo calculations of $\mathbf{A\leq 4}$ nuclei with an artificial neural-network correlator ansatz [62.997667081978825]
We introduce a neural-network quantum state ansatz to model the ground-state wave function of light nuclei.
We compute the binding energies and point-nucleon densities of $A\leq 4$ nuclei as emerging from a leading-order pionless effective field theory Hamiltonian.
arXiv Detail & Related papers (2020-07-28T14:52:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.