Semi-Equivariant GNN Architectures for Jet Tagging
- URL: http://arxiv.org/abs/2202.06941v1
- Date: Mon, 14 Feb 2022 18:57:12 GMT
- Title: Semi-Equivariant GNN Architectures for Jet Tagging
- Authors: Daniel Murnane, Savannah Thais and Jason Wong
- Abstract summary: We present the novel architecture VecNet that combines symmetry-respecting and unconstrained operations to study and tune the degree to which a GNN is physics-informed.
We find that a generalized architecture such as ours can deliver optimal performance in resource-constrained applications.
- Score: 1.6626046865692057
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Composing Graph Neural Networks (GNNs) of operations that respect physical
symmetries has been suggested to give better model performance with a smaller
number of learnable parameters. However, real-world applications, such as those in
high energy physics, have not borne this out. We present the novel architecture
VecNet that combines both symmetry-respecting and unconstrained operations to
study and tune the degree to which a GNN is physics-informed. We introduce a novel
metric, the \textit{ant factor}, to quantify the resource-efficiency of each
configuration in the search-space. We find that a generalized architecture such
as ours can deliver optimal performance in resource-constrained applications.
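The abstract does not spell out how the symmetry-respecting and unconstrained operations are combined, nor how the ant factor is computed, so the sketch below is only a hedged illustration of the general idea: an equivariant vector channel gated by invariant scalars, an unconstrained dense channel acting on invariant inputs, and a performance-per-parameter efficiency score. The function names, shapes, and formulas here are assumptions for illustration, not the paper's definitions.

```python
import numpy as np


def semi_equivariant_block(x_vec, h, w_gate, w_free, w_mix):
    """Toy node update mixing a symmetry-respecting channel with an
    unconstrained one (an illustration only, not the VecNet layer).

    x_vec  : (N, 3)   per-particle vector features, e.g. momenta
    h      : (N, d)   rotation-invariant scalar features
    w_gate : (d, 1)   weights of the equivariant (gated-vector) channel
    w_free : (d+1, k) weights of the unconstrained channel
    w_mix  : (k+1, d) weights recombining both channels into new scalars
    """
    # Equivariant channel: vectors are only rescaled by invariant gates,
    # so the output rotates exactly as the input does.
    gate = np.tanh(h @ w_gate)                          # (N, 1)
    x_out = gate * x_vec                                # (N, 3)

    # Unconstrained channel: an ordinary dense layer on invariant inputs
    # (scalars plus the vector norm); no symmetry is enforced here.
    norms = np.linalg.norm(x_vec, axis=1, keepdims=True)
    free = np.tanh(np.concatenate([h, norms], axis=1) @ w_free)   # (N, k)

    # New invariant features draw on both channels; the relative widths of
    # the two channels act as the knob for how physics-informed the model is.
    out_norms = np.linalg.norm(x_out, axis=1, keepdims=True)
    h_out = np.tanh(np.concatenate([out_norms, free], axis=1) @ w_mix)
    return x_out, h_out


def efficiency_score(auc, n_params):
    """Illustrative resource-efficiency score: reward classification
    performance and penalise parameter count. The ant factor defined in
    the paper is in this spirit, but its exact form should be taken from
    the paper itself."""
    return 1.0 / ((1.0 - auc) * n_params)
```

A hyperparameter scan over the widths of the two channels can then rank configurations by such an efficiency score under a fixed resource budget.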
Related papers
- Optimal Equivariant Architectures from the Symmetries of Matrix-Element Likelihoods [0.0]
The Matrix-Element Method (MEM) has long been a cornerstone of data analysis in high-energy physics.
Geometric deep learning has enabled neural network architectures that incorporate known symmetries directly into their design.
This paper presents a novel approach that combines MEM-inspired symmetry considerations with equivariant neural network design for particle physics analysis.
arXiv Detail & Related papers (2024-10-24T08:56:37Z) - Enhancing lattice kinetic schemes for fluid dynamics with Lattice-Equivariant Neural Networks [79.16635054977068]
We present a new class of equivariant neural networks, dubbed Lattice-Equivariant Neural Networks (LENNs)
Our approach develops within a recently introduced framework aimed at learning neural network-based surrogate models of Lattice Boltzmann collision operators.
Our work opens the way towards practical use of machine learning-augmented Lattice Boltzmann CFD in real-world simulations.
arXiv Detail & Related papers (2024-05-22T17:23:15Z) - A quantum inspired neural network for geometric modeling [14.214656118952178]
We introduce an innovative equivariant Matrix Product State (MPS)-based message-passing strategy.
Our method effectively models complex many-body relationships, suppressing mean-field approximations.
It seamlessly replaces the standard message-passing and layer-aggregation modules intrinsic to geometric GNNs.
arXiv Detail & Related papers (2024-01-03T15:59:35Z) - 19 Parameters Is All You Need: Tiny Neural Networks for Particle Physics [52.42485649300583]
We present the potential of one recent Lorentz- and permutation-symmetric architecture, PELICAN, for low-latency neural network tasks.
We show its instances with as few as 19 trainable parameters that outperform generic architectures with tens of thousands of parameters when compared on the binary classification task of top quark jet tagging.
arXiv Detail & Related papers (2023-10-24T18:51:22Z) - FAENet: Frame Averaging Equivariant GNN for Materials Modeling [123.19473575281357]
We introduce a flexible framework relying on stochastic frame-averaging (SFA) to make any model E(3)-equivariant or invariant through data transformations.
We prove the validity of our method theoretically and empirically demonstrate its superior accuracy and computational scalability in materials modeling.
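For context, the data-transformation mechanism referenced above is frame averaging: an arbitrary backbone is evaluated on canonicalized copies of the input and the predictions are averaged, which enforces the symmetry without constraining the backbone itself (the stochastic variant samples a single frame per pass). The sketch below is a minimal, assumed illustration using PCA-derived frames and a placeholder `model` callable; it is not FAENet's actual implementation.

```python
import numpy as np
from itertools import product


def pca_frames(pos):
    """Candidate frames from PCA of the centred point cloud: the
    eigenvector basis with every sign choice that keeps det = +1."""
    centred = pos - pos.mean(axis=0)
    _, eigvecs = np.linalg.eigh(centred.T @ centred)
    frames = []
    for signs in product([1.0, -1.0], repeat=3):
        r = eigvecs * np.array(signs)      # flip eigenvector signs
        if np.linalg.det(r) > 0:           # keep proper rotations only
            frames.append(r)
    return centred, frames


def frame_averaged_predict(model, pos, feats):
    """Average an arbitrary, non-equivariant model over all PCA frames;
    the averaged scalar prediction is rotation-invariant by construction."""
    centred, frames = pca_frames(pos)
    preds = [model(centred @ r, feats) for r in frames]
    return np.mean(preds, axis=0)
```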
arXiv Detail & Related papers (2023-04-28T21:48:31Z) - Equivariant Graph Neural Networks for Charged Particle Tracking [1.6626046865692057]
EuclidNet is a novel symmetry-equivariant GNN for charged particle tracking.
We benchmark it against the state-of-the-art Interaction Network on the TrackML dataset.
Our results show that EuclidNet achieves near-state-of-the-art performance at small model scales.
arXiv Detail & Related papers (2023-04-11T15:43:32Z) - Connectivity Optimized Nested Graph Networks for Crystal Structures [1.1470070927586016]
Graph neural networks (GNNs) have been applied to a large variety of applications in materials science and chemistry.
We show that our suggested models systematically improve state-of-the-art results across all tasks within the MatBench benchmark.
arXiv Detail & Related papers (2023-02-27T19:26:48Z) - PELICAN: Permutation Equivariant and Lorentz Invariant or Covariant
Aggregator Network for Particle Physics [64.5726087590283]
We present a machine learning architecture that uses a set of inputs maximally reduced with respect to the full 6-dimensional Lorentz symmetry.
We show that the resulting network outperforms all existing competitors despite much lower model complexity.
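The "maximally reduced" inputs referred to above are, in essence, Lorentz-invariant quantities built from the constituents' four-momenta; pairwise Minkowski dot products are the standard example. The snippet below sketches only that reduction; PELICAN's actual input pipeline and any additional invariants should be taken from the paper.

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -)
ETA = np.diag([1.0, -1.0, -1.0, -1.0])


def pairwise_invariants(p):
    """Pairwise Minkowski dot products p_i . p_j of jet-constituent
    four-momenta p with shape (N, 4), ordered as (E, px, py, pz).

    The resulting N x N matrix is invariant under the full Lorentz group
    and transforms equivariantly under permutations of the constituents,
    i.e. the kind of reduced input a PELICAN-style aggregator consumes."""
    return p @ ETA @ p.T
```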
arXiv Detail & Related papers (2022-11-01T13:36:50Z) - A Comprehensive Study on Large-Scale Graph Training: Benchmarking and
Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensemble-based training scheme, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z) - Universal approximation property of invertible neural networks [76.95927093274392]
Invertible neural networks (INNs) are neural network architectures with invertibility by design.
Thanks to their invertibility and the tractability of their Jacobians, INNs have various machine learning applications such as probabilistic modeling, generative modeling, and representation learning.
arXiv Detail & Related papers (2022-04-15T10:45:26Z)
This list is automatically generated from the titles and abstracts of the papers on this site.