PELICAN: Permutation Equivariant and Lorentz Invariant or Covariant
Aggregator Network for Particle Physics
- URL: http://arxiv.org/abs/2211.00454v1
- Date: Tue, 1 Nov 2022 13:36:50 GMT
- Authors: Alexander Bogatskiy, Timothy Hoffman, David W. Miller, Jan T.
Offermann
- Abstract summary: We present a machine learning architecture that uses a set of inputs maximally reduced with respect to the full 6-dimensional Lorentz symmetry.
We show that the resulting network outperforms all existing competitors despite much lower model complexity.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Many current approaches to machine learning in particle physics use generic
architectures that require large numbers of parameters and disregard underlying
physics principles, limiting their applicability as scientific modeling tools.
In this work, we present a machine learning architecture that uses a set of
inputs maximally reduced with respect to the full 6-dimensional Lorentz
symmetry, and is fully permutation-equivariant throughout. We study the
application of this network architecture to the standard task of top quark
tagging and show that the resulting network outperforms all existing
competitors despite much lower model complexity. In addition, we present a
Lorentz-covariant variant of the same network applied to a 4-momentum
regression task.
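The "maximally reduced" input set the abstract refers to can be made concrete: for N particles, the Lorentz-invariant information in the event is essentially the N x N matrix of pairwise Minkowski dot products. Below is a minimal sketch of that reduction, assuming the common (E, px, py, pz) component ordering and the (+, -, -, -) metric signature; the function name is illustrative and not taken from the paper's code.

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -)
MINKOWSKI = np.diag([1.0, -1.0, -1.0, -1.0])

def pairwise_invariants(p):
    """Pairwise Minkowski dot products d_ij = p_i . p_j.

    p: array of shape (N, 4), one (E, px, py, pz) 4-momentum per row.
    The diagonal entries are the squared masses m_i^2.
    """
    return p @ MINKOWSKI @ p.T

# Two back-to-back massless particles: diagonal ~ 0, off-diagonal = 2 * E1 * E2
p = np.array([[1.0, 0.0, 0.0,  1.0],
              [1.0, 0.0, 0.0, -1.0]])
print(pairwise_invariants(p))
```

Under a relabeling of the particles this matrix transforms by the same permutation applied to its rows and columns, which is exactly the rank-2 structure a permutation-equivariant aggregator can consume.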
Related papers
- Lorentz-Equivariant Geometric Algebra Transformers for High-Energy Physics [4.4970885242855845]
Lorentz Geometric Algebra Transformer (L-GATr) is a new multi-purpose architecture for high-energy physics.
L-GATr is first demonstrated on regression and classification tasks from particle physics.
We then construct the first Lorentz-equivariant generative model: a continuous normalizing flow based on an L-GATr network.
arXiv Detail & Related papers (2024-05-23T17:15:41Z)
- Universal Neural Functionals [67.80283995795985]
A challenging problem in many modern machine learning tasks is to process weight-space features.
Recent works have developed promising weight-space models that are equivariant to the permutation symmetries of simple feedforward networks.
This work proposes an algorithm that automatically constructs permutation equivariant models for any weight space.
arXiv Detail & Related papers (2024-02-07T20:12:27Z)
- Explainable Equivariant Neural Networks for Particle Physics: PELICAN [51.02649432050852]
PELICAN is a novel permutation equivariant and Lorentz invariant aggregator network.
We present a study of the PELICAN algorithm architecture in the context of both tagging (classification) and reconstructing (regression) Lorentz-boosted top quarks.
We extend the application of PELICAN to the tasks of identifying quark-initiated vs. gluon-initiated jets, and to multi-class identification across five separate target categories of jets.
arXiv Detail & Related papers (2023-07-31T09:08:40Z)
- FAENet: Frame Averaging Equivariant GNN for Materials Modeling [123.19473575281357]
We introduce a flexible framework relying on stochastic frame averaging (SFA) to make any model E(3)-equivariant or invariant through data transformations.
We prove the validity of our method theoretically and empirically demonstrate its superior accuracy and computational scalability in materials modeling.
arXiv Detail & Related papers (2023-04-28T21:48:31Z)
- Semi-Equivariant GNN Architectures for Jet Tagging [1.6626046865692057]
We present the novel architecture VecNet, which combines symmetry-respecting and unconstrained operations to study and tune the degree to which GNNs are physics-informed.
We find that a generalized architecture such as ours can deliver optimal performance in resource-constrained applications.
arXiv Detail & Related papers (2022-02-14T18:57:12Z)
- Generalization capabilities of neural networks in lattice applications [0.0]
We investigate the advantages of adopting translationally equivariant neural networks over non-equivariant ones.
We show that our best equivariant architectures can perform and generalize significantly better than their non-equivariant counterparts.
arXiv Detail & Related papers (2021-12-23T11:48:06Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types; a minimal sketch of the recipe appears after this list.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- Lorentz Group Equivariant Neural Network for Particle Physics [58.56031187968692]
We present a neural network architecture that is fully equivariant with respect to transformations under the Lorentz group.
For classification tasks in particle physics, we demonstrate that such an equivariant architecture leads to drastically simpler models that have relatively few learnable parameters.
arXiv Detail & Related papers (2020-06-08T17:54:43Z)
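The frame-averaging recipe behind the two frame-averaging entries above admits a short illustration. The sketch below is written under stated assumptions rather than as either paper's implementation: it builds PCA-based frames for a 3D point cloud (one common choice for rotation symmetry), and `backbone` stands in for any hypothetical non-equivariant model.

```python
import itertools
import numpy as np

def pca_frames(x):
    """Candidate orthonormal frames for a point cloud x of shape (N, 3).

    PCA axes are defined only up to sign, so all sign flips are
    enumerated, keeping determinant +1 to stay inside the rotations.
    """
    x = x - x.mean(axis=0)             # remove translations
    _, vecs = np.linalg.eigh(x.T @ x)  # columns are principal axes
    frames = []
    for signs in itertools.product([1.0, -1.0], repeat=3):
        R = vecs * np.array(signs)     # flip column signs
        if np.linalg.det(R) > 0:       # proper rotations only
            frames.append(R)
    return frames

def frame_average(backbone, x):
    """Rotation-invariant prediction by averaging over the frame:
    <f>(x) = (1/|F(x)|) * sum over g in F(x) of f(g^{-1} x).
    """
    x = x - x.mean(axis=0)
    outs = [backbone(x @ R) for R in pca_frames(x)]  # x @ R applies g^{-1} row-wise
    return np.mean(outs, axis=0)
```

Averaging f(g^{-1} x) over the frame yields an invariant model; the equivariant variant additionally transforms each output by g before averaging.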