Lorentz Local Canonicalization: How to Make Any Network Lorentz-Equivariant
- URL: http://arxiv.org/abs/2505.20280v1
- Date: Mon, 26 May 2025 17:57:17 GMT
- Title: Lorentz Local Canonicalization: How to Make Any Network Lorentz-Equivariant
- Authors: Jonas Spinner, Luigi Favaro, Peter Lippmann, Sebastian Pitz, Gerrit Gerhartz, Tilman Plehn, Fred A. Hamprecht
- Abstract summary: Lorentz-equivariant neural networks are becoming the leading architectures for high-energy physics. We introduce Lorentz Local Canonicalization (LLoCa), a general framework that renders any backbone network exactly Lorentz-equivariant. Our models surpass state-of-the-art accuracy on relevant particle physics tasks, while being $4\times$ faster and using $5$-$100\times$ fewer FLOPs.
- Score: 12.763777716363016
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Lorentz-equivariant neural networks are becoming the leading architectures for high-energy physics. Current implementations rely on specialized layers, limiting architectural choices. We introduce Lorentz Local Canonicalization (LLoCa), a general framework that renders any backbone network exactly Lorentz-equivariant. Using equivariantly predicted local reference frames, we construct LLoCa-transformers and graph networks. We adapt a recent approach to geometric message passing to the non-compact Lorentz group, allowing propagation of space-time tensorial features. Data augmentation emerges from LLoCa as a special choice of reference frame. Our models surpass state-of-the-art accuracy on relevant particle physics tasks, while being $4\times$ faster and using $5$-$100\times$ fewer FLOPs.
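The core mechanism lends itself to a compact prototype: an equivariant sub-network predicts a local reference frame (a Lorentz transformation) for each particle, all four-momenta are expressed in those frames, and an arbitrary backbone then operates on the resulting invariant components. The sketch below is a minimal illustration of this idea, assuming a Minkowski Gram-Schmidt frame construction; all function names are hypothetical and this is not the authors' implementation.

```python
import torch

METRIC = torch.diag(torch.tensor([1.0, -1.0, -1.0, -1.0]))  # signature (+,-,-,-)

def minkowski_dot(u, v):
    """Minkowski inner product <u, v> = u^T g v (batched over leading dims)."""
    return torch.einsum("...i,ij,...j->...", u, METRIC, v)

def gram_schmidt_frame(candidates):
    """Minkowski Gram-Schmidt: turn 4 candidate 4-vectors into a Lorentz frame.

    candidates: (..., 4, 4); row 0 should be timelike (e.g. a sum of input
    momenta). Null or degenerate intermediate vectors are not handled here;
    a real implementation would need to regularize those cases.
    """
    basis = []
    for k in range(4):
        v = candidates[..., k, :]
        for b in basis:
            # <b, b> is +1 or -1 after normalization, so divide by it.
            v = v - (minkowski_dot(v, b) / minkowski_dot(b, b)).unsqueeze(-1) * b
        v = v / minkowski_dot(v, v).abs().clamp(min=1e-8).sqrt().unsqueeze(-1)
        basis.append(v)
    return torch.stack(basis, dim=-2)  # rows e_0..e_3 with Lambda g Lambda^T = g

def canonicalize(momenta, frames):
    """Express all momenta in each particle's local frame.

    momenta: (N, 4), frames: (N, 4, 4). Returns (N, N, 4): entry [i, j] is
    p_j in the frame of particle i. These components are Lorentz-invariant,
    because a global transform rotates the predicted frames along with the
    momenta; mapping the backbone's vector/tensor outputs back with the
    frames then restores exact equivariance.
    """
    # Components in a Minkowski-orthonormal basis: p'^a = g^{ab} <e_b, p>.
    return torch.einsum("ab,nbc,cd,md->nma", METRIC, frames, METRIC, momenta)
```

One simple equivariant choice for the candidate vectors is a set of learned (e.g. attention-weighted) linear combinations of the input four-momenta, since such combinations transform covariantly; per the abstract, choosing a random Lorentz transformation as the frame instead recovers plain data augmentation.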
Related papers
- Lie-Equivariant Quantum Graph Neural Networks [4.051777802443125]
Binary classification tasks are ubiquitous in analyses of the vast amounts of LHC data.
We develop the Lie-Equivariant Quantum Graph Neural Network (Lie-EQGNN), a quantum model that is not only data-efficient but also symmetry-preserving.
arXiv Detail & Related papers (2024-11-22T19:15:13Z)
- A Lorentz-Equivariant Transformer for All of the LHC [5.329375781648604]
We show that the Lorentz-Equivariant Geometric Algebra Transformer (L-GATr) yields state-of-the-art performance for a wide range of machine learning tasks at the Large Hadron Collider.
arXiv Detail & Related papers (2024-11-01T08:40:42Z)
- Lorentz-Equivariant Geometric Algebra Transformers for High-Energy Physics [4.4970885242855845]
Lorentz Geometric Algebra Transformer (L-GATr) is a new multi-purpose architecture for high-energy physics.
L-GATr is first demonstrated on regression and classification tasks from particle physics.
We then construct the first Lorentz-equivariant generative model: a continuous normalizing flow based on an L-GATr network.
arXiv Detail & Related papers (2024-05-23T17:15:41Z)
- Kronecker-Factored Approximate Curvature for Modern Neural Network Architectures [85.76673783330334]
Two different settings of linear weight-sharing layers motivate two flavours of Kronecker-Factored Approximate Curvature (K-FAC).
We show they are exact for deep linear networks with weight-sharing in their respective setting.
We observe little difference between these two K-FAC variations when using them to train both a graph neural network and a vision transformer.
arXiv Detail & Related papers (2023-11-01T16:37:00Z)
- 19 Parameters Is All You Need: Tiny Neural Networks for Particle Physics [52.42485649300583]
We present the potential of one recent Lorentz- and permutation-symmetric architecture, PELICAN, for low-latency neural network tasks.
We show its instances with as few as 19 trainable parameters that outperform generic architectures with tens of thousands of parameters when compared on the binary classification task of top quark jet tagging.
arXiv Detail & Related papers (2023-10-24T18:51:22Z)
- Explainable Equivariant Neural Networks for Particle Physics: PELICAN [51.02649432050852]
PELICAN is a novel permutation equivariant and Lorentz invariant aggregator network.
We present a study of the PELICAN algorithm architecture in the context of both tagging (classification) and reconstructing (regression) Lorentz-boosted top quarks.
We extend the application of PELICAN to the tasks of identifying quark-initiated vs. gluon-initiated jets, and to multi-class identification across five separate target categories of jets.
arXiv Detail & Related papers (2023-07-31T09:08:40Z)
- PELICAN: Permutation Equivariant and Lorentz Invariant or Covariant Aggregator Network for Particle Physics [64.5726087590283]
We present a machine learning architecture that uses a set of inputs maximally reduced with respect to the full 6-dimensional Lorentz symmetry (a minimal sketch of such inputs appears after this list).
We show that the resulting network outperforms all existing competitors despite much lower model complexity.
arXiv Detail & Related papers (2022-11-01T13:36:50Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs (a minimal sketch of frame averaging appears after this list).
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- Lorentz Group Equivariant Neural Network for Particle Physics [58.56031187968692]
We present a neural network architecture that is fully equivariant with respect to transformations under the Lorentz group.
For classification tasks in particle physics, we demonstrate that such an equivariant architecture leads to drastically simpler models that have relatively few learnable parameters.
arXiv Detail & Related papers (2020-06-08T17:54:43Z)
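As referenced in the PELICAN entry above, reducing the inputs with respect to the full Lorentz symmetry amounts, for a cloud of four-momenta, to keeping only their pairwise Minkowski dot products. A minimal sketch, with hypothetical names:

```python
import torch

METRIC = torch.diag(torch.tensor([1.0, -1.0, -1.0, -1.0]))  # signature (+,-,-,-)

def pairwise_invariants(momenta):
    """All Minkowski dot products d_ij = <p_i, p_j>.

    momenta: (N, 4) four-momenta. Returns (N, N); these scalars are
    unchanged under any global Lorentz transformation of the inputs.
    """
    return momenta @ METRIC @ momenta.T
```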
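For contrast with LLoCa's locally predicted frames, here is a minimal sketch of frame averaging for an E(3)-invariant point-cloud model, following the PCA-frame recipe described in the Frame Averaging paper; the `backbone` argument and all function names are illustrative assumptions.

```python
import itertools
import torch

def pca_frames(points):
    """Build the PCA frame set of a 3D point cloud (up to 2^3 sign choices).

    points: (N, 3). Returns (frames, centroid); each frame is a (3, 3)
    orthogonal matrix whose columns are principal axes with chosen signs.
    Assumes non-degenerate eigenvalues, as in the original construction.
    """
    centroid = points.mean(dim=0)
    centered = points - centroid
    cov = centered.T @ centered / points.shape[0]
    _, eigvecs = torch.linalg.eigh(cov)  # columns: principal axes
    frames = [eigvecs * torch.tensor(signs)
              for signs in itertools.product((1.0, -1.0), repeat=3)]
    return frames, centroid

def frame_average(backbone, points):
    """E(3)-invariant prediction: average the backbone over all frame views.

    Rotating or translating `points` realigns the frames with them, so the
    averaged output is exactly invariant even if `backbone` is not.
    """
    frames, centroid = pca_frames(points)
    views = [(points - centroid) @ frame for frame in frames]
    return torch.stack([backbone(v) for v in views]).mean(dim=0)
```

Frame averaging evaluates the backbone once per frame (here up to eight views), whereas LLoCa attaches a single equivariantly predicted frame to each particle, which is part of what keeps its backbone-agnostic construction cheap.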