Lorentz group equivariant autoencoders
- URL: http://arxiv.org/abs/2212.07347v2
- Date: Sat, 10 Jun 2023 23:28:30 GMT
- Title: Lorentz group equivariant autoencoders
- Authors: Zichun Hao, Raghav Kansal, Javier Duarte, Nadezda Chernyavskaya
- Abstract summary: Lorentz group autoencoder (LGAE)
We develop an autoencoder model equivariant with respect to the proper, orthochronous Lorentz group $\mathrm{SO}^+(3,1)$, with a latent space living in the representations of the group.
We present our architecture and several experimental results on jets at the LHC and find it outperforms graph and convolutional neural network baseline models on several compression, reconstruction, and anomaly detection metrics.
- Score: 6.858459233149096
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There has been significant work recently in developing machine learning (ML)
models in high energy physics (HEP) for tasks such as classification,
simulation, and anomaly detection. Often these models are adapted from those
designed for datasets in computer vision or natural language processing, which
lack inductive biases suited to HEP data, such as equivariance to its inherent
symmetries. Such biases have been shown to make models more performant and
interpretable, and reduce the amount of training data needed. To that end, we
develop the Lorentz group autoencoder (LGAE), an autoencoder model equivariant
with respect to the proper, orthochronous Lorentz group $\mathrm{SO}^+(3,1)$,
with a latent space living in the representations of the group. We present our
architecture and several experimental results on jets at the LHC and find it
outperforms graph and convolutional neural network baseline models on several
compression, reconstruction, and anomaly detection metrics. We also demonstrate
the advantage of such an equivariant model in analyzing the latent space of the
autoencoder, which can improve the explainability of potential anomalies
discovered by such ML models.
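The core idea of the abstract — a map on four-vectors that commutes with Lorentz transformations — can be illustrated with a small sketch. This is not the LGAE architecture itself; it is a toy equivariant map (invariant Minkowski inner products used as weights on covariant vectors) that demonstrates the equivariance property $f(\Lambda p) = \Lambda f(p)$ numerically, using an illustrative boost along the x-axis:

```python
import numpy as np

# Minkowski metric with signature (+, -, -, -)
ETA = np.diag([1.0, -1.0, -1.0, -1.0])

def boost_x(beta):
    """Lorentz boost along the x-axis with velocity beta (in units of c)."""
    gamma = 1.0 / np.sqrt(1.0 - beta**2)
    L = np.eye(4)
    L[0, 0] = L[1, 1] = gamma
    L[0, 1] = L[1, 0] = -gamma * beta
    return L

def minkowski_dot(p, q):
    """Lorentz-invariant inner product p^mu eta_{mu nu} q^nu."""
    return p @ ETA @ q

def toy_equivariant_map(P):
    """A toy SO+(3,1)-equivariant map on a set of four-vectors P (shape [n, 4]):
    each output is a sum of the inputs weighted by pairwise Minkowski inner
    products, which are Lorentz scalars, so boosts commute with the map."""
    G = P @ ETA @ P.T  # Gram matrix of invariants, shape [n, n]
    return G @ P       # invariant weights times covariant vectors

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 4))  # three random four-vectors (rows)
L = boost_x(0.6)

# Equivariance: f(L P) == L f(P), i.e. boosting inputs equals boosting outputs
lhs = toy_equivariant_map(P @ L.T)
rhs = toy_equivariant_map(P) @ L.T
assert np.allclose(lhs, rhs)

# Invariance of the Minkowski inner product under the boost
p, q = P[0], P[1]
assert np.isclose(minkowski_dot(L @ p, L @ q), minkowski_dot(p, q))
```

Any network built from such invariant-weighted combinations of four-vectors is equivariant by construction, which is the inductive bias the LGAE exploits for its latent space.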
Related papers
- Lie Algebra Canonicalization: Equivariant Neural Operators under arbitrary Lie Groups [11.572188414440436]
We propose Lie aLgebrA Canonicalization (LieLAC), a novel approach that exploits only the action of infinitesimal generators of the symmetry group.
Operating within the framework of canonicalization, LieLAC can easily be integrated with unconstrained pre-trained models.
arXiv Detail & Related papers (2024-10-03T17:21:30Z)
- Provably Trainable Rotationally Equivariant Quantum Machine Learning [0.6435156676256051]
We introduce a family of rotationally equivariant QML models built upon the quantum Fourier transform.
We numerically test our models on a dataset of simulated scanning tunnelling microscope images of phosphorus impurities in silicon.
arXiv Detail & Related papers (2023-11-10T05:10:06Z)
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimension modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- Disentanglement via Latent Quantization [60.37109712033694]
In this work, we construct an inductive bias towards encoding to and decoding from an organized latent space.
We demonstrate the broad applicability of this approach by adding it to both basic data-reconstructing (vanilla autoencoder) and latent-reconstructing (InfoGAN) generative models.
arXiv Detail & Related papers (2023-05-28T06:30:29Z)
- Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z)
- VTAE: Variational Transformer Autoencoder with Manifolds Learning [144.0546653941249]
Deep generative models have demonstrated successful applications in learning non-linear data distributions through a number of latent variables.
The nonlinearity of the generator means the latent space yields an unsatisfactory projection of the data space, which results in poor representation learning.
We show that geodesics and accurate computation can substantially improve the performance of deep generative models.
arXiv Detail & Related papers (2023-04-03T13:13:19Z)
- On the Generalization and Adaption Performance of Causal Models [99.64022680811281]
Differentiable causal discovery proposes to factorize the data generating process into a set of modules.
We study the generalization and adaption performance of such modular neural causal models.
Our analysis shows that the modular neural causal models outperform other models on both zero and few-shot adaptation in low data regimes.
arXiv Detail & Related papers (2022-06-09T17:12:32Z)
- Equivariant vector field network for many-body system modeling [65.22203086172019]
Equivariant Vector Field Network (EVFN) is built on a novel equivariant basis and the associated scalarization and vectorization layers.
We evaluate our method on predicting trajectories of simulated Newton mechanics systems with both full and partially observed data.
arXiv Detail & Related papers (2021-10-26T14:26:25Z)
- Disentangled Representation Learning and Generation with Manifold Optimization [10.69910379275607]
This work presents a representation learning framework that explicitly promotes disentanglement by encouraging directions of variation.
Our theoretical discussion and various experiments show that the proposed model improves over many VAE variants in terms of both generation quality and disentangled representation learning.
arXiv Detail & Related papers (2020-06-12T10:00:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.