Learning symmetries in datasets
- URL: http://arxiv.org/abs/2504.05174v1
- Date: Mon, 07 Apr 2025 15:17:41 GMT
- Title: Learning symmetries in datasets
- Authors: Veronica Sanz
- Abstract summary: We investigate how symmetries present in datasets affect the structure of the latent space learned by Variational Autoencoders (VAEs). We show that when symmetries or approximate symmetries are present, the VAE self-organizes its latent space, effectively compressing the data along a reduced number of latent variables. Our results highlight the potential of unsupervised generative models to expose underlying structures in data and offer a novel approach to symmetry discovery without explicit supervision.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: We investigate how symmetries present in datasets affect the structure of the latent space learned by Variational Autoencoders (VAEs). By training VAEs on data originating from simple mechanical systems and particle collisions, we analyze the organization of the latent space through a relevance measure that identifies the most meaningful latent directions. We show that when symmetries or approximate symmetries are present, the VAE self-organizes its latent space, effectively compressing the data along a reduced number of latent variables. This behavior captures the intrinsic dimensionality determined by the symmetry constraints and reveals hidden relations among the features. Furthermore, we provide a theoretical analysis of a simple toy model, demonstrating how, under idealized conditions, the latent space aligns with the symmetry directions of the data manifold. We illustrate these findings with examples ranging from two-dimensional datasets with $O(2)$ symmetry to realistic datasets from electron-positron and proton-proton collisions. Our results highlight the potential of unsupervised generative models to expose underlying structures in data and offer a novel approach to symmetry discovery without explicit supervision.
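The abstract does not spell out the relevance measure used to rank latent directions. A common proxy, assumed here purely for illustration, is the per-dimension KL divergence between the encoder posterior and the standard normal prior: directions the VAE actually uses retain a nonzero KL, while superfluous ones collapse onto the prior. The following minimal sketch trains a small VAE on an $O(2)$-symmetric toy dataset (noisy points on a circle, intrinsic dimension 1) and ranks latent dimensions this way; all names and hyperparameters are illustrative, not the paper's.

```python
# Minimal sketch (not the paper's exact setup): train a small VAE on
# 2D data with O(2) symmetry and rank latent dimensions by their mean
# per-dimension KL to the prior, a common proxy "relevance" measure.
import torch
import torch.nn as nn

torch.manual_seed(0)

# O(2)-symmetric toy dataset: points on a unit circle plus small noise,
# so the intrinsic dimensionality is 1 (the angle).
theta = 2 * torch.pi * torch.rand(4096)
x = torch.stack([torch.cos(theta), torch.sin(theta)], dim=1)
x = x + 0.01 * torch.randn_like(x)

latent_dim = 4  # deliberately larger than the intrinsic dimension

class VAE(nn.Module):
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(2, 64), nn.ReLU(),
                                 nn.Linear(64, 2 * latent_dim))
        self.dec = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                 nn.Linear(64, 2))

    def forward(self, x):
        mu, logvar = self.enc(x).chunk(2, dim=1)
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

for step in range(2000):
    recon, mu, logvar = model(x)
    # Per-dimension KL between q(z|x) and the standard normal prior.
    kl_per_dim = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).mean(0)
    loss = (recon - x).pow(2).sum(1).mean() + kl_per_dim.sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# "Relevance" of each latent direction: dimensions the VAE actually
# uses keep a nonzero KL; unused ones collapse toward the prior (KL ~ 0).
with torch.no_grad():
    _, mu, logvar = model(x)
    relevance = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).mean(0)
print(sorted(relevance.tolist(), reverse=True))
```

On this circle dataset one would expect roughly one latent dimension with a large relevance score, matching the intrinsic dimensionality fixed by the symmetry constraint.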
Related papers
- Finsler Multi-Dimensional Scaling: Manifold Learning for Asymmetric Dimensionality Reduction and Embedding [41.601022263772535]
Dimensionality reduction aims to simplify complex data by reducing its feature dimensionality while preserving essential patterns, with core applications in data analysis and visualisation. To preserve the underlying data structure, multi-dimensional scaling (MDS) methods focus on preserving pairwise dissimilarities, such as distances.
arXiv Detail & Related papers (2025-03-23T10:03:22Z) - Learning Infinitesimal Generators of Continuous Symmetries from Data [15.42275880523356]
We propose a novel symmetry learning algorithm based on transformations defined with one-parameter groups (a minimal sketch of such a group action appears after this list).
Our method is built upon minimal inductive biases, encompassing not only the commonly used symmetries rooted in Lie groups but also symmetries derived from nonlinear generators.
arXiv Detail & Related papers (2024-10-29T08:28:23Z) - A Generative Model of Symmetry Transformations [44.87295754993983]
We build a generative model that explicitly aims to capture the data's approximate symmetries.
We empirically demonstrate its ability to capture symmetries under affine and color transformations.
arXiv Detail & Related papers (2024-03-04T11:32:18Z) - Building symmetries into data-driven manifold dynamics models for complex flows: application to two-dimensional Kolmogorov flow [0.0]
Many flows obey symmetries, and the present work illustrates how these can be exploited to yield highly efficient low-dimensional data-driven models for chaotic flows.
In particular, incorporating symmetries both guarantees that the reduced order model automatically respects them and dramatically increases the effective density of data sampling.
arXiv Detail & Related papers (2023-12-15T22:05:21Z) - Latent Space Symmetry Discovery [31.28537696897416]
We propose a novel generative model, Latent LieGAN, which can discover symmetries of nonlinear group actions.
We show that our model can express nonlinear symmetries under certain conditions on the group action.
LaLiGAN also results in a well-structured latent space that is useful for downstream tasks including equation discovery and long-term forecasting.
arXiv Detail & Related papers (2023-09-29T19:33:01Z) - Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimensional modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z) - Capturing dynamical correlations using implicit neural representations [85.66456606776552]
We develop an artificial intelligence framework which combines a neural network trained to mimic simulated data from a model Hamiltonian with automatic differentiation to recover unknown parameters from experimental data.
In doing so, we illustrate the ability to build and train a differentiable model only once, which then can be applied in real-time to multi-dimensional scattering data.
arXiv Detail & Related papers (2023-04-08T07:55:36Z) - Oracle-Preserving Latent Flows [58.720142291102135]
We develop a methodology for the simultaneous discovery of multiple nontrivial continuous symmetries across an entire labelled dataset.
The symmetry transformations and the corresponding generators are modeled with fully connected neural networks trained with a specially constructed loss function.
The two new elements in this work are the use of a reduced-dimensionality latent space and the generalization to transformations invariant with respect to high-dimensional oracles.
arXiv Detail & Related papers (2023-02-02T00:13:32Z) - Analytical Modelling of Exoplanet Transit Spectroscopy with Dimensional
Analysis and Symbolic Regression [68.8204255655161]
The deep learning revolution has opened the door for deriving such analytical results directly with a computer algorithm fitted to the data.
We successfully demonstrate the use of symbolic regression on synthetic data for the transit radii of generic hot Jupiter exoplanets.
As a preprocessing step, we use dimensional analysis to identify the relevant dimensionless combinations of variables.
arXiv Detail & Related papers (2021-12-22T00:52:56Z) - GELATO: Geometrically Enriched Latent Model for Offline Reinforcement
Learning [54.291331971813364]
Offline reinforcement learning approaches can be divided into proximal and uncertainty-aware methods.
In this work, we demonstrate the benefit of combining the two in a latent variational model.
Our proposed metrics measure both the quality of out-of-distribution samples and the discrepancy of examples in the data.
arXiv Detail & Related papers (2021-02-22T19:42:40Z) - Inverse Learning of Symmetries [71.62109774068064]
We learn the symmetry transformation with a model consisting of two latent subspaces.
Our approach is based on the deep information bottleneck in combination with a continuous mutual information regulariser.
Our model outperforms state-of-the-art methods on artificial and molecular datasets.
arXiv Detail & Related papers (2020-02-07T13:48:52Z)
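Several of the papers above, e.g. "Learning Infinitesimal Generators of Continuous Symmetries from Data", work with one-parameter groups $x \mapsto \exp(tG)\,x$ generated by a matrix $G$. As a point of reference, here is a minimal sketch of such a group action; the generator is fixed by hand to the $SO(2)$ rotation generator rather than learned, so it illustrates the object being learned, not any paper's actual method.

```python
# Minimal sketch of a one-parameter group action x -> exp(t G) x.
# Here G is fixed by hand to the generator of 2D rotations; in the
# cited work the generator would itself be learned from data.
import torch

G = torch.tensor([[0.0, -1.0],
                  [1.0,  0.0]])  # generator of SO(2) rotations

def apply_group(x, t):
    """Apply the one-parameter transformation exp(t G) to points x of shape (N, 2)."""
    return x @ torch.matrix_exp(t * G).T

x = torch.tensor([[1.0, 0.0]])
print(apply_group(x, torch.tensor(torch.pi / 2)))  # rotation by pi/2: ~ [[0., 1.]]
```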
This list is automatically generated from the titles and abstracts of the papers in this site.