Learning equivariant models by discovering symmetries with learnable augmentations
- URL: http://arxiv.org/abs/2506.03914v1
- Date: Wed, 04 Jun 2025 13:06:05 GMT
- Title: Learning equivariant models by discovering symmetries with learnable augmentations
- Authors: Eduardo Santos Escriche, Stefanie Jegelka
- Abstract summary: SEMoLA is an end-to-end approach that jointly discovers a priori unknown symmetries in the data via learnable data augmentations. We demonstrate the ability of SEMoLA to robustly discover relevant symmetries while achieving high prediction accuracy across various datasets.
- Score: 30.258169775217926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, a trend has emerged that favors learning relevant symmetries from data in geometric domains instead of designing constrained architectures. To do so, two popular options are (1) to modify the training protocol, e.g., with a specific loss and data augmentations (soft equivariance), or (2) to ignore equivariance and infer it only implicitly. However, both options have limitations: soft equivariance requires a priori knowledge about relevant symmetries, while inferring symmetries merely via the task and larger data lacks interpretability. To address both limitations, we propose SEMoLA, an end-to-end approach that jointly (1) discovers a priori unknown symmetries in the data via learnable data augmentations, and (2) softly encodes the respective approximate equivariance into an arbitrary unconstrained model. Hence, it does not need prior knowledge about symmetries, it offers interpretability, and it maintains robustness to distribution shifts. Empirically, we demonstrate the ability of SEMoLA to robustly discover relevant symmetries while achieving high prediction accuracy across various datasets, encompassing multiple data modalities and underlying symmetry groups.
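To make the two ingredients concrete, here is a minimal hypothetical sketch (not the authors' code; it assumes vector data in R^2 and a rotation group, whereas SEMoLA targets general, a priori unknown symmetries): an augmentation distribution whose extent is a learnable parameter, trained jointly with a soft-equivariance penalty on an unconstrained model.

```python
# Hypothetical sketch of the two ingredients (not the authors' implementation):
# (1) an augmentation distribution with a learnable extent, and
# (2) a soft-equivariance penalty on an otherwise unconstrained model.
import torch
import torch.nn as nn

class LearnableRotationAug(nn.Module):
    """Samples 2D rotations from U(-r, r); the range r is learned."""
    def __init__(self):
        super().__init__()
        self.log_r = nn.Parameter(torch.tensor(0.0))

    def sample(self, n):
        theta = (torch.rand(n) * 2 - 1) * self.log_r.exp()
        c, s = torch.cos(theta), torch.sin(theta)
        return torch.stack([torch.stack([c, -s], dim=-1),
                            torch.stack([s, c], dim=-1)], dim=-2)  # (n, 2, 2)

def semola_style_loss(model, aug, x, y, task_loss, lam=1.0):
    """Task loss plus a penalty tying f(g x) to g f(x) for sampled g."""
    g = aug.sample(x.shape[0])                        # sampled group elements
    pred = model(x)                                   # f(x), assumed in R^2
    pred_g = model(torch.einsum('nij,nj->ni', g, x))  # f(g x)
    gap = (pred_g - torch.einsum('nij,nj->ni', g, pred)).pow(2).mean()
    return task_loss(pred, y) + lam * gap
```

After training, the learned range parameter itself is the interpretable artifact: its value indicates how much of the candidate symmetry the data actually supports.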
Related papers
- On Equivariant Model Selection through the Lens of Uncertainty [49.137341292207]
Equivariant models leverage prior knowledge on symmetries to improve predictive performance, but misspecified architectural constraints can harm it instead.
We compare frequentist (via Conformal Prediction), Bayesian (via the marginal likelihood), and calibration-based measures to naive error-based evaluation.
We find that uncertainty metrics generally align with predictive performance, but Bayesian model evidence does so inconsistently.
arXiv Detail & Related papers (2025-06-23T13:35:06Z)
- Learning (Approximately) Equivariant Networks via Constrained Optimization [25.51476313302483]
Equivariant neural networks are designed to respect symmetries through their architecture.
Real-world data often departs from perfect symmetry because of noise, structural variation, measurement bias, or other symmetry-breaking effects.
We introduce Adaptive Constrained Equivariance (ACE), a constrained optimization approach that starts with a flexible, non-equivariant model.
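A hedged sketch of what a constrained formulation of this kind can look like (names and update rule hypothetical; ACE's actual algorithm is in the paper): the equivariance gap is treated as a constraint and a dual variable is updated alongside the model parameters.

```python
# Hypothetical dual-ascent sketch (not ACE itself): keep the task loss as the
# objective and treat the equivariance gap as a constraint gap(theta) <= eps.
import torch

def constrained_step(model, opt, x, y, perm, task_loss, lam, eps=1e-2, dual_lr=0.01):
    pred = model(x)
    # equivariance gap for a sampled permutation: f(P x) should equal P f(x)
    gap = (model(x[:, perm]) - pred[:, perm]).pow(2).mean()
    loss = task_loss(pred, y) + lam * (gap - eps)
    opt.zero_grad(); loss.backward(); opt.step()
    lam = max(0.0, lam + dual_lr * (gap.item() - eps))  # dual variable update
    return lam
```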
arXiv Detail & Related papers (2025-05-19T18:08:09Z)
- Learning Infinitesimal Generators of Continuous Symmetries from Data [15.42275880523356]
We propose a novel symmetry learning algorithm based on transformations defined with one-parameter groups.
Our method is built upon minimal inductive biases, encompassing not only commonly utilized symmetries rooted in Lie groups but also extending to symmetries derived from nonlinear generators.
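As a rough illustration of the one-parameter-group idea (hypothetical code; the paper's algorithm and regularization differ): a candidate generator A defines transformations exp(tA), and A can be fit so that a reference function f stays invariant along the flow.

```python
# Hypothetical sketch: fit a generator A so that exp(t A) leaves f invariant.
import torch

A = (0.1 * torch.randn(2, 2)).requires_grad_()   # candidate infinitesimal generator
opt = torch.optim.Adam([A], lr=1e-2)

def generator_loss(f, x, t_scale=0.5):
    t = (torch.rand(x.shape[0], 1, 1) * 2 - 1) * t_scale
    g = torch.matrix_exp(t * A)                   # one-parameter subgroup exp(t A)
    x_t = torch.einsum('nij,nj->ni', g, x)
    invariance = (f(x_t) - f(x)).pow(2).mean()
    return invariance + (A.norm() - 1.0).pow(2)   # keep A away from the trivial 0
```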
arXiv Detail & Related papers (2024-10-29T08:28:23Z)
- Symmetry Discovery for Different Data Types [52.2614860099811]
Equivariant neural networks incorporate symmetries into their architecture, achieving higher generalization performance.
We propose LieSD, a method for discovering symmetries via trained neural networks which approximate the input-output mappings of the tasks.
We validate the performance of LieSD on tasks with symmetries such as the two-body problem, the moment of inertia matrix prediction, and top quark tagging.
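The condition behind such trained-network approaches can be stated compactly: an infinitesimal generator pair (A_in, A_out) should satisfy J_f(x) A_in x = A_out f(x) for the trained network f. A hypothetical least-squares residual for it (not LieSD's actual implementation):

```python
# Hypothetical residual for the infinitesimal equivariance condition
# J_f(x) (A_in x) = A_out f(x), evaluated with a jacobian-vector product.
import torch
from torch.func import jvp

def equivariance_residual(f, x, A_in, A_out):
    fx, jv = jvp(f, (x,), (x @ A_in.T,))   # fx = f(x), jv = J_f(x) · (A_in x)
    return (jv - fx @ A_out.T).pow(2).mean()
```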
arXiv Detail & Related papers (2024-10-13T13:39:39Z)
- SymmetryLens: A new candidate paradigm for unsupervised symmetry learning via locality and equivariance [0.0]
We develop a new, unsupervised symmetry learning method that starts with raw data.
We demonstrate that this coupling between symmetry and locality, together with a special optimization technique developed for entropy estimation, results in a highly stable system.
The symmetry actions we consider are group representations; however, we believe the approach has the potential to be generalized to more general, nonlinear actions of non-commutative Lie groups.
arXiv Detail & Related papers (2024-10-07T17:40:51Z)
- Symmetry Breaking and Equivariant Neural Networks [17.740760773905986]
We introduce a novel notion of 'relaxed equivariance'.
We show how to incorporate this relaxation into equivariant multilayer perceptrons (E-MLPs).
The relevance of symmetry breaking is then discussed in various application domains.
arXiv Detail & Related papers (2023-12-14T15:06:48Z)
- Learning Layer-wise Equivariances Automatically using Gradients [66.81218780702125]
Convolutions encode equivariance symmetries into neural networks, leading to better generalisation performance.
However, symmetries provide fixed, hard constraints on the functions a network can represent, need to be specified in advance, and cannot be adapted.
Our goal is to allow flexible symmetry constraints that can automatically be learned from data using gradients.
arXiv Detail & Related papers (2023-10-09T20:22:43Z)
- Learning Invariances in Neural Networks [51.20867785006147]
We show how to parameterize a distribution over augmentations and optimize the training loss simultaneously with respect to the network parameters and augmentation parameters.
We can recover the correct set and extent of invariances on image classification, regression, segmentation, and molecular property prediction from a large space of augmentations.
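A condensed sketch of that recipe (simplified; `rotate` is a hypothetical augmentation helper, and the paper covers a broader affine family): predictions are averaged over sampled augmentations, and a negative regularizer rewards the widest augmentation range the task tolerates.

```python
# Simplified sketch: average predictions over sampled augmentations and
# regularize toward the widest augmentation distribution the task allows.
import torch
import torch.nn.functional as F

def invariance_loss(model, rotate, x, y, width, n_samples=4, reg=0.01):
    """width: learnable scalar controlling the sampled rotation range."""
    logits = 0.0
    for _ in range(n_samples):
        theta = (torch.rand(x.shape[0], device=x.device) * 2 - 1) * width.exp()
        logits = logits + model(rotate(x, theta)) / n_samples
    return F.cross_entropy(logits, y) - reg * width   # wider range is rewarded
```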
arXiv Detail & Related papers (2020-10-22T17:18:48Z)
- Graph Embedding with Data Uncertainty [113.39838145450007]
Spectral-based subspace learning is a common data preprocessing step in many machine learning pipelines.
Most subspace learning methods do not take into consideration possible measurement inaccuracies or artifacts that can lead to data with high uncertainty.
arXiv Detail & Related papers (2020-09-01T15:08:23Z)
- Meta-Learning Symmetries by Reparameterization [63.85144439337671]
We present a method for learning and encoding equivariances into networks by learning corresponding parameter sharing patterns from data.
Our experiments suggest that it can automatically learn to encode equivariances to common transformations used in image processing tasks.
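A minimal sketch of the reparameterization (simplified from the paper): a layer's weights are generated as W = reshape(U v), where the sharing pattern U is meta-learned across tasks and the small vector v holds the task-specific filter parameters; a suitable U reproduces, e.g., convolutional weight tying.

```python
# Simplified sketch: weights generated from a meta-learned sharing pattern U
# and a small task-specific parameter vector v (W = reshape(U @ v)).
import torch
import torch.nn as nn

class ReparamLinear(nn.Module):
    def __init__(self, in_dim, out_dim, n_filter_params):
        super().__init__()
        self.U = nn.Parameter(0.01 * torch.randn(out_dim * in_dim, n_filter_params))
        self.v = nn.Parameter(torch.randn(n_filter_params))
        self.out_dim, self.in_dim = out_dim, in_dim

    def forward(self, x):
        W = (self.U @ self.v).view(self.out_dim, self.in_dim)
        return x @ W.T
```

In the meta-learning loop, U would be updated in the outer loop while v is adapted per task in the inner loop.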
arXiv Detail & Related papers (2020-07-06T17:59:54Z)
- Inverse Learning of Symmetries [71.62109774068064]
We learn the symmetry transformation with a model consisting of two latent subspaces.
Our approach is based on the deep information bottleneck in combination with a continuous mutual information regulariser.
Our model outperforms state-of-the-art methods on artificial and molecular datasets.
arXiv Detail & Related papers (2020-02-07T13:48:52Z)