Lie Group Symmetry Discovery and Enforcement Using Vector Fields
- URL: http://arxiv.org/abs/2505.08219v1
- Date: Tue, 13 May 2025 04:24:46 GMT
- Title: Lie Group Symmetry Discovery and Enforcement Using Vector Fields
- Authors: Ben Shaw, Sasidhar Kunapuli, Abram Magner, Kevin R. Moon
- Abstract summary: We extend the notion of non-affine symmetry discovery to functions defined by neural networks. We introduce symmetry enforcement of smooth models using vector fields.
- Score: 4.212344009251363
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Symmetry-informed machine learning can exhibit advantages over machine learning that fails to account for symmetry. Additionally, recent attention has been given to continuous symmetry discovery using vector fields that serve as infinitesimal generators of Lie group symmetries. In this paper, we extend the notion of non-affine symmetry discovery to functions defined by neural networks. We further extend work in this area by introducing symmetry enforcement of smooth models using vector fields. Finally, we extend work on symmetry discovery using vector fields by providing both theoretical and experimental material on restricting the symmetry search space to infinitesimal isometries.
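The abstract's central object, a vector field acting as an infinitesimal generator of a Lie group symmetry, can be illustrated with a minimal numerical sketch (not the authors' code): a vector field V generates a symmetry of a smooth function f when the Lie derivative V(f) = Σᵢ Vᵢ ∂f/∂xᵢ vanishes at every point.

```python
# Minimal sketch, assuming nothing beyond the textbook symmetry
# condition V(f) = <V(x), grad f(x)> = 0; the function names here
# are illustrative, not from the paper.
import numpy as np

def lie_derivative(f_grad, V, points):
    """Evaluate V(f) = <V(x), grad f(x)> at each sample point."""
    return np.array([np.dot(V(p), f_grad(p)) for p in points])

# Example: f(x, y) = x^2 + y^2 is rotation-invariant; the rotation
# generator is the vector field V(x, y) = (-y, x).
f_grad = lambda p: np.array([2 * p[0], 2 * p[1]])
V_rot = lambda p: np.array([-p[1], p[0]])

rng = np.random.default_rng(0)
samples = rng.normal(size=(100, 2))
residual = lie_derivative(f_grad, V_rot, samples)
print(np.max(np.abs(residual)))  # vanishes: V_rot generates a symmetry of f
```

Symmetry discovery in this setting amounts to searching for vector fields that drive such a residual to zero over the data, while symmetry enforcement penalizes a model whose residual is nonzero.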
Related papers
- Learning Infinitesimal Generators of Continuous Symmetries from Data [15.42275880523356]
We propose a novel symmetry learning algorithm based on transformations defined with one-parameter groups. Our method is built upon minimal inductive biases, encompassing not only commonly utilized symmetries rooted in Lie groups but also extending to symmetries derived from nonlinear generators.
arXiv Detail & Related papers (2024-10-29T08:28:23Z)
- SymmetryLens: Unsupervised Symmetry Learning via Locality and Density Preservation [0.0]
We develop a new unsupervised symmetry learning method that starts with raw data and provides the minimal generator of an underlying Lie group of symmetries. The method is able to learn the pixel translation operator from a dataset with only an approximate translation symmetry. We demonstrate that this coupling between symmetry and locality, together with an optimization technique developed for entropy estimation, results in a stable system.
arXiv Detail & Related papers (2024-10-07T17:40:51Z)
- Latent Space Symmetry Discovery [31.28537696897416]
We propose a novel generative model, Latent LieGAN, which can discover symmetries of nonlinear group actions.
We show that our model can express nonlinear symmetries under some conditions about the group action.
LaLiGAN also results in a well-structured latent space that is useful for downstream tasks including equation discovery and long-term forecasting.
arXiv Detail & Related papers (2023-09-29T19:33:01Z)
- Geometric Neural Diffusion Processes [55.891428654434634]
We extend the framework of diffusion models to incorporate a series of geometric priors in infinite-dimension modelling.
We show that with these conditions, the generative functional model admits the same symmetry.
arXiv Detail & Related papers (2023-07-11T16:51:38Z)
- Accelerated Discovery of Machine-Learned Symmetries: Deriving the Exceptional Lie Groups G2, F4 and E6 [55.41644538483948]
This letter introduces two improved algorithms that significantly speed up the discovery of symmetry transformations.
Given the significant complexity of the exceptional Lie groups, our results demonstrate that this machine-learning method for discovering symmetries is completely general and can be applied to a wide variety of labeled datasets.
arXiv Detail & Related papers (2023-07-10T20:25:44Z)
- Oracle-Preserving Latent Flows [58.720142291102135]
We develop a methodology for the simultaneous discovery of multiple nontrivial continuous symmetries across an entire labelled dataset.
The symmetry transformations and the corresponding generators are modeled with fully connected neural networks trained with a specially constructed loss function.
The two new elements in this work are the use of a reduced-dimensionality latent space and the generalization to transformations invariant with respect to high-dimensional oracles.
arXiv Detail & Related papers (2023-02-02T00:13:32Z)
- Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z)
- Neural Bregman Divergences for Distance Learning [60.375385370556145]
We propose a new approach to learning arbitrary Bregman divergences in a differentiable manner via input convex neural networks.
We show that our method more faithfully learns divergences over a set of both new and previously studied tasks.
Our tests further extend to known asymmetric, but non-Bregman tasks, where our method still performs competitively despite misspecification.
arXiv Detail & Related papers (2022-06-09T20:53:15Z)
- Machine-learning hidden symmetries [0.0]
We present an automated method for finding hidden symmetries, defined as symmetries that become manifest only in a new coordinate system that must be discovered.
Its core idea is to quantify asymmetry as violation of certain partial differential equations, and to numerically minimize such violation over the space of all invertible transformations, parametrized as invertible neural networks.
arXiv Detail & Related papers (2021-09-20T17:55:02Z)
- Symmetry Breaking in Symmetric Tensor Decomposition [44.181747424363245]
We consider the nonconvex optimization problem associated with computing the rank decomposition of symmetric tensors.
We show that the critical points of the loss function can be detected by standard methods.
arXiv Detail & Related papers (2021-03-10T18:11:22Z)
- A Differential Geometry Perspective on Orthogonal Recurrent Models [56.09491978954866]
We employ tools and insights from differential geometry to offer a novel perspective on orthogonal RNNs.
We show that orthogonal RNNs may be viewed as optimizing in the space of divergence-free vector fields.
Motivated by this observation, we study a new recurrent model, which spans the entire space of vector fields.
arXiv Detail & Related papers (2021-02-18T19:39:22Z)
- Finding Symmetry Breaking Order Parameters with Euclidean Neural Networks [2.735801286587347]
We demonstrate that symmetry equivariant neural networks uphold Curie's principle and can be used to articulate many symmetry-relevant scientific questions into simple optimization problems.
We prove these properties mathematically and demonstrate them numerically by training a Euclidean symmetry equivariant neural network to learn symmetry-breaking input to deform a square into a rectangle and to generate octahedra tilting patterns in perovskites.
arXiv Detail & Related papers (2020-07-04T17:24:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.