Discovering Sparse Representations of Lie Groups with Machine Learning
- URL: http://arxiv.org/abs/2302.05383v1
- Date: Fri, 10 Feb 2023 17:12:05 GMT
- Title: Discovering Sparse Representations of Lie Groups with Machine Learning
- Authors: Roy T. Forestano, Konstantin T. Matchev, Katia Matcheva, Alexander
Roman, Eyup B. Unlu, Sarunas Verner
- Abstract summary: We show that our method reproduces the canonical representations of the generators of the Lorentz group.
This approach is completely general and can be used to find the infinitesimal generators for any Lie group.
- Score: 55.41644538483948
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent work has used deep learning to derive symmetry transformations, which
preserve conserved quantities, and to obtain the corresponding algebras of
generators. In this letter, we extend this technique to derive sparse
representations of arbitrary Lie algebras. We show that our method reproduces
the canonical (sparse) representations of the generators of the Lorentz group,
as well as the $U(n)$ and $SU(n)$ families of Lie groups. This approach is
completely general and can be used to find the infinitesimal generators for any
Lie group.
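The letter itself contains no code, but the underlying recipe is compact. Below is a minimal sketch (my own reconstruction, not the authors' implementation, which learns transformations from labeled datasets): it recovers a single sparse generator of the Lorentz algebra by penalizing violations of the infinitesimal invariance condition $G^T \eta + \eta G = 0$ for the Minkowski metric $\eta$, together with an L1 sparsity term. The normalization term and all hyperparameters are illustrative choices.

```python
import torch

# Minkowski metric eta = diag(+1, -1, -1, -1)
eta = torch.diag(torch.tensor([1.0, -1.0, -1.0, -1.0]))

# Learnable candidate generator G of the Lorentz algebra so(1, 3).
G = torch.randn(4, 4, requires_grad=True)
opt = torch.optim.Adam([G], lr=1e-2)

for step in range(5000):
    opt.zero_grad()
    # Lambda = I + eps*G preserves eta iff  G^T @ eta + eta @ G = 0.
    invariance = (G.T @ eta + eta @ G).pow(2).sum()
    # Normalization keeps the optimizer away from the trivial G = 0.
    normalization = ((G * G).sum() - 2.0).pow(2)
    # L1 penalty drives the learned representation toward a sparse basis.
    sparsity = G.abs().sum()
    (invariance + normalization + 0.05 * sparsity).backward()
    opt.step()

print(G.detach().numpy().round(2))  # typically a single boost or rotation generator
```

Restarting from different random initializations lands, in this toy setting, on different sparse members of the algebra (rotations or boosts); as the abstract indicates, the paper's method recovers the full canonical set of generators.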
Related papers
- Lie Group Decompositions for Equivariant Neural Networks [12.139222986297261]
We show how convolution kernels can be parametrized to build models equivariant with respect to affine transformations.
We evaluate the robustness and out-of-distribution generalisation capability of our model on the benchmark affine-invariant classification task.
arXiv Detail & Related papers (2023-10-17T16:04:33Z)
- Enriching Diagrams with Algebraic Operations [49.1574468325115]
We extend diagrammatic reasoning in monoidal categories with algebraic operations and equations.
We show how this construction can be used for diagrammatic reasoning of noise in quantum systems.
arXiv Detail & Related papers (2023-10-17T14:12:39Z)
- Accelerated Discovery of Machine-Learned Symmetries: Deriving the Exceptional Lie Groups G2, F4 and E6 [55.41644538483948]
This letter introduces two improved algorithms that significantly speed up the discovery of symmetry transformations.
Given the significant complexity of the exceptional Lie groups, our results demonstrate that this machine-learning method for discovering symmetries is completely general and can be applied to a wide variety of labeled datasets.
arXiv Detail & Related papers (2023-07-10T20:25:44Z)
- An Algorithm for Computing with Brauer's Group Equivariant Neural Network Layers [0.0]
We present an algorithm for multiplying a vector by any weight matrix of such an equivariant layer, using category theoretic constructions to implement the procedure.
We show that our approach extends to the symmetric group, $S_n$, recovering the algorithm of arXiv:2303.06208 in the process.
arXiv Detail & Related papers (2023-04-27T13:06:07Z)
- Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators (a toy sketch follows this entry).
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z)
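A rough illustration of the setup in the entry above (my own toy reconstruction under simplifying assumptions, not the paper's code): a generator is learned directly from a labeled dataset by demanding that an oracle quantity, chosen here to be the Minkowski norm, stay invariant to first order. The paper models the transformations with fully connected networks; the single linear generator, the choice of oracle, and all hyperparameters below are simplifications of mine.

```python
import torch

# Toy labeled dataset: points in R^4 carrying a conserved "oracle" value,
# chosen here to be the Minkowski norm x^T @ eta @ x.
eta = torch.diag(torch.tensor([1.0, -1.0, -1.0, -1.0]))
x = torch.randn(1024, 4)
oracle = torch.einsum("bi,ij,bj->b", x, eta, x)

# A generator G induces the infinitesimal transformation x -> x + eps * x @ G.T.
G = torch.randn(4, 4, requires_grad=True)
eps = 1e-3
opt = torch.optim.Adam([G], lr=1e-2)

for step in range(3000):
    opt.zero_grad()
    x_t = x + eps * x @ G.T
    oracle_t = torch.einsum("bi,ij,bj->b", x_t, eta, x_t)
    # First-order invariance of the oracle across the dataset.
    invariance = ((oracle_t - oracle) / eps).pow(2).mean()
    # Normalization excludes the trivial solution G = 0.
    loss = invariance + ((G * G).sum() - 2.0).pow(2)
    loss.backward()
    opt.step()
```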
- A Practical Method for Constructing Equivariant Multilayer Perceptrons for Arbitrary Matrix Groups [115.58550697886987]
We provide a completely general algorithm for solving for the equivariant layers of matrix groups (a toy sketch of the constraint-solving step follows this entry).
In addition to recovering solutions from other works as special cases, we construct multilayer perceptrons equivariant to multiple groups that have never been tackled before.
Our approach outperforms non-equivariant baselines, with applications to particle physics and dynamical systems.
arXiv Detail & Related papers (2021-04-19T17:21:54Z)
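The entry above solves for equivariant layers of arbitrary matrix groups; the sketch below shows the core linear-algebra step in minimal form (my reconstruction, not the paper's library). Infinitesimal equivariance of a linear layer is a homogeneous linear system on the weights, and its solution space can be read off from an SVD nullspace; the SO(2) example and the tolerance are illustrative choices.

```python
import numpy as np

# Infinitesimal equivariance of a linear layer W: R^n -> R^m reads
#     A_out @ W - W @ A_in = 0   for every Lie algebra generator.
# Vectorizing with vec(A @ W @ B) = (B^T kron A) @ vec(W) turns this
# into a nullspace problem. Example: SO(2) acting on R^2 (A_in = A_out = A).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])  # generator of rotations in the plane

n = m = 2  # input and output dimensions
# C @ vec(W) = 0 encodes A @ W - W @ A = 0 (column-stacking convention).
C = np.kron(np.eye(n), A) - np.kron(A.T, np.eye(m))

# The nullspace of C is spanned by the right-singular vectors with zero
# singular value; each one reshapes into an equivariant weight matrix.
_, s, vt = np.linalg.svd(C)
basis = [vt[i].reshape(m, n, order="F") for i in range(len(s)) if s[i] < 1e-10]
print(f"{len(basis)} equivariant basis matrices:")
for W in basis:
    print(np.round(W, 3))
```

For SO(2) this returns a two-dimensional solution space spanned by the identity and the rotation generator, i.e. exactly the linear maps that commute with planar rotations.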
- LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.