Learning Irreducible Representations of Noncommutative Lie Groups
- URL: http://arxiv.org/abs/2006.00724v2
- Date: Sun, 4 Oct 2020 00:58:16 GMT
- Title: Learning Irreducible Representations of Noncommutative Lie Groups
- Authors: Noah Shutty and Casimir Wierzynski
- Abstract summary: Recent work has constructed neural networks that are equivariant to continuous symmetry groups such as 2D and 3D rotations.
We present two contributions motivated by frontier applications of equivariance beyond rotations and translations.
- Score: 3.1727619150610837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has constructed neural networks that are equivariant to
continuous symmetry groups such as 2D and 3D rotations. This is accomplished
using explicit group representations to derive the equivariant kernels and
nonlinearities. We present two contributions motivated by frontier applications
of equivariance beyond rotations and translations. First, we relax the
requirement for explicit Lie group representations, presenting a novel
algorithm that finds irreducible representations of noncommutative Lie groups
given only the structure constants of the associated Lie algebra. Second, we
demonstrate that Lorentz-equivariance is a useful prior for object-tracking
tasks and construct the first object-tracking model equivariant to the
Poincaré group.
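The first contribution takes as input only the structure constants c^k_ij of a Lie algebra, i.e. the coefficients in [X_i, X_j] = sum_k c^k_ij X_k, and searches for matrices satisfying those commutation relations. The paper's actual algorithm is not reproduced here; as a minimal illustration of the objective being satisfied, the NumPy sketch below checks that the standard generators of so(3) obey the commutation relations encoded by its structure constants (the Levi-Civita symbol). The helper name `commutation_residual` is my own.

```python
import numpy as np

# Structure constants of so(3): [L_i, L_j] = sum_k c[i, j, k] L_k,
# with c[i, j, k] = epsilon_{ijk} (the Levi-Civita symbol).
c = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    c[i, j, k] = 1.0
    c[j, i, k] = -1.0

# The defining 3-dimensional representation: (L_i)_{jk} = -epsilon_{ijk}.
L = np.array([-c[i] for i in range(3)])  # shape (3, 3, 3)

def commutation_residual(X, c):
    """Max deviation of [X_i, X_j] - sum_k c[i, j, k] X_k over all pairs."""
    res = 0.0
    for i in range(len(X)):
        for j in range(len(X)):
            bracket = X[i] @ X[j] - X[j] @ X[i]
            target = np.einsum('k,kab->ab', c[i, j], X)
            res = max(res, np.abs(bracket - target).max())
    return res

print(commutation_residual(L, c))  # 0.0: L satisfies the so(3) relations
```

An irreducible-representation search of the kind described in the abstract can be understood as driving this residual to zero over candidate matrices X_i.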
Related papers
- Relative Representations: Topological and Geometric Perspectives [53.88896255693922]
Relative representations are an established approach to zero-shot model stitching.
We introduce a normalization procedure in the relative transformation, resulting in invariance to non-isotropic rescalings and permutations.
Second, we propose to deploy topological densification when fine-tuning relative representations, a topological regularization loss encouraging clustering within classes.
arXiv Detail & Related papers (2024-09-17T08:09:22Z)
- A Characterization Theorem for Equivariant Networks with Point-wise Activations [13.00676132572457]
We prove that rotation-equivariant networks can only be invariant, as it happens for any network which is equivariant with respect to connected compact groups.
We show that feature spaces of disentangled steerable convolutional neural networks are trivial representations.
arXiv Detail & Related papers (2024-01-17T14:30:46Z)
- Lie Group Decompositions for Equivariant Neural Networks [12.139222986297261]
We show how convolution kernels can be parametrized to build models equivariant with respect to affine transformations.
We evaluate the robustness and out-of-distribution generalisation capability of our model on the benchmark affine-invariant classification task.
arXiv Detail & Related papers (2023-10-17T16:04:33Z)
- Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z)
- Capacity of Group-invariant Linear Readouts from Equivariant Representations: How Many Objects can be Linearly Classified Under All Possible Views? [21.06669693699965]
We find that the fraction of separable dichotomies is determined by the dimension of the space that is fixed by the group action.
We show how this relation extends to operations such as convolutions, element-wise nonlinearities, and global and local pooling.
arXiv Detail & Related papers (2021-10-14T15:46:53Z)
- Convolutional Filtering and Neural Networks with Non Commutative Algebras [153.20329791008095]
We study the generalization of noncommutative convolutional neural networks.
We show that noncommutative convolutional architectures can be stable to deformations on the space of operators.
arXiv Detail & Related papers (2021-08-23T04:22:58Z)
- Commutative Lie Group VAE for Disentanglement Learning [96.32813624341833]
We view disentanglement learning as discovering an underlying structure that equivariantly reflects the factorized variations shown in data.
A simple model named Commutative Lie Group VAE is introduced to realize the group-based disentanglement learning.
Experiments show that our model can effectively learn disentangled representations without supervision, and can achieve state-of-the-art performance without extra constraints.
arXiv Detail & Related papers (2021-06-07T07:03:14Z)
- GroupifyVAE: from Group-based Definition to VAE-based Unsupervised Representation Disentanglement [91.9003001845855]
VAE-based unsupervised disentanglement cannot be achieved without introducing additional inductive biases.
We address VAE-based unsupervised disentanglement by leveraging the constraints derived from the Group Theory based definition as the non-probabilistic inductive bias.
We train 1800 models covering the most prominent VAE-based models on five datasets to verify the effectiveness of our method.
arXiv Detail & Related papers (2021-02-20T09:49:51Z)
- LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z)
- Applying Lie Groups Approaches for Rigid Registration of Point Clouds [3.308743964406687]
We use Lie groups and Lie algebras to find the rigid transformation that best registers two surfaces represented by point clouds.
The so-called pairwise rigid registration can be formulated by comparing intrinsic second-order orientation tensors.
We show promising results when embedding orientation tensor fields in Lie algebras.
arXiv Detail & Related papers (2020-06-23T21:26:57Z)
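Lie-group approaches to rigid registration rely on the fact that rotations can be optimized over the flat Lie algebra so(3), via the exponential map, instead of over the constrained manifold SO(3). The sketch below is a minimal, hypothetical illustration of that idea (not the tensor-based formulation of the paper above): it recovers a known rotation between two point clouds by finite-difference gradient descent on the three algebra coordinates.

```python
import numpy as np

def hat(w):
    """Hat map R^3 -> so(3): vector w to a skew-symmetric matrix."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

def expm_so3(w):
    """Exponential map so(3) -> SO(3) via Rodrigues' formula."""
    theta = np.linalg.norm(w)
    K = hat(w)
    if theta < 1e-12:
        return np.eye(3) + K
    return (np.eye(3) + (np.sin(theta) / theta) * K
            + ((1.0 - np.cos(theta)) / theta**2) * (K @ K))

def mse(w, P, Q):
    """Mean squared registration error for rotation exp(hat(w))."""
    return float(np.mean((expm_so3(w) @ P - Q) ** 2))

rng = np.random.default_rng(0)
P = rng.normal(size=(3, 50))          # source point cloud
w_true = np.array([0.3, -0.2, 0.1])   # ground-truth algebra coordinates
Q = expm_so3(w_true) @ P              # rotated target cloud

# Unconstrained gradient descent on the 3 algebra coordinates,
# using forward finite differences for the gradient.
w, lr, eps = np.zeros(3), 0.1, 1e-6
for _ in range(300):
    g = np.array([(mse(w + eps * np.eye(3)[i], P, Q) - mse(w, P, Q)) / eps
                  for i in range(3)])
    w -= lr * g

print(mse(w, P, Q))  # close to 0: the rotation is recovered
```

The payoff of the parametrization is that the optimizer never has to project back onto SO(3): every iterate exp(hat(w)) is exactly a rotation matrix by construction.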
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this information and is not responsible for any consequences arising from its use.