How Jellyfish Characterise Alternating Group Equivariant Neural Networks
- URL: http://arxiv.org/abs/2301.10152v2
- Date: Sun, 18 Jun 2023 10:10:23 GMT
- Title: How Jellyfish Characterise Alternating Group Equivariant Neural Networks
- Authors: Edward Pearce-Crump
- Abstract summary: We find a basis of matrices for the learnable, linear, $A_n$-equivariant layer functions between such tensor power spaces in the standard basis of $\mathbb{R}^{n}$.
We also describe how our approach generalises to the construction of neural networks that are equivariant to local symmetries.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We provide a full characterisation of all of the possible alternating group
($A_n$) equivariant neural networks whose layers are some tensor power of
$\mathbb{R}^{n}$. In particular, we find a basis of matrices for the learnable,
linear, $A_n$-equivariant layer functions between such tensor power spaces in
the standard basis of $\mathbb{R}^{n}$. We also describe how our approach
generalises to the construction of neural networks that are equivariant to
local symmetries.
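The layer functions characterised above are linear maps that commute with the $A_n$-action on tensor powers of $\mathbb{R}^{n}$. As a minimal illustrative sketch (not the paper's basis construction), the snippet below numerically checks whether a candidate weight matrix is $A_n$-equivariant as a map $\mathbb{R}^{n} \to \mathbb{R}^{n}$ by testing commutation with the permutation matrices of the 3-cycles $(0\,1\,k)$, which generate $A_n$; all function names are illustrative assumptions.

```python
# A minimal illustrative sketch (not the paper's construction): a linear map
# W : R^n -> R^n is A_n-equivariant iff P_sigma @ W == W @ P_sigma for every
# even permutation sigma. Since the 3-cycles (0 1 k), k = 2..n-1, generate
# A_n, it suffices to check the constraint on those generators.
import numpy as np

def three_cycle(n, k):
    """The permutation 0 -> 1 -> k -> 0, fixing all other points."""
    p = list(range(n))
    p[0], p[1], p[k] = 1, k, 0
    return p

def perm_matrix(p):
    """Permutation matrix with (P x)_i = x_{p[i]}."""
    P = np.zeros((len(p), len(p)))
    P[np.arange(len(p)), p] = 1.0
    return P

def is_An_equivariant(W, tol=1e-10):
    n = W.shape[0]
    return all(
        np.allclose(perm_matrix(three_cycle(n, k)) @ W,
                    W @ perm_matrix(three_cycle(n, k)), atol=tol)
        for k in range(2, n)
    )

n = 5
print(is_An_equivariant(np.eye(n)))              # True: the identity commutes with every P_sigma
print(is_An_equivariant(np.ones((n, n))))        # True: so does the all-ones matrix
print(is_An_equivariant(np.random.randn(n, n)))  # almost surely False
```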
Related papers
- Categorification of Group Equivariant Neural Networks [0.0]
We show how category theory can be used to understand and work with the linear layer functions of group equivariant neural networks.
By using category theoretic constructions, we build a richer structure that is not seen in the original formulation of these neural networks.
arXiv Detail & Related papers (2023-04-27T12:39:28Z) - Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras
from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z) - Connecting Permutation Equivariant Neural Networks and Partition Diagrams [0.0]
We show that all of the weight matrices that appear in permutation equivariant neural networks can be obtained from Schur-Weyl duality.
In particular, we adapt Schur-Weyl duality to derive a simple, diagrammatic method for calculating the weight matrices themselves.
arXiv Detail & Related papers (2022-12-16T18:48:54Z) - Brauer's Group Equivariant Neural Networks [0.0]
We provide a full characterisation of all of the possible group equivariant neural networks whose layers are some tensor power of $\mathbb{R}^{n}$.
We find a spanning set of matrices for the learnable, linear, equivariant layer functions between such tensor power spaces.
arXiv Detail & Related papers (2022-12-16T18:08:51Z) - Equivalence Between SE(3) Equivariant Networks via Steerable Kernels and
Group Convolution [90.67482899242093]
A wide range of techniques have been proposed in recent years for designing neural networks for 3D data that are equivariant under rotation and translation of the input.
We provide an in-depth analysis of both methods and their equivalence and relate the two constructions to multiview convolutional networks.
We also derive new TFN non-linearities from our equivalence principle and test them on practical benchmark datasets.
arXiv Detail & Related papers (2022-11-29T03:42:11Z) - Geometric Deep Learning and Equivariant Neural Networks [0.9381376621526817]
We survey the mathematical foundations of geometric deep learning, focusing on group equivariant and gauge equivariant neural networks.
We develop gauge equivariant convolutional neural networks on an arbitrary manifold $\mathcal{M}$ using principal bundles with structure group $K$ and equivariant maps between sections of associated vector bundles.
We analyze several applications of this formalism, including semantic segmentation and object detection networks.
arXiv Detail & Related papers (2021-05-28T15:41:52Z) - A Practical Method for Constructing Equivariant Multilayer Perceptrons
for Arbitrary Matrix Groups [115.58550697886987]
We provide a completely general algorithm for solving for the equivariant layers of matrix groups (a numerical sketch of this constraint-solving idea appears after this list).
In addition to recovering solutions from other works as special cases, we construct multilayer perceptrons equivariant to multiple groups that have never been tackled before.
Our approach outperforms non-equivariant baselines, with applications to particle physics and dynamical systems.
arXiv Detail & Related papers (2021-04-19T17:21:54Z) - LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z) - Stability of Algebraic Neural Networks to Small Perturbations [179.55535781816343]
Algebraic neural networks (AlgNNs) are composed of a cascade of layers, each one associated to an algebraic signal model.
We show how any architecture that uses a formal notion of convolution can be stable beyond particular choices of the shift operator.
arXiv Detail & Related papers (2020-10-22T09:10:16Z) - Learning Over-Parametrized Two-Layer ReLU Neural Networks beyond NTK [58.5766737343951]
We consider the dynamics of gradient descent for learning a two-layer neural network.
We show that an over-parametrized two-layer ReLU neural network trained with gradient descent can provably achieve low loss beyond the Neural Tangent Kernel (NTK) regime.
arXiv Detail & Related papers (2020-07-09T07:09:28Z)
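A hedged numerical sketch of the constraint-solving idea referenced in the entry on equivariant multilayer perceptrons above (and closely related to the weight-matrix computations in the permutation equivariant entry): the equivariance condition $\rho_{\text{out}}(g)\,W = W\,\rho_{\text{in}}(g)$ is linear in $W$, so stacking it over a set of group generators and taking the nullspace with an SVD yields a basis of equivariant weight matrices. The function names and the SVD-based nullspace are assumptions for illustration, not any author's implementation.

```python
# A hedged sketch (illustrative names, not any paper's implementation):
# solve rho_out(g) W = W rho_in(g) for all group generators g by stacking the
# linear constraints on vec(W) and computing the nullspace via an SVD.
import numpy as np

def equivariant_basis(gens_in, gens_out, tol=1e-8):
    """Basis of {W : rho_out(g) @ W == W @ rho_in(g) for every generator g}."""
    d_in, d_out = gens_in[0].shape[0], gens_out[0].shape[0]
    blocks = []
    for g_in, g_out in zip(gens_in, gens_out):
        # Row-major vec: vec(A X B) = (A kron B^T) vec(X), so the constraint
        # rho_out W - W rho_in = 0 becomes this block acting on W.reshape(-1).
        blocks.append(np.kron(g_out, np.eye(d_in)) - np.kron(np.eye(d_out), g_in.T))
    C = np.vstack(blocks)
    _, s, Vt = np.linalg.svd(C)
    sv = np.concatenate([s, np.zeros(Vt.shape[0] - len(s))])
    return [Vt[i].reshape(d_out, d_in) for i in range(Vt.shape[0]) if sv[i] < tol]

def perm_matrix(p):
    """Permutation matrix with (P x)_i = x_{p[i]}."""
    P = np.zeros((len(p), len(p)))
    P[np.arange(len(p)), p] = 1.0
    return P

# Example: maps R^4 -> R^4 equivariant to S_4 acting by permutation matrices.
# S_4 is generated by a transposition and a 4-cycle.
gens = [perm_matrix([1, 0, 2, 3]), perm_matrix([1, 2, 3, 0])]
basis = equivariant_basis(gens, gens)
print(len(basis))  # 2: the identity and the all-ones matrix span this space
```

For higher-order tensor power spaces this brute-force nullspace grows quickly, which is what the explicit basis and spanning-set constructions described in the papers above avoid.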