An Algorithm for Computing with Brauer's Group Equivariant Neural
Network Layers
- URL: http://arxiv.org/abs/2304.14165v1
- Date: Thu, 27 Apr 2023 13:06:07 GMT
- Title: An Algorithm for Computing with Brauer's Group Equivariant Neural
Network Layers
- Authors: Edward Pearce-Crump
- Abstract summary: We present an algorithm for multiplying a vector by any weight matrix for each of these groups, using category theoretic constructions to implement the procedure.
We show that our approach extends to the symmetric group, $S_n$, recovering the algorithm of arXiv:2303.06208 in the process.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The learnable, linear neural network layers between tensor power spaces of
$\mathbb{R}^{n}$ that are equivariant to the orthogonal group, $O(n)$, the
special orthogonal group, $SO(n)$, and the symplectic group, $Sp(n)$, were
characterised in arXiv:2212.08630. We present an algorithm for multiplying a
vector by any weight matrix for each of these groups, using category theoretic
constructions to implement the procedure. We achieve a significant reduction in
computational cost compared with a naive implementation by making use of
Kronecker product matrices to perform the multiplication. We show that our
approach extends to the symmetric group, $S_n$, recovering the algorithm of
arXiv:2303.06208 in the process.
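The computational saving the abstract describes comes from multiplying by Kronecker-product matrices without ever materialising the full product. A minimal NumPy sketch of the standard identity $(A \otimes B)\,\mathrm{vec}(X) = \mathrm{vec}(A X B^{\top})$ (row-major vec convention) illustrates the general technique; this is an assumption-laden illustration, not the paper's actual implementation:

```python
import numpy as np

def kron_mv_naive(A, B, v):
    """Materialise A ⊗ B (shape pq x mn), then multiply: expensive."""
    return np.kron(A, B) @ v

def kron_mv_fast(A, B, v):
    """Same product via (A ⊗ B) vec(X) = vec(A X B^T), row-major vec.

    Never forms the Kronecker product: two small matrix products
    replace one multiplication by a pq x mn matrix.
    """
    m, n = A.shape[1], B.shape[1]
    X = v.reshape(m, n)               # un-vec (row-major)
    return (A @ X @ B.T).reshape(-1)  # re-vec the result
```

For layers between tensor power spaces, the same trick can be iterated: a $k$-fold Kronecker product is applied to a vector reshaped as an order-$k$ tensor, one factor at a time.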
Related papers
- Do you know what q-means? [50.045011844765185]
Clustering is one of the most important tools for analysis of large datasets.
We present an improved version of the "$q$-means" algorithm for clustering.
We also present a "dequantized" algorithm for $\varepsilon$-$k$-means which runs in $O\big(\frac{k^{2}}{\varepsilon^{2}}(\sqrt{kd} + \log(Nd))\big)$ time.
arXiv Detail & Related papers (2023-08-18T17:52:12Z) - Efficiently Learning One-Hidden-Layer ReLU Networks via Schur
Polynomials [50.90125395570797]
We study the problem of PAC learning a linear combination of $k$ ReLU activations under the standard Gaussian distribution on $\mathbb{R}^{d}$ with respect to the square loss.
Our main result is an efficient algorithm for this learning task with sample and computational complexity $(dk/\epsilon)^{O(k)}$, where $\epsilon > 0$ is the target accuracy.
arXiv Detail & Related papers (2023-07-24T14:37:22Z) - Discovering Sparse Representations of Lie Groups with Machine Learning [55.41644538483948]
We show that our method reproduces the canonical representations of the generators of the Lorentz group.
This approach is completely general and can be used to find the infinitesimal generators for any Lie group.
arXiv Detail & Related papers (2023-02-10T17:12:05Z) - How Jellyfish Characterise Alternating Group Equivariant Neural Networks [0.0]
We find a basis for the learnable, linear, $A_n$-equivariant layer functions between such tensor power spaces in the standard basis of $\mathbb{R}^{n}$.
We also describe how our approach generalises to the construction of neural networks that are equivariant to local symmetries.
arXiv Detail & Related papers (2023-01-24T17:39:10Z) - Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras
from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z) - Brauer's Group Equivariant Neural Networks [0.0]
We provide a full characterisation of all of the possible group equivariant neural networks whose layers are some tensor power of $\mathbb{R}^{n}$.
We find a spanning set of matrices for the learnable, linear, equivariant layer functions between such tensor power spaces.
arXiv Detail & Related papers (2022-12-16T18:08:51Z) - A Practical Method for Constructing Equivariant Multilayer Perceptrons
for Arbitrary Matrix Groups [115.58550697886987]
We provide a completely general algorithm for solving for the equivariant layers of matrix groups.
In addition to recovering solutions from other works as special cases, we construct multilayer perceptrons equivariant to multiple groups that have never been tackled before.
Our approach outperforms non-equivariant baselines, with applications to particle physics and dynamical systems.
arXiv Detail & Related papers (2021-04-19T17:21:54Z) - Householder Dice: A Matrix-Free Algorithm for Simulating Dynamics on
Gaussian and Random Orthogonal Ensembles [12.005731086591139]
Householder Dice (HD) is an algorithm for simulating dynamics on dense random matrix ensembles with translation-invariant properties.
The memory and computational costs of the HD algorithm are $\mathcal{O}(nT)$ and $\mathcal{O}(nT^{2})$, respectively.
Numerical results demonstrate the promise of the HD algorithm as a new computational tool in the study of high-dimensional random systems.
arXiv Detail & Related papers (2021-01-19T04:50:53Z) - Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
arXiv Detail & Related papers (2020-03-30T15:37:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.