Neural Group Actions
- URL: http://arxiv.org/abs/2010.03733v1
- Date: Thu, 8 Oct 2020 02:27:05 GMT
- Title: Neural Group Actions
- Authors: Span Spanbauer, Luke Sciarappa
- Abstract summary: We introduce an algorithm for designing Neural Group Actions, collections of deep neural network architectures which model symmetric transformations satisfying the laws of a given finite group.
We demonstrate experimentally that a Neural Group Action for the quaternion group $Q_8$ can learn how a set of nonuniversal quantum gates satisfying the $Q_8$ group laws act on single qubit quantum states.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce an algorithm for designing Neural Group Actions, collections of
deep neural network architectures which model symmetric transformations
satisfying the laws of a given finite group. This generalizes involutive neural
networks $\mathcal{N}$, which satisfy $\mathcal{N}(\mathcal{N}(x))=x$ for any
data $x$, the group law of $\mathbb{Z}_2$. We show how to optionally enforce an
additional constraint that the group action be volume-preserving. We
conjecture, by analogy to a universality result for involutive neural networks,
that generative models built from Neural Group Actions are universal
approximators for collections of probabilistic transitions adhering to the
group laws. We demonstrate experimentally that a Neural Group Action for the
quaternion group $Q_8$ can learn how a set of nonuniversal quantum gates
satisfying the $Q_8$ group laws act on single qubit quantum states.
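For concreteness, the constraint being modeled is: for each group element $g$ there is a network $\mathcal{N}_g$, and for all $g, h$ and all data $x$, $\mathcal{N}_g(\mathcal{N}_h(x)) = \mathcal{N}_{gh}(x)$. (For $Q_8 = \{\pm 1, \pm i, \pm j, \pm k\}$, the group laws are generated by $i^2 = j^2 = k^2 = ijk = -1$.) The paper's algorithm designs architectures that satisfy these laws by construction; the sketch below is only a minimal illustration of the constraint for the $\mathbb{Z}_2$ case, using a hypothetical soft penalty rather than the authors' construction.

```python
# Minimal sketch (assumption: a soft-penalty illustration, NOT the paper's
# exact-by-construction architecture). One small MLP per element of
# Z_2 = {e, s}; the penalty pushes the pair toward N_g(N_h(x)) = N_{gh}(x).
import torch
import torch.nn as nn

class MLP(nn.Module):
    def __init__(self, dim=4, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, x):
        return self.net(x)

elements = ["e", "s"]                      # Z_2: identity e, involution s
mul = {("e", "e"): "e", ("e", "s"): "s",   # Cayley table: s*s = e
       ("s", "e"): "s", ("s", "s"): "e"}
nets = nn.ModuleDict({g: MLP() for g in elements})

def group_law_penalty(x):
    # The identity element must act as the identity map, and composing two
    # networks must match the network of the product element.
    loss = ((nets["e"](x) - x) ** 2).mean()
    for g in elements:
        for h in elements:
            loss = loss + ((nets[g](nets[h](x)) - nets[mul[g, h]](x)) ** 2).mean()
    return loss

opt = torch.optim.Adam(nets.parameters(), lr=1e-3)
for step in range(200):
    x = torch.randn(64, 4)                 # stand-in data batch
    opt.zero_grad()
    group_law_penalty(x).backward()
    opt.step()
```

Replacing the Cayley table with that of $Q_8$ (eight elements) and adding a task loss on top of the penalty gives the flavor of the quantum-gate experiment; the volume-preserving variant mentioned in the abstract would further constrain each $\mathcal{N}_g$ to have unit Jacobian determinant, which this sketch does not attempt.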
Related papers
- Grokking Group Multiplication with Cosets [10.255744802963926]
Algorithmic tasks have proven to be a fruitful test ground for interpreting a neural network end-to-end.
We completely reverse engineer fully connected one-hidden-layer networks that have "grokked" the arithmetic of the permutation groups $S_5$ and $S_6$.
We describe how we reverse engineered the model's mechanisms and confirm that our theory is a faithful description of the circuit's functionality.
arXiv Detail & Related papers (2023-12-11T18:12:18Z) - Deep Learning Symmetries and Their Lie Groups, Algebras, and Subalgebras
from First Principles [55.41644538483948]
We design a deep-learning algorithm for the discovery and identification of the continuous group of symmetries present in a labeled dataset.
We use fully connected neural networks to model the symmetry transformations and the corresponding generators.
Our study also opens the door for using a machine learning approach in the mathematical study of Lie groups and their properties.
arXiv Detail & Related papers (2023-01-13T16:25:25Z) - VC dimensions of group convolutional neural networks [0.0]
We study the generalization capacity of group convolutional neural networks.
We identify precise estimates for the VC dimensions of simple sets of group convolutional neural networks.
arXiv Detail & Related papers (2022-12-19T14:43:22Z) - Brauer's Group Equivariant Neural Networks [0.0]
We provide a full characterisation of all of the possible group equivariant neural networks whose layers are some tensor power of $\mathbb{R}^n$.
We find a spanning set of matrices for the learnable, linear, equivariant layer functions between such tensor power spaces.
arXiv Detail & Related papers (2022-12-16T18:08:51Z) - Implicit Convolutional Kernels for Steerable CNNs [5.141137421503899]
Steerable convolutional neural networks (CNNs) provide a general framework for building neural networks equivariant to translations and transformations of an origin-preserving group $G$.
We propose using implicit neural representations via multi-layer perceptrons (MLPs) to parameterize $G$-steerable kernels.
We demonstrate the effectiveness of our method on multiple tasks, including N-body simulations, point cloud classification, and molecular property prediction.
arXiv Detail & Related papers (2022-12-12T18:10:33Z) - Universality of group convolutional neural networks based on ridgelet
analysis on groups [10.05944106581306]
We investigate the approximation property of group convolutional neural networks (GCNNs) based on the ridgelet theory.
We formulate a versatile GCNN as a nonlinear mapping between group representations.
arXiv Detail & Related papers (2022-05-30T02:52:22Z) - The Separation Capacity of Random Neural Networks [78.25060223808936]
We study when a random neural network makes two classes of data linearly separable. We show that a sufficiently large two-layer ReLU network with standard Gaussian weights and uniformly distributed biases can solve this problem with high probability.
We quantify the relevant structure of the data in terms of a novel notion of mutual complexity.
arXiv Detail & Related papers (2021-07-31T10:25:26Z) - A Practical Method for Constructing Equivariant Multilayer Perceptrons
for Arbitrary Matrix Groups [115.58550697886987]
We provide a completely general algorithm for solving for the equivariant layers of matrix groups.
In addition to recovering solutions from other works as special cases, we construct multilayer perceptrons equivariant to multiple groups that have never been tackled before.
Our approach outperforms non-equivariant baselines, with applications to particle physics and dynamical systems.
arXiv Detail & Related papers (2021-04-19T17:21:54Z) - Abelian Neural Networks [48.52497085313911]
We first construct a neural network architecture for Abelian group operations and derive a universal approximation property.
We then extend it to Abelian semigroup operations using the characterization of associative symmetric polynomials.
We train our models over fixed word embeddings and demonstrate improved performance over the original word2vec.
arXiv Detail & Related papers (2021-02-24T11:52:21Z) - LieTransformer: Equivariant self-attention for Lie Groups [49.9625160479096]
Group equivariant neural networks are used as building blocks of group invariant neural networks.
We extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models.
We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups.
arXiv Detail & Related papers (2020-12-20T11:02:49Z) - Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning including deep, convolutional and recurrent neural networks, reinforcement learning, flows and metric learning.
arXiv Detail & Related papers (2020-03-30T15:37:50Z)