A Computationally Efficient Neural Network Invariant to the Action of
Symmetry Subgroups
- URL: http://arxiv.org/abs/2002.07528v1
- Date: Tue, 18 Feb 2020 12:50:56 GMT
- Title: A Computationally Efficient Neural Network Invariant to the Action of
Symmetry Subgroups
- Authors: Piotr Kicki, Mete Ozay and Piotr Skrzypczyński
- Abstract summary: A new $G$-invariant transformation module produces a $G$-invariant latent representation of the input data.
This latent representation is then processed with a multi-layer perceptron in the network.
We prove the universality of the proposed architecture, discuss its properties and highlight its computational and memory efficiency.
- Score: 12.654871396334668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a method to design a computationally efficient $G$-invariant
neural network that approximates functions invariant to the action of a given
permutation subgroup $G \leq S_n$ of the symmetric group on input data. The key
element of the proposed network architecture is a new $G$-invariant
transformation module, which produces a $G$-invariant latent representation of
the input data. This latent representation is then processed with a multi-layer
perceptron in the network. We prove the universality of the proposed
architecture, discuss its properties and highlight its computational and memory
efficiency. Theoretical considerations are supported by numerical experiments
involving different network configurations, which demonstrate the effectiveness
and strong generalization properties of the proposed method in comparison to
other $G$-invariant neural networks.
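As a rough illustration of the idea (a minimal sketch, not the paper's actual $G$-invariant transformation module, which is designed to avoid the cost of explicit orbit enumeration), one can make a representation invariant to a permutation subgroup $G \leq S_n$ by averaging a learned feature map over the orbit of the input under $G$ and then applying a multi-layer perceptron; all class and argument names below are hypothetical:

```python
# Minimal sketch: G-invariance by explicit orbit averaging, followed by an MLP.
# This costs O(|G|) feature evaluations per input, whereas the paper's
# G-invariant transformation module is constructed to be more efficient.
import torch
import torch.nn as nn


class GInvariantNet(nn.Module):
    def __init__(self, group_perms, n, d_latent=64, d_out=1):
        super().__init__()
        # group_perms: list of index permutations of length n enumerating G <= S_n.
        self.register_buffer("perms", torch.tensor(group_perms, dtype=torch.long))
        self.feature = nn.Sequential(nn.Linear(n, d_latent), nn.ReLU())
        self.mlp = nn.Sequential(nn.Linear(d_latent, d_latent), nn.ReLU(),
                                 nn.Linear(d_latent, d_out))

    def forward(self, x):  # x: (batch, n)
        # Average the features over the orbit {g.x : g in G}; because G is
        # closed under composition, the mean is invariant to the action of G.
        feats = torch.stack([self.feature(x[:, p]) for p in self.perms], dim=0)
        z = feats.mean(dim=0)          # G-invariant latent representation
        return self.mlp(z)             # processed by a multi-layer perceptron
```

For example, for the cyclic subgroup of $S_4$ generated by the 4-cycle, `group_perms` would be `[[0,1,2,3], [1,2,3,0], [2,3,0,1], [3,0,1,2]]`.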
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
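One way to picture the "computational graph of parameters" view (an illustrative sketch using our own encoding, not necessarily the paper's exact one): treat each neuron as a graph node and each weight as an edge feature, so that the resulting graph can be processed by a GNN.

```python
# Illustrative sketch (hypothetical encoding): turn an MLP's parameters into a
# graph whose nodes are neurons and whose edges carry the connecting weights.
import torch


def mlp_weights_to_graph(weight_matrices):
    # weight_matrices: list of (out_dim, in_dim) tensors, one per layer.
    sizes = [weight_matrices[0].shape[1]] + [w.shape[0] for w in weight_matrices]
    offsets = torch.cumsum(torch.tensor([0] + sizes[:-1]), dim=0)  # node offsets per layer
    edges, edge_feats = [], []
    for layer, W in enumerate(weight_matrices):
        out_dim, in_dim = W.shape
        src = offsets[layer] + torch.arange(in_dim).repeat(out_dim)
        dst = offsets[layer + 1] + torch.arange(out_dim).repeat_interleave(in_dim)
        edges.append(torch.stack([src, dst]))
        edge_feats.append(W.reshape(-1))   # weight W[o, j] on edge j -> o
    return torch.cat(edges, dim=1), torch.cat(edge_feats)  # (2, E), (E,)
```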
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks [14.45619075342763]
We present a systematic method to induce a generalized neural network and its right inverse operator, called the ridgelet transform.
Since the ridgelet transform is an inverse, it can describe the arrangement of parameters for the network to represent a target function.
We present a new simple proof of the universality by using Schur's lemma in a unified manner covering a wide class of networks.
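For orientation, in the classical Euclidean setting (the paper works with a more general joint group action on the data-parameter domain), the ridgelet transform and its dual can be written, up to normalizing constants, as
$$
(R_\psi f)(a, b) = \int_{\mathbb{R}^d} f(x)\, \overline{\psi(a \cdot x - b)}\, dx,
\qquad
(R^{*}_\sigma \gamma)(x) = \int_{\mathbb{R}^d \times \mathbb{R}} \gamma(a, b)\, \sigma(a \cdot x - b)\, da\, db,
$$
so that, under an admissibility condition on $(\psi, \sigma)$, $R^{*}_\sigma R_\psi f = f$: the ridgelet transform of $f$ gives a distribution of parameters $(a, b)$ whose network $R^{*}_\sigma \gamma$ reproduces $f$, which is the sense in which it acts as a right inverse.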
arXiv Detail & Related papers (2023-10-05T13:30:37Z)
- Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
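A minimal sketch of the multi-stream idea (hypothetical names; the paper's construction provides its invariance guarantees more efficiently than brute-force averaging): each stream is symmetrized over a different transformation group, and the stream outputs are concatenated.

```python
# Illustrative multi-stream architecture: each stream is made invariant to a
# different transformation group by averaging over that group's elements.
import torch
import torch.nn as nn


class MultiStreamInvariantNet(nn.Module):
    def __init__(self, encoder_factory, transform_groups, d_feat, d_out):
        super().__init__()
        # transform_groups: list of lists of callables x -> transformed x,
        # each inner list enumerating (or sampling) one transformation group.
        self.streams = nn.ModuleList(encoder_factory() for _ in transform_groups)
        self.transform_groups = transform_groups
        self.head = nn.Linear(d_feat * len(transform_groups), d_out)

    def forward(self, x):
        outs = []
        for enc, group in zip(self.streams, self.transform_groups):
            # Each stream is invariant to its own group: average its features
            # over that group's transformations of the input.
            outs.append(torch.stack([enc(t(x)) for t in group]).mean(dim=0))
        return self.head(torch.cat(outs, dim=-1))
```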
arXiv Detail & Related papers (2023-03-02T20:44:45Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
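As a minimal illustration of the equivariance constraint involved (a DeepSets-style layer, much simpler than the weight-space layers the paper actually builds): a linear map commutes with permutations of the element axis exactly when it decomposes into a per-element term plus a pooled term.

```python
# Minimal permutation-equivariant linear layer (DeepSets-style illustration).
import torch
import torch.nn as nn


class PermEquivariantLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.elem = nn.Linear(d_in, d_out)   # acts on each element independently
        self.pool = nn.Linear(d_in, d_out)   # acts on the permutation-invariant mean

    def forward(self, x):                    # x: (batch, n_elements, d_in)
        # Output = per-element term + broadcast pooled term; permuting the
        # elements of x permutes the output in the same way (equivariance).
        return self.elem(x) + self.pool(x.mean(dim=1, keepdim=True))
```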
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Equivariant Transduction through Invariant Alignment [71.45263447328374]
We introduce a novel group-equivariant architecture that incorporates a group-invariant hard alignment mechanism.
We find that our network's structure allows it to develop stronger equivariant properties than existing group-equivariant approaches.
We additionally find that it outperforms previous group-equivariant networks empirically on the SCAN task.
arXiv Detail & Related papers (2022-09-22T11:19:45Z)
- Bispectral Neural Networks [1.0323063834827415]
We present a neural network architecture, Bispectral Neural Networks (BNNs)
BNNs are able to simultaneously learn groups, their irreducible representations, and corresponding equivariant and complete-invariant maps.
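For the commutative (e.g. cyclic translation) case, which is a standard starting point (the paper also learns the group and its irreducible representations rather than assuming them), the bispectrum of a signal $f$ with Fourier transform $\hat{f}$ is
$$
B_f(k_1, k_2) = \hat{f}(k_1)\, \hat{f}(k_2)\, \overline{\hat{f}(k_1 + k_2)},
$$
which is unchanged under translations of $f$ (the phase factors cancel) yet, unlike the power spectrum, retains enough phase information to determine a generic signal up to translation.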
arXiv Detail & Related papers (2022-09-07T18:34:48Z)
- Universality of group convolutional neural networks based on ridgelet analysis on groups [10.05944106581306]
We investigate the approximation property of group convolutional neural networks (GCNNs) based on the ridgelet theory.
We formulate a versatile GCNN as a nonlinear mapping between group representations.
arXiv Detail & Related papers (2022-05-30T02:52:22Z)
- Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that TinvNN can strictly guarantee transformation invariance, being general and flexible enough to be combined with the existing neural networks.
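An illustrative NumPy sketch of the kind of transformation-invariant, distance-preserving initial representation described above, using classical multidimensional scaling (the paper's TinvNN construction may differ in its details):

```python
# Classical MDS: pairwise Euclidean distances -> coordinates. Because only
# distances are used, the result is unaffected by rotations/translations of
# the input points (up to the usual orthogonal ambiguity of the embedding).
import numpy as np


def classical_mds(points, d_out):
    # points: (n, d) array; d_out: target embedding dimension.
    sq_dists = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    n = sq_dists.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ sq_dists @ J                  # doubly centered Gram matrix
    eigvals, eigvecs = np.linalg.eigh(B)
    idx = np.argsort(eigvals)[::-1][:d_out]      # top-d_out eigenpairs
    return eigvecs[:, idx] * np.sqrt(np.clip(eigvals[idx], 0, None))
```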
arXiv Detail & Related papers (2021-12-23T03:52:33Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
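In its usual formulation (one statement of the operator; the paper develops it for several symmetry types), frame averaging symmetrizes a backbone $\phi$ over a frame $\mathcal{F}(x) \subseteq G$ instead of over the whole group:
$$
\langle \phi \rangle_{\mathcal{F}}(x) = \frac{1}{|\mathcal{F}(x)|} \sum_{g \in \mathcal{F}(x)} \rho_2(g)\, \phi\!\left(\rho_1(g)^{-1} x\right),
$$
where $\rho_1, \rho_2$ are the group's representations on the input and output spaces. If the frame is equivariant, i.e. $\mathcal{F}(\rho_1(g)x) = g\,\mathcal{F}(x)$, the averaged model is $G$-equivariant ($G$-invariant when $\rho_2$ is trivial), and because $|\mathcal{F}(x)|$ can be far smaller than $|G|$, this is much cheaper than averaging over the full group.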
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- A New Neural Network Architecture Invariant to the Action of Symmetry Subgroups [11.812645659940237]
We propose a $G$-invariant neural network that approximates functions invariant to the action of a given permutation subgroup on input data.
The key element of the proposed network architecture is a new $G$-invariant transformation module, which produces a $G$-invariant latent representation of the input data.
arXiv Detail & Related papers (2020-12-11T16:19:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all of its information) and is not responsible for any consequences of its use.