A New Neural Network Architecture Invariant to the Action of Symmetry Subgroups
- URL: http://arxiv.org/abs/2012.06452v1
- Date: Fri, 11 Dec 2020 16:19:46 GMT
- Title: A New Neural Network Architecture Invariant to the Action of Symmetry Subgroups
- Authors: Piotr Kicki, Mete Ozay, Piotr Skrzypczyński
- Abstract summary: We propose a $G$-invariant neural network that approximates functions invariant to the action of a given permutation subgroup on input data.
The key element of the proposed network architecture is a new $G$-invariant transformation module, which produces a $G$-invariant latent representation of the input data.
- Score: 11.812645659940237
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a computationally efficient $G$-invariant neural network that
approximates functions invariant to the action of a given permutation subgroup
$G \leq S_n$ of the symmetric group on input data. The key element of the
proposed network architecture is a new $G$-invariant transformation module,
which produces a $G$-invariant latent representation of the input data.
Theoretical considerations are supported by numerical experiments, which
demonstrate the effectiveness and strong generalization properties of the
proposed method in comparison to other $G$-invariant neural networks.
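For intuition, here is a minimal sketch of the generic symmetrization route to $G$-invariance: averaging a feature map over the orbit of the input under $G$ yields a $G$-invariant representation, which an MLP can then process. The paper's transformation module is a more efficient construction than this brute-force average; the cyclic subgroup and the networks `phi` and `rho` below are illustrative placeholders.

```python
import torch
import torch.nn as nn

# Illustrative subgroup G <= S_4: the cyclic shifts of the 4 input coordinates.
n = 4
G = [tuple((i + k) % n for i in range(n)) for k in range(n)]

phi = nn.Sequential(nn.Linear(n, 16), nn.ReLU())                    # feature map
rho = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 1))   # final MLP

def g_invariant_net(x):
    # Averaging phi over the orbit {g(x) : g in G} is G-invariant, because
    # acting on x by any fixed g in G merely reorders the terms of the sum.
    z = torch.stack([phi(x[:, list(g)]) for g in G]).mean(dim=0)
    return rho(z)

x = torch.randn(2, n)
g = list(G[1])  # a nontrivial shift
assert torch.allclose(g_invariant_net(x), g_invariant_net(x[:, g]), atol=1e-5)
```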
Related papers
- Joint Group Invariant Functions on Data-Parameter Domain Induce Universal Neural Networks [14.45619075342763]
We present a systematic method to induce a generalized neural network and its right inverse operator, called the ridgelet transform.
Since the ridgelet transform is an inverse, it can describe the arrangement of parameters for the network to represent a target function.
We present a new simple proof of the universality by using Schur's lemma in a unified manner covering a wide class of networks.
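For orientation, here is the classical (Euclidean) ridgelet transform that this line of work generalizes, stated for a scalar one-hidden-layer network; the group-invariant version in the paper follows the same inverse-operator pattern, and the admissible pair $(\psi, \eta)$ below is the standard textbook setup rather than the paper's general construction.

```latex
% Ridgelet transform of f : R^d -> R with respect to \psi (maps a function to
% hidden-layer coefficients indexed by weight a and bias b):
R_\psi f(a, b) = \int_{\mathbb{R}^d} f(x)\, \psi(a \cdot x - b)\, \mathrm{d}x
% Since R_\psi is a right inverse of the network map, feeding the coefficients
% through a network with activation \eta reconstructs f, up to a constant
% fixed by the admissibility condition on (\psi, \eta):
f(x) \propto \int_{\mathbb{R}^d \times \mathbb{R}} R_\psi f(a, b)\, \eta(a \cdot x - b)\, \mathrm{d}a\, \mathrm{d}b
```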
arXiv Detail & Related papers (2023-10-05T13:30:37Z)
- Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
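As a sketch of the multi-stream idea under assumed transformations: each stream is made exactly invariant to its own group by pooling over that group's orbit, and the streams are then combined. The choice of horizontal flips and 90-degree rotations, the class name, and all layer sizes are placeholders, not the paper's guaranteed-invariance construction.

```python
import torch
import torch.nn as nn

class TwoStreamInvariantNet(nn.Module):
    """Each stream is invariant to its own transformation group by averaging
    over that group's orbit (placeholder design, not the paper's)."""
    def __init__(self, n_classes=10):
        super().__init__()
        def stream():
            return nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.flip_stream, self.rot_stream = stream(), stream()
        self.head = nn.Linear(16, n_classes)

    def forward(self, x):
        # Stream 1: invariant to horizontal flips.
        f1 = (self.flip_stream(x) + self.flip_stream(torch.flip(x, dims=[-1]))) / 2
        # Stream 2: invariant to 90-degree rotations.
        f2 = sum(self.rot_stream(torch.rot90(x, k, dims=[-2, -1])) for k in range(4)) / 4
        return self.head(torch.cat([f1, f2], dim=-1))

net = TwoStreamInvariantNet()
x = torch.randn(2, 1, 8, 8)
# The flip stream's pooled feature is identical for x and its mirror image:
s1 = lambda z: (net.flip_stream(z) + net.flip_stream(torch.flip(z, dims=[-1]))) / 2
assert torch.allclose(s1(x), s1(torch.flip(x, dims=[-1])), atol=1e-5)
```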
arXiv Detail & Related papers (2023-03-02T20:44:45Z)
- Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
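Concretely, the symmetry in question: for a two-layer MLP $f(x) = W_2\,\sigma(W_1 x + b_1) + b_2$, permuting the hidden neurons (the rows of $W_1, b_1$ and the matching columns of $W_2$) leaves the computed function unchanged, so a neural functional should treat such weight configurations identically. A quick numerical check (illustration only, not the paper's architecture):

```python
import torch

torch.manual_seed(0)
W1, b1 = torch.randn(5, 3), torch.randn(5)   # layer 1: 3 -> 5
W2, b2 = torch.randn(2, 5), torch.randn(2)   # layer 2: 5 -> 2
x = torch.randn(3)

perm = torch.randperm(5)                     # relabel the 5 hidden neurons
f_orig = W2 @ torch.relu(W1 @ x + b1) + b2
f_perm = W2[:, perm] @ torch.relu(W1[perm] @ x + b1[perm]) + b2
assert torch.allclose(f_orig, f_perm, atol=1e-6)  # same function, new weights
```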
arXiv Detail & Related papers (2023-02-27T18:52:38Z)
- Implicit Convolutional Kernels for Steerable CNNs [5.141137421503899]
Steerable convolutional neural networks (CNNs) provide a general framework for building neural networks equivariant to translations and transformations of an origin-preserving group $G$.
We propose using implicit neural representation via multi-layer perceptrons (MLPs) to parameterize $G$-steerable kernels.
We prove the effectiveness of our method on multiple tasks, including N-body simulations, point cloud classification and molecular property prediction.
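A minimal sketch of the implicit-kernel idea: an MLP maps relative coordinates to kernel values, which are then used as an ordinary convolution weight. The paper additionally constrains the MLP so that the resulting kernel satisfies the $G$-steerability condition, which this sketch omits; `ImplicitKernelConv` is a hypothetical name and all layer sizes are placeholders.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ImplicitKernelConv(nn.Module):
    """Conv layer whose kernel values are produced by an MLP evaluated on
    relative coordinates (implicit neural representation of the kernel)."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.c_in, self.c_out, self.k = c_in, c_out, k
        self.mlp = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                                 nn.Linear(32, c_in * c_out))
        ys, xs = torch.meshgrid(torch.linspace(-1, 1, k),
                                torch.linspace(-1, 1, k), indexing="ij")
        self.register_buffer("coords", torch.stack([ys, xs], dim=-1).reshape(-1, 2))

    def forward(self, x):
        w = self.mlp(self.coords).reshape(self.k, self.k, self.c_out, self.c_in)
        w = w.permute(2, 3, 0, 1)  # -> (c_out, c_in, k, k)
        return F.conv2d(x, w, padding=self.k // 2)

conv = ImplicitKernelConv(3, 16)
y = conv(torch.randn(1, 3, 32, 32))  # -> (1, 16, 32, 32)
```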
arXiv Detail & Related papers (2022-12-12T18:10:33Z)
- Equivariant Transduction through Invariant Alignment [71.45263447328374]
We introduce a novel group-equivariant architecture that incorporates a group-in hard alignment mechanism.
We find that our network's structure allows it to develop stronger equivariant properties than existing group-equivariant approaches.
We additionally find that it outperforms previous group-equivariant networks empirically on the SCAN task.
arXiv Detail & Related papers (2022-09-22T11:19:45Z)
- Universality of group convolutional neural networks based on ridgelet analysis on groups [10.05944106581306]
We investigate the approximation property of group convolutional neural networks (GCNNs) based on the ridgelet theory.
We formulate a versatile GCNN as a nonlinear mapping between group representations.
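For reference, here is the group convolution underlying a GCNN layer, for signals on a group $G$ (for finite $G$, the integral becomes a sum); this is the standard definition in the GCNN literature rather than notation specific to this paper.

```latex
% Group convolution of a signal f with a filter \psi, both defined on G:
(f \ast \psi)(g) = \int_{G} f(h)\, \psi(h^{-1} g)\, \mathrm{d}h
% A GCNN layer composes such convolutions with pointwise nonlinearities and
% is equivariant to left translation:
(L_u f) \ast \psi = L_u (f \ast \psi), \qquad (L_u f)(g) := f(u^{-1} g)
```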
arXiv Detail & Related papers (2022-05-30T02:52:22Z)
- Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that the resulting networks, TinvNN, strictly guarantee transformation invariance and are general and flexible enough to be combined with existing neural networks.
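To see why this route gives invariance, consider vanilla classical MDS (the paper modifies MDS; this sketch is the textbook version with a hypothetical helper name): the embedding is computed purely from pairwise distances, which rigid transformations preserve.

```python
import numpy as np

def classical_mds(X, dim=2):
    """Embed points using only their pairwise distances (classical MDS).
    Distances are unchanged by rotations/translations of X, so the
    embedding is too, up to an orthogonal ambiguity."""
    D2 = np.square(np.linalg.norm(X[:, None] - X[None, :], axis=-1))
    n = len(X)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J                      # double-centered Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:dim]            # top eigenpairs
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

X = np.random.randn(10, 3)
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0],
              [np.sin(theta),  np.cos(theta), 0],
              [0, 0, 1]])
Y1, Y2 = classical_mds(X), classical_mds(X @ R.T + 1.0)  # rotate + translate
d = lambda Z: np.linalg.norm(Z[:, None] - Z[None, :], axis=-1)
assert np.allclose(d(Y1), d(Y2), atol=1e-6)  # embeddings agree geometrically
```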
arXiv Detail & Related papers (2021-12-23T03:52:33Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
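Schematically, FA replaces the usual average over the whole group with an average over an input-dependent frame $\mathcal{F}(x) \subseteq G$; the form below follows the paper's general recipe, with $\rho_1, \rho_2$ denoting the group actions on the input and output spaces.

```latex
% Frame averaging of a backbone \phi. If the frame is equivariant,
% i.e. \mathcal{F}(\rho_1(g)x) = g\,\mathcal{F}(x), the averaged map is too:
\langle \phi \rangle_{\mathcal{F}}(x)
  = \frac{1}{|\mathcal{F}(x)|} \sum_{g \in \mathcal{F}(x)} \rho_2(g)\,
    \phi\big(\rho_1(g)^{-1} x\big)
% Taking \rho_2 trivial (\rho_2(g) = \mathrm{Id}) yields an invariant model,
% at the cost of only |\mathcal{F}(x)| backbone evaluations rather than an
% average over all of G.
```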
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- On the finite representation of group equivariant operators via permutant measures [0.0]
We show that each linear $G$-equivariant operator can be produced by a suitable permutant measure.
This result makes available a new method to build linear $G$-equivariant operators in the finite setting.
arXiv Detail & Related papers (2020-08-07T14:25:04Z)
- Stochastic Flows and Geometric Optimization on the Orthogonal Group [52.50121190744979]
We present a new class of geometrically-driven optimization algorithms on the orthogonal group $O(d)$.
We show that our methods can be applied in various fields of machine learning, including deep, convolutional and recurrent neural networks, reinforcement learning, normalizing flows and metric learning.
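As a sketch of one standard geometric update such methods build on: moving along a one-parameter subgroup generated by a skew-symmetric matrix keeps the iterate exactly orthogonal. This is the generic Riemannian step with an exponential-map retraction, not the paper's specific stochastic flow; the gradient below is a random placeholder.

```python
import torch

d, lr = 5, 1e-2
W = torch.linalg.qr(torch.randn(d, d)).Q   # random starting point on O(d)
euc_grad = torch.randn(d, d)               # placeholder Euclidean gradient dL/dW

# Form a skew-symmetric generator from the gradient, then retract with the
# matrix exponential; exp of a skew matrix is orthogonal, so W stays on O(d).
A = euc_grad @ W.T - W @ euc_grad.T        # A = -A^T
W = torch.matrix_exp(-lr * A) @ W
assert torch.allclose(W @ W.T, torch.eye(d), atol=1e-5)
```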
arXiv Detail & Related papers (2020-03-30T15:37:50Z)
- A Computationally Efficient Neural Network Invariant to the Action of Symmetry Subgroups [12.654871396334668]
A new $G$-invariant transformation module produces a $G$-invariant latent representation of the input data.
This latent representation is then processed with a multi-layer perceptron in the network.
We prove the universality of the proposed architecture, discuss its properties and highlight its computational and memory efficiency.
arXiv Detail & Related papers (2020-02-18T12:50:56Z)