Learning Invariant Representations for Equivariant Neural Networks Using
Orthogonal Moments
- URL: http://arxiv.org/abs/2209.10944v1
- Date: Thu, 22 Sep 2022 11:48:39 GMT
- Title: Learning Invariant Representations for Equivariant Neural Networks Using Orthogonal Moments
- Authors: Jaspreet Singh, Chandan Singh
- Abstract summary: The convolutional layers of standard convolutional neural networks (CNNs) are equivariant to translation.
Recently, a new class of CNNs has been proposed in which the conventional layers of CNNs are replaced with equivariant convolution, pooling, and batch-normalization layers.
- Score: 9.680414207552722
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The convolutional layers of standard convolutional neural networks (CNNs) are
equivariant to translation. However, the convolution and fully-connected layers
are not equivariant or invariant to other affine geometric transformations.
Recently, a new class of CNNs has been proposed in which the conventional layers of
CNNs are replaced with equivariant convolution, pooling, and
batch-normalization layers. The final classification layer in equivariant
neural networks is invariant to affine geometric transformations such
as rotation, reflection and translation; the scalar value is obtained either by
eliminating the spatial dimensions of the filter responses using convolution
and down-sampling throughout the network, or by averaging over the filter
responses. In this work, we propose to integrate orthogonal moments, which
give the high-order statistics of a function, as an effective means for
encoding global invariance with respect to rotation, reflection and translation
in the fully-connected layers. As a result, the intermediate layers of the network
remain equivariant while the classification layer becomes invariant. The most
widely used Zernike, pseudo-Zernike and orthogonal Fourier-Mellin moments are
considered for this purpose. The effectiveness of the proposed approach is
evaluated by integrating the invariant transition and fully-connected layer into
the architecture of group-equivariant CNNs (G-CNNs) on the rotated MNIST and
CIFAR-10 datasets.
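The abstract's central device, that orthogonal-moment magnitudes encode rotation invariance, can be illustrated with plain Zernike moments: under a rotation of the image by angle a, the moment Z_nm only picks up a phase factor e^{-ima}, so |Z_nm| is unchanged. A minimal NumPy sketch (illustrative only, not the authors' implementation; the grid size and moment order are arbitrary choices):

```python
import numpy as np
from math import factorial

def zernike_moment(img, n, m):
    """Zernike moment Z_nm of a square image over the unit disk.
    Requires n - |m| even and non-negative. Under image rotation,
    Z_nm only changes by a phase, so |Z_nm| is rotation-invariant."""
    N = img.shape[0]
    xs = np.linspace(-1.0, 1.0, N)
    X, Y = np.meshgrid(xs, xs)
    rho, theta = np.hypot(X, Y), np.arctan2(Y, X)
    mask = rho <= 1.0
    # Radial polynomial R_n^{|m|}(rho)
    ma = abs(m)
    R = np.zeros_like(rho)
    for k in range((n - ma) // 2 + 1):
        c = ((-1) ** k * factorial(n - k)
             / (factorial(k)
                * factorial((n + ma) // 2 - k)
                * factorial((n - ma) // 2 - k)))
        R += c * rho ** (n - 2 * k)
    V = R * np.exp(-1j * m * theta)  # Zernike basis function on the disk
    return (n + 1) / np.pi * np.sum(img[mask] * V[mask])

# |Z_42| agrees for an image and its 90-degree rotation.
img = np.random.default_rng(0).random((65, 65))
print(abs(zernike_moment(img, 4, 2)),
      abs(zernike_moment(np.rot90(img), 4, 2)))
```

A 90-degree rotation is used here because it is exact on a symmetric pixel grid; for arbitrary angles the magnitudes agree up to interpolation error.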
Related papers
- Restore Translation Using Equivariant Neural Networks [7.78895108256899]
In this paper, we propose a pre-classifier restorer to recover translated (or even rotated) inputs to a convolutional neural network.
The restorer is based on a theoretical result which gives a sufficient and necessary condition for an affine operator to be translational equivariant on a tensor space.
arXiv Detail & Related papers (2023-06-29T13:34:35Z)
- Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-03-02T20:44:45Z)
- Moving Frame Net: SE(3)-Equivariant Network for Volumes [0.0]
A rotation and translation equivariant neural network for image data was proposed based on the moving frames approach.
We significantly improve that approach by reducing the computation of moving frames to only one, at the input stage.
Our trained model outperforms the benchmarks in medical volume classification on most of the tested datasets from MedMNIST3D.
arXiv Detail & Related papers (2022-11-07T10:25:38Z)
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z)
- Revisiting Transformation Invariant Geometric Deep Learning: Are Initial Representations All You Need? [80.86819657126041]
We show that transformation-invariant and distance-preserving initial representations are sufficient to achieve transformation invariance.
Specifically, we realize transformation-invariant and distance-preserving initial point representations by modifying multi-dimensional scaling.
We prove that TinvNN can strictly guarantee transformation invariance, being general and flexible enough to be combined with the existing neural networks.
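The premise underlying this result is easy to check directly: the pairwise distance matrix of a point cloud is itself unchanged by any rotation, reflection, or translation, so representations built from it start out transformation-invariant. A toy NumPy check (illustrative only, not the paper's TinvNN pipeline):

```python
import numpy as np

def pairwise_dists(pts):
    # (N, N) matrix of Euclidean distances between all point pairs
    diff = pts[:, None, :] - pts[None, :, :]
    return np.linalg.norm(diff, axis=-1)

rng = np.random.default_rng(0)
pts = rng.random((10, 3))
Q, _ = np.linalg.qr(rng.normal(size=(3, 3)))  # random orthogonal matrix
moved = pts @ Q.T + rng.normal(size=3)        # rotate/reflect + translate
print(np.allclose(pairwise_dists(pts), pairwise_dists(moved)))  # True
```

Classical multi-dimensional scaling then recovers coordinates from this matrix up to an orthogonal transform, which is how a distance-preserving initial representation can be realized.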
arXiv Detail & Related papers (2021-12-23T03:52:33Z)
- Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
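For a finite symmetry group, frame averaging reduces to plain group averaging: symmetrize any backbone function f by averaging it over the group orbit of the input. A toy sketch for the four-fold rotation group acting on images (the backbone f here is an arbitrary placeholder, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))             # fixed weights: f alone is NOT invariant

def f(x):
    return float(np.tanh(x * W).sum())  # arbitrary non-invariant backbone

def f_avg(x):
    # Average f over the group {rot0, rot90, rot180, rot270}: since the
    # orbit of x is the same as the orbit of any rotation of x, the
    # averaged output is exactly invariant to every group element.
    return sum(f(np.rot90(x, k)) for k in range(4)) / 4.0

x = rng.normal(size=(8, 8))
print(np.isclose(f_avg(x), f_avg(np.rot90(x))))  # True: averaged f is invariant
```

The point of frame averaging proper is to achieve the same effect with a small input-dependent frame when the full group is too large (or continuous) to average over.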
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
- Group Equivariant Subsampling [60.53371517247382]
Subsampling is used in convolutional neural networks (CNNs) in the form of pooling or strided convolutions.
We first introduce translation equivariant subsampling/upsampling layers that can be used to construct exact translation equivariant CNNs.
We then generalise these layers beyond translations to general groups, thus proposing group equivariant subsampling/upsampling.
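The core issue can be sketched in 1D: fixed stride-2 subsampling ties the sampling grid to absolute positions, so a one-sample shift of the input returns a completely different set of samples. Choosing the sampling phase from the signal itself restores predictable behaviour under shifts. The argmax-based canonicalization below is a hypothetical toy analogue, not the paper's construction:

```python
import numpy as np

def eq_subsample(x, stride=2):
    # Pick the sampling phase from the signal (argmax position mod stride)
    # instead of always starting at index 0; return the phase as well.
    p = int(np.argmax(x)) % stride
    idx = (p + stride * np.arange(len(x) // stride)) % len(x)
    return x[idx], p

rng = np.random.default_rng(0)
x = rng.random(16)                       # generic signal with a unique argmax
out, _ = eq_subsample(x)

# Shifting the input by the stride shifts the output by one sample.
out2, _ = eq_subsample(np.roll(x, 2))
print(np.allclose(out2, np.roll(out, 1)))  # True

# A sub-stride shift still reuses the SAME sample set (up to a circular
# shift), unlike fixed x[::2], which would return the other interleaved half.
out1, _ = eq_subsample(np.roll(x, 1))
print(any(np.allclose(out1, np.roll(out, s)) for s in range(len(out))))  # True
```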
arXiv Detail & Related papers (2021-06-10T16:14:00Z)
- Generalizing Convolutional Neural Networks for Equivariance to Lie Groups on Arbitrary Continuous Data [52.78581260260455]
We propose a general method to construct a convolutional layer that is equivariant to transformations from any specified Lie group.
We apply the same model architecture to images, ball-and-stick molecular data, and Hamiltonian dynamical systems.
arXiv Detail & Related papers (2020-02-25T17:40:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.