Exploiting Learned Symmetries in Group Equivariant Convolutions
- URL: http://arxiv.org/abs/2106.04914v1
- Date: Wed, 9 Jun 2021 08:50:22 GMT
- Title: Exploiting Learned Symmetries in Group Equivariant Convolutions
- Authors: Attila Lengyel, Jan C. van Gemert
- Abstract summary: Group Equivariant Convolutions (GConvs) enable convolutional neural networks to be equivariant to various transformation groups.
We show that GConvs can be efficiently decomposed into depthwise separable convolutions.
- Score: 20.63056707649319
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Group Equivariant Convolutions (GConvs) enable convolutional neural networks
to be equivariant to various transformation groups, but at an additional
parameter and compute cost. We investigate the filter parameters learned by
GConvs and find certain conditions under which they become highly redundant. We
show that GConvs can be efficiently decomposed into depthwise separable
convolutions while preserving equivariance properties and demonstrate improved
performance and data efficiency on two datasets. All code is publicly available
at github.com/Attila94/SepGrouPy.
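The decomposition can be pictured concretely. Below is a minimal PyTorch sketch of a lifting layer for the p4 group (four 90-degree rotations): a depthwise spatial filter per input channel, rotated per group element, followed by one shared 1x1 pointwise channel mixing. This illustrates the separable structure described in the abstract and is not the authors' SepGrouPy implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeparableP4Lifting(nn.Module):
    """Lifting layer for the p4 group (four 90-degree rotations).

    A full GConv learns a dense [out, in, k, k] filter bank applied once
    per rotation. Here the spatial filtering is depthwise (one k x k
    filter per input channel, rotated per group element) and the channel
    mixing is a single shared 1x1 pointwise convolution, mirroring the
    depthwise-separable factorization in the abstract.
    """

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        # One spatial filter per input channel (the depthwise part).
        self.depthwise = nn.Parameter(
            torch.randn(in_channels, 1, kernel_size, kernel_size) * 0.1)
        # Channel mixing shared across all rotations (the pointwise part).
        self.pointwise = nn.Conv2d(in_channels, out_channels, kernel_size=1)

    def forward(self, x):  # x: [B, C_in, H, W]
        outputs = []
        for r in range(4):
            # Rotate the depthwise filters by r * 90 degrees.
            w = torch.rot90(self.depthwise, r, dims=(2, 3))
            y = F.conv2d(x, w, padding=self.depthwise.shape[-1] // 2,
                         groups=x.shape[1])      # depthwise spatial conv
            outputs.append(self.pointwise(y))    # shared channel mixing
        # Stack along the group axis: [B, C_out, |G| = 4, H, W]
        return torch.stack(outputs, dim=2)

layer = SeparableP4Lifting(8, 16)
print(layer(torch.randn(2, 8, 32, 32)).shape)  # torch.Size([2, 16, 4, 32, 32])
```

In a full GConv the spatial kernel and the channel mixing form one dense kernel per rotation; factorizing them is what removes the redundancy the paper identifies.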
Related papers
- LDConv: Linear deformable convolution for improving convolutional neural networks [18.814748446649627]
Linear Deformable Convolution (LDConv) is a plug-and-play convolutional operation that can replace the convolutional operation to improve network performance.
LDConv reduces the growth of the parameter count with kernel size from quadratic, as in standard convolution and Deformable Conv, to linear.
arXiv Detail & Related papers (2023-11-20T07:54:54Z)
- Accelerated Discovery of Machine-Learned Symmetries: Deriving the Exceptional Lie Groups G2, F4 and E6 [55.41644538483948]
This letter introduces two improved algorithms that significantly speed up the discovery of symmetry transformations.
Given the significant complexity of the exceptional Lie groups, our results demonstrate that this machine-learning method for discovering symmetries is completely general and can be applied to a wide variety of labeled datasets.
arXiv Detail & Related papers (2023-07-10T20:25:44Z)
- Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance and in particular the sample complexity of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
arXiv Detail & Related papers (2023-03-02T20:44:45Z)
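As a rough picture of the multi-stream idea in the entry above, the hypothetical sketch below builds each stream's invariance by averaging backbone features over the orbit of one transformation (rotations in one stream, horizontal flips in the other). The architecture and names are illustrative assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class MultiStreamInvariant(nn.Module):
    """Two streams, each invariant to a different transformation via
    orbit averaging (an illustrative construction, not the paper's)."""

    def __init__(self, channels=16):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, channels, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())

    def rotation_stream(self, x):
        # Average over all four 90-degree rotations -> rotation invariant.
        feats = [self.backbone(torch.rot90(x, r, dims=(2, 3))) for r in range(4)]
        return torch.stack(feats).mean(0)

    def flip_stream(self, x):
        # Average over horizontal flips -> flip invariant.
        feats = [self.backbone(x), self.backbone(torch.flip(x, dims=(3,)))]
        return torch.stack(feats).mean(0)

    def forward(self, x):
        # Concatenate the streams; each is invariant to its own transform.
        return torch.cat([self.rotation_stream(x), self.flip_stream(x)], dim=1)

model = MultiStreamInvariant()
print(model(torch.randn(2, 3, 32, 32)).shape)  # torch.Size([2, 32])
```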
- Improving the Sample-Complexity of Deep Classification Networks with Invariant Integration [77.99182201815763]
Leveraging prior knowledge on intraclass variance due to transformations is a powerful method to improve the sample complexity of deep neural networks.
We propose a novel monomial selection algorithm based on pruning methods to allow an application to more complex problems.
We demonstrate the improved sample complexity on the Rotated-MNIST, SVHN and CIFAR-10 datasets.
arXiv Detail & Related papers (2022-02-08T16:16:11Z)
- OneDConv: Generalized Convolution For Transform-Invariant Representation [76.15687106423859]
We propose a novel generalized one-dimensional convolutional operator (OneDConv).
It dynamically transforms the convolution kernels based on the input features in a computationally and parametrically efficient manner.
It improves the robustness and generalization of convolution without sacrificing the performance on common images.
arXiv Detail & Related papers (2022-01-15T07:44:44Z)
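The entry above hinges on input-conditioned kernels. The sketch below shows a generic dynamic-convolution mechanism in that spirit, where a small gating network scales each output filter per sample; by linearity, scaling the output is equivalent to scaling the kernel. This is an illustrative stand-in, not the OneDConv operator itself.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicKernelConv(nn.Module):
    """Input-conditioned convolution: a tiny gating network predicts a
    per-sample scale for each output filter (a generic sketch only)."""

    def __init__(self, in_channels, out_channels, kernel_size=3):
        super().__init__()
        self.weight = nn.Parameter(
            torch.randn(out_channels, in_channels, kernel_size, kernel_size) * 0.1)
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_channels, out_channels), nn.Sigmoid())

    def forward(self, x):  # x: [B, C_in, H, W]
        scale = self.gate(x)  # [B, C_out], one multiplier per filter
        y = F.conv2d(x, self.weight, padding=self.weight.shape[-1] // 2)
        # Modulating the output per sample == modulating the kernel.
        return y * scale[:, :, None, None]

conv = DynamicKernelConv(8, 16)
print(conv(torch.randn(2, 8, 32, 32)).shape)  # torch.Size([2, 16, 32, 32])
```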
- Exploiting Redundancy: Separable Group Convolutional Networks on Lie Groups [14.029933823101084]
Group convolutional neural networks (G-CNNs) have been shown to increase parameter efficiency and model accuracy.
In this work, we investigate the properties of representations learned by regular G-CNNs, and show considerable parameter redundancy in group convolution kernels.
We introduce convolution kernels that are separable over the subgroup and channel dimensions.
arXiv Detail & Related papers (2021-10-25T15:56:53Z)
- Group Equivariant Subsampling [60.53371517247382]
Subsampling is used in convolutional neural networks (CNNs) in the form of pooling or strided convolutions.
We first introduce translation equivariant subsampling/upsampling layers that can be used to construct exact translation equivariant CNNs.
We then generalise these layers beyond translations to general groups, thus proposing group equivariant subsampling/upsampling.
arXiv Detail & Related papers (2021-06-10T16:14:00Z)
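A minimal sketch of the content-adaptive subsampling idea in the entry above: the stride-2 sampling phase is chosen from the input itself (here via the argmax position, an illustrative choice, not necessarily the paper's), so a shifted input yields a correspondingly shifted output instead of different values.

```python
import torch

def equivariant_subsample(x, stride=2):
    """Content-adaptive stride-2 subsampling.
    x: [B, C, H, W] -> [B, C, H // stride, W // stride]
    """
    B, C, H, W = x.shape
    # Pick the sampling phase from the content: position of the maximum
    # activation (max over channels), taken modulo the stride.
    m = x.max(dim=1).values.reshape(B, -1)  # [B, H*W]
    idx = m.argmax(dim=1)
    row = (idx // W) % stride               # phase offsets in {0, ..., stride-1}
    col = (idx % W) % stride
    out = [x[b, :, int(row[b])::stride, int(col[b])::stride] for b in range(B)]
    return torch.stack(out)

x = torch.zeros(1, 1, 8, 8)
x[0, 0, 3, 5] = 1.0
y1 = equivariant_subsample(x)
y2 = equivariant_subsample(torch.roll(x, shifts=(1, 1), dims=(2, 3)))
# The shifted input yields a correspondingly shifted output:
print(torch.equal(torch.roll(y1, shifts=(1, 1), dims=(2, 3)), y2))  # True
```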
- Symmetry-driven graph neural networks [1.713291434132985]
We introduce two graph network architectures that are equivariant to several types of transformations affecting the node coordinates.
We demonstrate these capabilities on a synthetic dataset composed of $n$-dimensional geometric objects.
arXiv Detail & Related papers (2021-05-28T18:54:12Z)
- Group Equivariant Conditional Neural Processes [30.134634059773703]
We present the group equivariant conditional neural process (EquivCNP).
We show that EquivCNP achieves comparable performance to conventional conditional neural processes in a 1D regression task.
arXiv Detail & Related papers (2021-02-17T13:50:07Z)
- PDO-eConvs: Partial Differential Operator Based Equivariant Convolutions [71.60219086238254]
We approach equivariance through the connection between convolutions and partial differential operators (PDOs). In implementation, we discretize the system using numerical schemes for PDOs, deriving approximately equivariant convolutions (PDO-eConvs).
Experiments on rotated MNIST and natural image classification show that PDO-eConvs perform competitively yet use parameters much more efficiently.
arXiv Detail & Related papers (2020-07-20T18:57:26Z)
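To make the PDO-convolution connection above concrete, the sketch below applies standard finite-difference stencils (a central difference for d/dx and the 5-point Laplacian) as fixed convolution kernels. The stencils are textbook discretizations; the actual PDO-eConvs layer construction is not reproduced here.

```python
import torch
import torch.nn.functional as F

# Central difference for d/dx: (f(x+1) - f(x-1)) / 2. Note that
# F.conv2d computes cross-correlation, so this kernel is written in
# left-to-right offset order (-1, 0, +1).
dx = torch.tensor([[0.0, 0.0, 0.0],
                   [-0.5, 0.0, 0.5],
                   [0.0, 0.0, 0.0]])
# The 5-point stencil for the Laplacian (symmetric, so the
# correlation/convolution distinction does not matter).
laplacian = torch.tensor([[0.0, 1.0, 0.0],
                          [1.0, -4.0, 1.0],
                          [0.0, 1.0, 0.0]])

def apply_pdo(img, stencil):
    """Discretize a differential operator by convolving with its stencil."""
    k = stencil.view(1, 1, 3, 3)
    return F.conv2d(img, k, padding=1)

img = torch.randn(1, 1, 16, 16)
print(apply_pdo(img, dx).shape, apply_pdo(img, laplacian).shape)
```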
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.