SO(2) and O(2) Equivariance in Image Recognition with
Bessel-Convolutional Neural Networks
- URL: http://arxiv.org/abs/2304.09214v1
- Date: Tue, 18 Apr 2023 18:06:35 GMT
- Title: SO(2) and O(2) Equivariance in Image Recognition with
Bessel-Convolutional Neural Networks
- Authors: Valentin Delchevalerie, Alexandre Mayer, Adrien Bibal and Benoît Frénay
- Abstract summary: This work presents the development of Bessel-convolutional neural networks (B-CNNs).
B-CNNs exploit a particular decomposition based on Bessel functions to modify the key operation between images and filters.
An extensive study is carried out to assess the performance of B-CNNs compared to other methods.
- Score: 63.24965775030674
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For many years, it has been shown how beneficial exploiting
equivariances can be when solving image analysis tasks. For example, the
superiority of convolutional neural networks (CNNs) over dense networks
mainly comes from an elegant exploitation of translation equivariance.
Patterns can appear at arbitrary positions, and convolutions take this into
account to achieve translation-invariant operations through weight sharing.
Nevertheless, images often involve other symmetries that can also be
exploited. This is the case for rotations and reflections, which have drawn
particular attention and led to the development of multiple equivariant CNN
architectures. Among all these methods, Bessel-convolutional neural networks
(B-CNNs) exploit a particular decomposition based on Bessel functions to
modify the key operation between images and filters, making it equivariant
by design to the full continuous set of planar rotations. In this work, the
mathematical developments of B-CNNs are presented along with several
improvements, including the incorporation of reflection and multi-scale
equivariances. An extensive study is carried out to assess the performance
of B-CNNs compared to other methods. Finally, we emphasize the theoretical
advantages of B-CNNs by giving more insights and in-depth mathematical
details.
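To make the key mechanism concrete, here is a minimal numerical sketch (an
illustration, not the paper's implementation) of why a Fourier-Bessel
decomposition yields rotation-invariant quantities: rotating a patch only
multiplies each coefficient by a phase, so its modulus is unchanged. The
sketch assumes patches supported on the unit disk and uses SciPy's Bessel
routines; all function names are illustrative.

```python
import numpy as np
from scipy.special import jv, jn_zeros

def fourier_bessel_coeffs(patch_fn, nu_max=3, j_max=3, n_r=64, n_t=128):
    """Project a function on the unit disk onto the Fourier-Bessel basis
    J_|nu|(k_{nu,j} r) * exp(-i nu theta) by midpoint quadrature."""
    r = (np.arange(n_r) + 0.5) / n_r
    t = 2 * np.pi * np.arange(n_t) / n_t
    R, T = np.meshgrid(r, t, indexing="ij")
    f = patch_fn(R, T)
    dr, dt = 1.0 / n_r, 2 * np.pi / n_t
    coeffs = {}
    for nu in range(-nu_max, nu_max + 1):
        # the zeros of J_|nu| fix the admissible radial frequencies
        for j, k in enumerate(jn_zeros(abs(nu), j_max), start=1):
            basis = jv(abs(nu), k * R) * np.exp(-1j * nu * T)
            coeffs[(nu, j)] = np.sum(f * np.conj(basis) * R) * dr * dt
    return coeffs

# A toy asymmetric patch and a rotated copy of it
patch = lambda r, t: np.exp(-4 * r**2) * (1 + np.cos(2 * t) + 0.5 * np.sin(t))
alpha = 0.7
rotated = lambda r, t: patch(r, t - alpha)

c, c_rot = fourier_bessel_coeffs(patch), fourier_bessel_coeffs(rotated)
for key in c:
    # rotation multiplies c_{nu,j} by exp(i*nu*alpha); the modulus is invariant
    assert np.isclose(abs(c[key]), abs(c_rot[key]))
```

B-CNNs build this property into the image-filter operation itself rather
than post-processing coefficients, but the phase argument checked above is
the underlying invariance the decomposition provides.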
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
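As a rough illustration of representing a network as a computational graph
of parameters (a hedged sketch with made-up names, not the authors' code),
an MLP's weight matrices can be unrolled into a graph whose nodes are
neurons and whose edge features are the connecting weights:

```python
import numpy as np

def mlp_to_graph(weight_mats):
    """Hypothetical encoding: one node per neuron, one edge per weight,
    with the weight value as the edge feature."""
    sizes = [weight_mats[0].shape[1]] + [W.shape[0] for W in weight_mats]
    offsets = np.cumsum([0] + sizes)
    edges, feats = [], []
    for layer, W in enumerate(weight_mats):
        for i in range(W.shape[0]):       # target neuron in layer + 1
            for j in range(W.shape[1]):   # source neuron in layer
                edges.append((offsets[layer] + j, offsets[layer + 1] + i))
                feats.append(W[i, j])
    return np.array(edges), np.array(feats)

rng = np.random.default_rng(0)
edges, feats = mlp_to_graph([rng.normal(size=(16, 8)), rng.normal(size=(4, 16))])
print(edges.shape, feats.shape)  # (192, 2) edges, one weight per edge
```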
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - The role of data embedding in equivariant quantum convolutional neural
networks [2.255961793913651]
We investigate the role of classical-to-quantum embedding on the performance of equivariant quantum convolutional neural networks (EQCNNs).
We numerically compare the classification accuracy of EQCNNs with three different basis-permuted amplitude embeddings to the one obtained from a non-equivariant quantum convolutional neural network (QCNN).
arXiv Detail & Related papers (2023-12-20T18:25:15Z) - Affine-Transformation-Invariant Image Classification by Differentiable
Arithmetic Distribution Module [8.125023712173686]
Convolutional Neural Networks (CNNs) have achieved promising results in image classification.
However, CNNs are vulnerable to affine transformations, including rotation, translation, flipping, and shuffling.
In this work, we introduce a more robust substitute by incorporating distribution learning techniques.
arXiv Detail & Related papers (2023-09-01T22:31:32Z) - Deep Neural Networks with Efficient Guaranteed Invariances [77.99182201815763]
We address the problem of improving the performance, and in particular the sample complexity, of deep neural networks.
Group-equivariant convolutions are a popular approach to obtain equivariant representations.
We propose a multi-stream architecture, where each stream is invariant to a different transformation.
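A toy version of the multi-stream idea (a sketch assuming discrete
90-degree rotations and horizontal flips as the transformations of
interest; not the paper's architecture): each stream pools over a different
group, so its output is exactly invariant to that group.

```python
import numpy as np

def rotation_stream(x):
    # averaging over the four 90-degree rotations makes this stream
    # exactly invariant to any of those rotations
    return np.mean([np.rot90(x, k) for k in range(4)], axis=0)

def flip_stream(x):
    # invariant to horizontal flips
    return 0.5 * (x + np.fliplr(x))

def multi_stream_features(x):
    # concatenate streams, each invariant to a different transformation
    return np.concatenate([rotation_stream(x).ravel(), flip_stream(x).ravel()])

x = np.random.default_rng(0).normal(size=(8, 8))
assert np.allclose(rotation_stream(x), rotation_stream(np.rot90(x)))
assert np.allclose(flip_stream(x), flip_stream(np.fliplr(x)))
```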
arXiv Detail & Related papers (2023-03-02T20:44:45Z) - Permutation Equivariant Neural Functionals [92.0667671999604]
This work studies the design of neural networks that can process the weights or gradients of other neural networks.
We focus on the permutation symmetries that arise in the weights of deep feedforward networks because hidden layer neurons have no inherent order.
In our experiments, we find that permutation equivariant neural functionals are effective on a diverse set of tasks.
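For intuition, here is a minimal check of the symmetry involved (a sketch,
not the paper's architecture): permuting a hidden layer's neurons, together
with the matching rows and columns of the adjacent weight matrices, leaves
the network function unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(16, 8)), rng.normal(size=16)
W2, b2 = rng.normal(size=(4, 16)), rng.normal(size=4)

def mlp(x, W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2

# permuting the 16 hidden neurons leaves the network function unchanged
P = np.eye(16)[rng.permutation(16)]
x = rng.normal(size=8)
assert np.allclose(mlp(x, W1, b1, W2, b2),
                   mlp(x, P @ W1, P @ b1, W2 @ P.T, b2))
```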
arXiv Detail & Related papers (2023-02-27T18:52:38Z) - Basic Binary Convolution Unit for Binarized Image Restoration Network [146.0988597062618]
In this study, we reconsider components in binary convolution, such as residual connection, BatchNorm, activation function, and structure, for image restoration tasks.
Based on our findings and analyses, we design a simple yet efficient basic binary convolution unit (BBCU).
Our BBCU significantly outperforms other BNNs and lightweight models, showing that it can serve as a basic unit for binarized image restoration (IR) networks.
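A minimal sketch of the usual ingredients such a unit combines (sign
binarization with a straight-through estimator, plus a full-precision
residual connection); this follows common BNN conventions and is not the
exact BBCU design:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SignSTE(torch.autograd.Function):
    """Sign binarization; straight-through estimator in the backward pass."""
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return torch.sign(x)

    @staticmethod
    def backward(ctx, grad_out):
        (x,) = ctx.saved_tensors
        # pass gradients only where |x| <= 1 (clipped straight-through)
        return grad_out * (x.abs() <= 1).float()

class BinaryConvUnit(nn.Module):
    """Binarized 3x3 conv + activation, wrapped in a full-precision
    residual connection so the feature value range is preserved."""
    def __init__(self, channels):
        super().__init__()
        self.weight = nn.Parameter(0.01 * torch.randn(channels, channels, 3, 3))
        self.act = nn.PReLU(channels)

    def forward(self, x):
        xb = SignSTE.apply(x)            # binarize activations to +/-1
        wb = SignSTE.apply(self.weight)  # binarize weights to +/-1
        out = F.conv2d(xb, wb, padding=1)
        return self.act(out) + x         # residual keeps full-precision info

unit = BinaryConvUnit(8)
y = unit(torch.randn(1, 8, 16, 16))      # same shape in and out
```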
arXiv Detail & Related papers (2022-10-02T01:54:40Z) - Imaging with Equivariant Deep Learning [9.333799633608345]
We review the emerging field of equivariant imaging and show how it can provide improved generalization and new imaging opportunities.
We show the interplay between the acquisition physics and group actions and links to iterative reconstruction, blind compressed sensing and self-supervised learning.
arXiv Detail & Related papers (2022-09-05T02:13:57Z) - Equivariance versus Augmentation for Spherical Images [0.7388859384645262]
We analyze the role of rotational equivariance in convolutional neural networks (CNNs) applied to spherical images.
We compare the performance of the group equivariant networks known as S2CNNs and standard non-equivariant CNNs trained with an increasing amount of data augmentation.
arXiv Detail & Related papers (2022-02-08T16:49:30Z) - Implicit Equivariance in Convolutional Networks [1.911678487931003]
Implicitly Equivariant Networks (IEN) induce equivariance in the different layers of a standard CNN model.
We show IEN outperforms the state-of-the-art rotation equivariant tracking method while providing faster inference speed.
arXiv Detail & Related papers (2021-11-28T14:44:17Z) - Frame Averaging for Invariant and Equivariant Network Design [50.87023773850824]
We introduce Frame Averaging (FA), a framework for adapting known (backbone) architectures to become invariant or equivariant to new symmetry types.
We show that FA-based models have maximal expressive power in a broad setting.
We propose a new class of universal Graph Neural Networks (GNNs), universal Euclidean motion invariant point cloud networks, and Euclidean motion invariant Message Passing (MP) GNNs.
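To illustrate the mechanism, here is a minimal sketch in which the frame is
simply the whole two-element reflection group, the degenerate case where
frame averaging reduces to group averaging; all names are illustrative, not
from the paper.

```python
import numpy as np

def frame_average(phi, frame, x):
    """Average an arbitrary backbone phi over the group elements in the
    frame of x, making the wrapped model invariant to those transformations."""
    return np.mean([phi(g @ x) for g in frame(x)], axis=0)

# Toy case: invariance to reflection of a 1-D signal; the frame is the
# whole 2-element group {identity, reflection}
reflect = np.eye(8)[::-1]
frame = lambda x: [np.eye(8), reflect]
phi = lambda v: np.tanh(v * np.arange(8)).sum()  # arbitrary non-invariant backbone

x = np.random.default_rng(1).normal(size=8)
assert np.allclose(frame_average(phi, frame, x),
                   frame_average(phi, frame, reflect @ x))
```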
arXiv Detail & Related papers (2021-10-07T11:05:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.