Offset equivariant networks and their applications
- URL: http://arxiv.org/abs/2207.00292v1
- Date: Fri, 1 Jul 2022 09:38:19 GMT
- Title: Offset equivariant networks and their applications
- Authors: Marco Cotogni, Claudio Cusano
- Abstract summary: Offset equivariant networks are neural networks that preserve in their output uniform increments in the input.
We present a framework for the design and implementation of offset equivariant networks.
Our experiments show that the performance of offset equivariant networks is comparable to that of the state of the art on regular data.
- Score: 5.617903764268157
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: In this paper we present a framework for the design and implementation of
offset equivariant networks, that is, neural networks that preserve in their
output uniform increments in the input. In a suitable color space this kind of
network achieves equivariance with respect to the photometric transformations
that characterize changes in the lighting conditions. We verified the framework
on three different problems: image recognition, illuminant estimation, and
image inpainting. Our experiments show that the performance of offset
equivariant networks is comparable to that of the state of the art on regular
data. Unlike conventional networks, however, equivariant networks continue to
behave consistently when the color of the illuminant changes.
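Offset equivariance means f(x + c*1) = f(x) + c*1 for any scalar c: adding the same offset to every input component adds that offset to every output component. A minimal sketch (an illustrative construction, not the authors' full architecture) is a bias-free linear layer whose weight rows each sum to one, which satisfies the property exactly:

```python
import numpy as np

def offset_equivariant_linear(in_dim, out_dim, rng):
    """Random bias-free linear layer whose weight rows each sum to one.

    For such a W, W @ (x + c) = W @ x + c * (W @ ones) = W @ x + c,
    so a uniform offset c on the input reappears as the same uniform
    offset on the output (offset equivariance)."""
    W = rng.standard_normal((out_dim, in_dim))
    # Normalize each row to sum to 1 (assumes row sums are nonzero,
    # which holds for these fixed random draws).
    W = W / W.sum(axis=1, keepdims=True)
    return W

rng = np.random.default_rng(0)
W = offset_equivariant_linear(6, 4, rng)

x = rng.standard_normal(6)
c = 0.7  # uniform offset, e.g. a global illumination shift in a log color space
assert np.allclose(W @ (x + c), W @ x + c)  # equivariance holds
```

In a suitable color space a change of illuminant acts approximately as such a uniform offset, which is why this property transfers to photometric robustness.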
Related papers
- Learning Color Equivariant Representations [1.9594704501292781]
We introduce group convolutional neural networks (GCNNs) equivariant to color variation.
GCNNs have been designed for a variety of geometric transformations, from 2D and 3D rotation groups to semigroups such as scale.
arXiv Detail & Related papers (2024-06-13T21:02:03Z)
- Color Equivariant Convolutional Networks [50.655443383582124]
CNNs struggle when there is a data imbalance across the color variations introduced by accidental recording conditions.
We propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum.
We demonstrate the benefits of CEConvs in terms of downstream performance on various tasks and improved robustness to color changes, including train-test distribution shifts.
arXiv Detail & Related papers (2023-10-30T09:18:49Z)
- Using and Abusing Equivariance [10.70891251559827]
We show how Group Equivariant Convolutional Neural Networks use subsampling to learn to break equivariance to their symmetries.
We show that a change in the input dimension of a network as small as a single pixel can be enough for commonly used architectures to become only approximately, rather than exactly, equivariant.
arXiv Detail & Related papers (2023-08-22T09:49:26Z)
- SO(2) and O(2) Equivariance in Image Recognition with Bessel-Convolutional Neural Networks [63.24965775030674]
This work presents the development of Bessel-convolutional neural networks (B-CNNs).
B-CNNs exploit a particular decomposition based on Bessel functions to modify the key operation between images and filters.
A study is carried out to assess the performance of B-CNNs compared to other methods.
arXiv Detail & Related papers (2023-04-18T18:06:35Z)
- Self-Supervised Learning for Group Equivariant Neural Networks [75.62232699377877]
Group equivariant neural networks are models whose structure is restricted to commute with transformations of the input.
We propose two concepts for self-supervised tasks: equivariant pretext labels and invariant contrastive loss.
Experiments on standard image recognition benchmarks demonstrate that equivariant neural networks benefit from the proposed self-supervised tasks.
arXiv Detail & Related papers (2023-03-08T08:11:26Z)
- Neural Color Operators for Sequential Image Retouching [62.99812889713773]
We propose a novel image retouching method by modeling the retouching process as performing a sequence of newly introduced trainable neural color operators.
The neural color operator mimics the behavior of traditional color operators and learns pixelwise color transformation while its strength is controlled by a scalar.
Our method consistently achieves the best results compared with SOTA methods in both quantitative measures and visual quality.
arXiv Detail & Related papers (2022-07-17T05:33:19Z)
- Immiscible Color Flows in Optimal Transport Networks for Image Classification [68.8204255655161]
We propose a physics-inspired system that adapts Optimal Transport principles to leverage color distributions of images.
Our dynamics regulates the immiscible flows of colors traveling on a network built from images.
Our method outperforms competitor algorithms on image classification tasks in datasets where color information matters.
arXiv Detail & Related papers (2022-05-04T12:41:36Z)
- Neural Networks with Divisive Normalization for Image Segmentation with Application in the Cityscapes Dataset [2.960890352853005]
We show that including divisive normalization in current deep networks makes them more invariant to non-informative changes in the images.
Experiments show that the inclusion of divisive normalization in the U-Net architecture leads to better segmentation results than the conventional U-Net.
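Divisive normalization itself can be sketched in a few lines. The version below is the canonical Heeger-style formulation (each response divided by pooled activity across channels), not necessarily the exact variant used in the paper:

```python
import numpy as np

def divisive_normalization(x, sigma=1.0, n=2.0):
    """Canonical divisive normalization over the last (channel) axis.

    Each response is raised to power n and divided by a saturation
    constant plus the pooled activity of all channels, which suppresses
    non-informative global changes such as contrast rescaling."""
    pooled = np.sum(np.abs(x) ** n, axis=-1, keepdims=True)
    return (np.sign(x) * np.abs(x) ** n) / (sigma ** n + pooled)

x = np.array([[1.0, 2.0, 3.0]])
y1 = divisive_normalization(x)
y2 = divisive_normalization(10.0 * x)  # large global rescaling of the input
# y2 stays close to y1: the pooled denominator absorbs most of the 10x change.
```

This saturation under global rescaling is the mechanism behind the claimed invariance to non-informative image changes.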
arXiv Detail & Related papers (2022-03-25T10:26:39Z)
- Equivariance versus Augmentation for Spherical Images [0.7388859384645262]
We analyze the role of rotational equivariance in convolutional neural networks (CNNs) applied to spherical images.
We compare the performance of the group equivariant networks known as S2CNNs and standard non-equivariant CNNs trained with an increasing amount of data augmentation.
arXiv Detail & Related papers (2022-02-08T16:49:30Z)
- Encoding Robustness to Image Style via Adversarial Feature Perturbations [72.81911076841408]
We adapt adversarial training by directly perturbing feature statistics, rather than image pixels, to produce robust models.
Our proposed method, Adversarial Batch Normalization (AdvBN), is a single network layer that generates worst-case feature perturbations during training.
arXiv Detail & Related papers (2020-09-18T17:52:34Z)
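The core idea of perturbing feature statistics rather than pixels can be sketched as a reparameterization of a feature map's channel-wise mean and standard deviation. This is an illustrative sketch, not the exact AdvBN layer: in the actual method the deltas would be chosen adversarially (worst case within a bound) during training, while here they are simply given:

```python
import numpy as np

def perturb_feature_statistics(feat, delta_mean, delta_std):
    """Shift a feature map's channel-wise statistics (AdvBN-style sketch).

    feat: array of shape (C, H, W); delta_mean, delta_std: per-channel
    relative perturbations of shape (C,)."""
    mean = feat.mean(axis=(1, 2), keepdims=True)
    std = feat.std(axis=(1, 2), keepdims=True) + 1e-6
    normalized = (feat - mean) / std                  # zero mean, unit std
    new_std = std * (1.0 + delta_std[:, None, None])  # rescale channel spread
    new_mean = mean * (1.0 + delta_mean[:, None, None])  # shift channel mean
    return normalized * new_std + new_mean

rng = np.random.default_rng(0)
feat = rng.standard_normal((3, 8, 8))
out = perturb_feature_statistics(feat,
                                 np.array([0.1, -0.1, 0.0]),   # mean deltas
                                 np.array([0.2, 0.0, -0.2]))   # std deltas
```

Training against worst-case versions of such statistic shifts is what makes the resulting models robust to style changes that pixel-space perturbations do not capture.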
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.