Color Equivariant Convolutional Networks
- URL: http://arxiv.org/abs/2310.19368v1
- Date: Mon, 30 Oct 2023 09:18:49 GMT
- Title: Color Equivariant Convolutional Networks
- Authors: Attila Lengyel, Ombretta Strafforello, Robert-Jan Bruintjes, Alexander
Gielisse, Jan van Gemert
- Abstract summary: CNNs struggle if there is data imbalance between color variations introduced by accidental recording conditions.
We propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum.
We demonstrate the benefits of CEConvs in terms of downstream performance on various tasks and improved robustness to color changes, including train-test distribution shifts.
- Score: 50.655443383582124
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Color is a crucial visual cue readily exploited by Convolutional Neural
Networks (CNNs) for object recognition. However, CNNs struggle if there is data
imbalance between color variations introduced by accidental recording
conditions. Color invariance addresses this issue but does so at the cost of
removing all color information, which sacrifices discriminative power. In this
paper, we propose Color Equivariant Convolutions (CEConvs), a novel deep
learning building block that enables shape feature sharing across the color
spectrum while retaining important color information. We extend the notion of
equivariance from geometric to photometric transformations by incorporating
parameter sharing over hue-shifts in a neural network. We demonstrate the
benefits of CEConvs in terms of downstream performance on various tasks and
improved robustness to color changes, including train-test distribution shifts.
Our approach can be seamlessly integrated into existing architectures, such as
ResNets, and offers a promising solution for addressing color-based domain
shifts in CNNs.
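To make the idea of parameter sharing over hue shifts concrete, the sketch below shows a toy hue-equivariant "lifting" convolution. This is not the authors' CEConv implementation; it is a minimal PyTorch illustration that assumes a cyclic group of three 120-degree hue shifts, approximated by cyclic permutations of the RGB channels, and applies one shared filter bank under every rotation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HueEquivariantLift(nn.Module):
    """Toy hue-shift lifting convolution (illustrative only, not the paper's CEConv).

    One shared filter bank is applied under a small cyclic group of hue
    rotations, approximated here as cyclic permutations of the RGB channels
    (i.e. 120-degree hue shifts).
    """

    def __init__(self, out_channels: int, kernel_size: int = 3):
        super().__init__()
        self.rotations = 3  # identity, +120 deg, +240 deg (assumed discretisation)
        # Shared weights: (out_channels, 3 input channels, k, k)
        self.weight = nn.Parameter(
            0.01 * torch.randn(out_channels, 3, kernel_size, kernel_size)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 3, H, W)
        maps = []
        for r in range(self.rotations):
            # Rotate the colour axis of the shared kernel instead of the input.
            w = torch.roll(self.weight, shifts=r, dims=1)
            maps.append(F.conv2d(x, w, padding=self.weight.shape[-1] // 2))
        # Stack along a new hue-group axis: (batch, out_channels, rotations, H, W)
        return torch.stack(maps, dim=2)


# Usage: a hue shift of the input (here an RGB permutation) cyclically shifts
# the group axis of the output, which is the equivariance property.
layer = HueEquivariantLift(out_channels=8)
img = torch.rand(2, 3, 32, 32)
out = layer(img)                                   # (2, 8, 3, 32, 32)
out_shifted = layer(torch.roll(img, shifts=1, dims=1))
print(torch.allclose(out_shifted, torch.roll(out, shifts=1, dims=2), atol=1e-5))
```

Rolling the filter weights rather than duplicating them keeps the parameter count fixed while producing one feature map per hue rotation, so shape features learned for one color are reused across the color spectrum.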
Related papers
- Learning Color Equivariant Representations [1.9594704501292781]
We introduce group convolutional neural networks (GCNNs) equivariant to color variation.
GCNNs have been designed for a variety of geometric transformations, from 2D and 3D rotation groups to semigroups such as scale.
arXiv Detail & Related papers (2024-06-13T21:02:03Z) - Revisiting Data Augmentation for Rotational Invariance in Convolutional
Neural Networks [0.29127054707887967]
We investigate how best to include rotational invariance in a CNN for image classification.
Our experiments show that networks trained with data augmentation alone can classify rotated images nearly as well as in the normal unrotated case.
arXiv Detail & Related papers (2023-10-12T15:53:24Z) - Point-aware Interaction and CNN-induced Refinement Network for RGB-D
Salient Object Detection [95.84616822805664]
We introduce a CNN-assisted Transformer architecture and propose a novel RGB-D SOD network with Point-aware Interaction and CNN-induced Refinement.
To alleviate the block effect and detail destruction problems naturally introduced by the Transformer, we design a CNN-induced refinement (CNNR) unit for content refinement and supplementation.
arXiv Detail & Related papers (2023-08-17T11:57:49Z) - On the ability of CNNs to extract color invariant intensity based
features for image classification [4.297070083645049]
Convolutional neural networks (CNNs) have demonstrated remarkable success in vision-related tasks.
Recent studies suggest that CNNs exhibit a bias toward texture instead of object shape in image classification tasks.
This paper investigates the ability of CNNs to adapt to different color distributions in an image while maintaining context and background.
arXiv Detail & Related papers (2023-07-13T00:36:55Z) - Impact of Colour Variation on Robustness of Deep Neural Networks [0.0]
Deep neural networks (DNNs) have shown state-of-the-art performance for computer vision applications like image classification, segmentation and object detection.
Recent work has shown their vulnerability to manual digital perturbations of the input data, namely adversarial attacks.
In this work, we propose a color-variation dataset created by distorting the RGB colors of a subset of ImageNet images with 27 different combinations.
arXiv Detail & Related papers (2022-09-02T08:16:04Z) - Learning to Structure an Image with Few Colors and Beyond [59.34619548026885]
We propose a color quantization network, ColorCNN, which learns to structure an image in limited color spaces by minimizing the classification loss.
We introduce ColorCNN+, which supports multiple color space size configurations, and addresses the previous issues of poor recognition accuracy and undesirable visual fidelity under large color spaces.
For potential applications, we show that ColorCNNs can be used as image compression methods for network recognition.
arXiv Detail & Related papers (2022-08-17T17:59:15Z) - Neural Color Operators for Sequential Image Retouching [62.99812889713773]
We propose a novel image retouching method by modeling the retouching process as performing a sequence of newly introduced trainable neural color operators.
The neural color operator mimics the behavior of traditional color operators and learns pixelwise color transformation while its strength is controlled by a scalar.
Our method consistently achieves the best results compared with state-of-the-art methods in both quantitative measures and visual quality.
arXiv Detail & Related papers (2022-07-17T05:33:19Z) - Assessing The Importance Of Colours For CNNs In Object Recognition [70.70151719764021]
Convolutional neural networks (CNNs) have been shown to exhibit conflicting properties.
We demonstrate that CNNs often rely heavily on colour information while making a prediction.
We evaluate a model trained with congruent images on congruent, greyscale, and incongruent images.
arXiv Detail & Related papers (2020-12-12T22:55:06Z) - What Does CNN Shift Invariance Look Like? A Visualization Study [87.79405274610681]
Feature extraction with convolutional neural networks (CNNs) is a popular method to represent images for machine learning tasks.
We focus on measuring and visualizing the shift invariance of extracted features from popular off-the-shelf CNN models.
We conclude that features extracted from popular networks are not globally invariant, and that biases and artifacts exist within this variance.
arXiv Detail & Related papers (2020-11-09T01:16:30Z) - How Convolutional Neural Network Architecture Biases Learned Opponency
and Colour Tuning [1.0742675209112622]
Recent work suggests that changing Convolutional Neural Network (CNN) architecture by introducing a bottleneck in the second layer can yield changes in learned function.
Fully understanding this relationship requires a way of quantitatively comparing trained networks.
We propose an approach to obtaining spatial and colour tuning curves for convolutional neurons.
arXiv Detail & Related papers (2020-10-06T11:33:48Z)