Impact of Colour Variation on Robustness of Deep Neural Networks
- URL: http://arxiv.org/abs/2209.02832v2
- Date: Tue, 23 May 2023 15:43:28 GMT
- Title: Impact of Colour Variation on Robustness of Deep Neural Networks
- Authors: Chengyin Hu, Weiwen Shi
- Abstract summary: Deep neural networks (DNNs) have shown state-of-the-art performance for computer vision applications such as image classification, segmentation, and object detection.
Recent advances have shown their vulnerability to manual digital perturbations in the input data, namely adversarial attacks.
In this work, we propose a color-variation dataset built by distorting the RGB color of images in a subset of ImageNet with 27 different combinations.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep neural networks (DNNs) have shown state-of-the-art performance for
computer vision applications such as image classification, segmentation, and
object detection. However, recent advances have revealed their vulnerability to
manual digital perturbations in the input data, namely adversarial attacks. The
accuracy of a network is also significantly affected by the data distribution of
its training dataset. Distortions or perturbations of the color space of input
images generate out-of-distribution data, which makes networks more likely to
misclassify them. In this work, we propose a color-variation dataset built by
distorting the RGB color of images in a subset of ImageNet with 27 different
combinations. The aim of our work is to study the impact of color variation on
the performance of DNNs. We perform experiments with several state-of-the-art DNN
architectures on the proposed dataset, and the results show a significant
correlation between color variation and loss of accuracy. Furthermore, based on
the ResNet50 architecture, we evaluate the performance of recently proposed
robust training techniques and strategies, such as AugMix, Revisit, and Free
Normalizer, on our proposed dataset. Experimental results indicate that these
robust training techniques can improve the robustness of deep networks to color
variation.
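One plausible reading of the dataset construction above, sketched for illustration: three offsets per RGB channel give 3^3 = 27 combinations. The offset values below are hypothetical, not the paper's exact parameters.

```python
import itertools

import numpy as np


def colour_variants(image: np.ndarray, offsets=(-64, 0, 64)):
    """Yield 27 colour-distorted copies of an RGB image (H, W, 3, uint8).

    Each variant adds one offset per channel; results are clipped to
    [0, 255]. The offsets are illustrative, not the paper's parameters.
    """
    for dr, dg, db in itertools.product(offsets, repeat=3):
        # Work in a wider dtype so the shift cannot overflow uint8.
        shifted = image.astype(np.int16) + np.array([dr, dg, db], dtype=np.int16)
        yield np.clip(shifted, 0, 255).astype(np.uint8)


# Example: a tiny dummy image produces 27 distorted variants.
img = np.full((2, 2, 3), 128, dtype=np.uint8)
variants = list(colour_variants(img))  # 27 out-of-distribution copies
```

Applying every offset triple to each image in an ImageNet subset yields the 27-fold colour-variation dataset described in the abstract.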
Related papers
- Color Equivariant Convolutional Networks [50.655443383582124]
CNNs struggle when there is a data imbalance across color variations introduced by accidental recording conditions.
We propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum.
We demonstrate the benefits of CEConvs in terms of downstream performance on various tasks and improved robustness to color changes, including train-test distribution shifts.
arXiv Detail & Related papers (2023-10-30T09:18:49Z)
- Divergences in Color Perception between Deep Neural Networks and Humans [3.0315685825606633]
We develop experiments for evaluating the perceptual coherence of color embeddings in deep neural networks (DNNs).
We assess how well these algorithms predict human color similarity judgments collected via an online survey.
We compare DNN performance against an interpretable and cognitively plausible model of color perception based on wavelet decomposition.
arXiv Detail & Related papers (2023-09-11T20:26:40Z)
- Point-aware Interaction and CNN-induced Refinement Network for RGB-D Salient Object Detection [95.84616822805664]
We introduce a CNN-assisted Transformer architecture and propose a novel RGB-D SOD network with Point-aware Interaction and CNN-induced Refinement.
To alleviate the block effect and detail destruction problems naturally introduced by the Transformer, we design a CNN-induced refinement (CNNR) unit for content refinement and supplementation.
arXiv Detail & Related papers (2023-08-17T11:57:49Z)
- On the ability of CNNs to extract color invariant intensity based features for image classification [4.297070083645049]
Convolutional neural networks (CNNs) have demonstrated remarkable success in vision-related tasks.
Recent studies suggest that CNNs exhibit a bias toward texture instead of object shape in image classification tasks.
This paper investigates the ability of CNNs to adapt to different color distributions in an image while maintaining context and background.
arXiv Detail & Related papers (2023-07-13T00:36:55Z)
- Impact of Light and Shadow on Robustness of Deep Neural Networks [5.015796849425367]
Deep neural networks (DNNs) have made remarkable strides in various computer vision tasks, including image classification, segmentation, and object detection.
Recent research has revealed a vulnerability in advanced DNNs when faced with deliberate manipulations of input data, known as adversarial attacks.
We propose a brightness-variation dataset, which incorporates 24 distinct brightness levels for each image within a subset of ImageNet.
arXiv Detail & Related papers (2023-05-23T15:30:56Z)
- Influencer Detection with Dynamic Graph Neural Networks [56.1837101824783]
We investigate different dynamic Graph Neural Networks (GNNs) configurations for influencer detection.
We show that using deep multi-head attention in GNNs and encoding temporal attributes significantly improves performance.
arXiv Detail & Related papers (2022-11-15T13:00:25Z)
- Impact of Scaled Image on Robustness of Deep Neural Networks [0.0]
Scaling the raw images creates out-of-distribution data, which makes it a possible adversarial attack to fool the networks.
In this work, we propose a scaling-distortion dataset, ImageNet-CS, built by scaling a subset of the ImageNet Challenge dataset by different multiples.
arXiv Detail & Related papers (2022-09-02T08:06:58Z)
- Stereoscopic Universal Perturbations across Different Architectures and Datasets [60.021985610201156]
We study the effect of adversarial perturbations of images on deep stereo matching networks for the disparity estimation task.
We present a method to craft a single set of perturbations that, when added to any stereo image pair in a dataset, can fool a stereo network.
Our perturbations can increase D1-error (akin to fooling rate) of state-of-the-art stereo networks from 1% to as much as 87%.
arXiv Detail & Related papers (2021-12-12T02:11:31Z)
- Smart Data Representations: Impact on the Accuracy of Deep Neural Networks [0.2446672595462589]
We analyze the impact of data representations on the performance of Deep Neural Networks using energy time series forecasting.
The results show that, depending on the forecast horizon, the same data representations can have a positive or negative impact on the accuracy of Deep Neural Networks.
arXiv Detail & Related papers (2021-11-17T14:06:08Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- On Robustness and Transferability of Convolutional Neural Networks [147.71743081671508]
Modern deep convolutional networks (CNNs) are often criticized for not generalizing under distributional shifts.
We study the interplay between out-of-distribution and transfer performance of modern image classification CNNs for the first time.
We find that increasing both the training set and model sizes significantly improves distributional shift robustness.
arXiv Detail & Related papers (2020-07-16T18:39:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.