Color Channel Perturbation Attacks for Fooling Convolutional Neural
Networks and A Defense Against Such Attacks
- URL: http://arxiv.org/abs/2012.14456v1
- Date: Sun, 20 Dec 2020 11:35:29 GMT
- Title: Color Channel Perturbation Attacks for Fooling Convolutional Neural
Networks and A Defense Against Such Attacks
- Authors: Jayendra Kantipudi, Shiv Ram Dubey, Soumendu Chakraborty
- Abstract summary: Convolutional Neural Networks (CNNs) have emerged as a powerful, data-dependent, hierarchical feature extraction method.
It is observed that the network overfits the training samples very easily.
We propose a Color Channel Perturbation (CCP) attack to fool the CNNs.
- Score: 16.431689066281265
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Convolutional Neural Networks (CNNs) have emerged as a very powerful, data-dependent, hierarchical feature extraction method, and they are widely used in many computer vision problems. CNNs learn the important visual features from training samples automatically, yet it is observed that the network overfits the training samples very easily. Several regularization methods have been proposed to avoid this overfitting. In spite of this, the network remains sensitive to the color distribution within the images, which existing approaches ignore. In this paper, we expose the color robustness problem of CNNs by proposing a Color Channel Perturbation (CCP) attack to fool them. In the CCP attack, new images are generated with new channels created by combining the original channels with stochastic weights. Experiments are carried out over the widely used CIFAR10, Caltech256, and TinyImageNet datasets in the image classification framework. The VGG, ResNet, and DenseNet models are used to test the impact of the proposed attack. It is observed that the performance of the CNNs degrades drastically under the proposed CCP attack, and the results show the effect of this simple attack on the robustness of trained CNN models. The results are also compared with existing CNN fooling approaches to evaluate the accuracy drop. We also propose a primary defense mechanism against this problem by augmenting the training dataset with the proposed CCP attack. State-of-the-art CNN robustness under the CCP attack is observed with the proposed solution in the experiments. The code is made publicly available at https://github.com/jayendrakantipudi/Color-Channel-Perturbation-Attack.
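The abstract describes the transform concretely: each new channel is a stochastic weighted combination of the original R, G, B channels, and the defense simply trains on such transformed images. The NumPy sketch below illustrates both ideas; the weight distribution (uniform, row-normalized), the 0-255 value range, and the augmentation probability `p` are illustrative assumptions rather than the authors' exact settings (see the linked repository for the reference implementation).

```python
import numpy as np

def ccp_attack(image: np.ndarray, rng: np.random.Generator) -> np.ndarray:
    """Apply a Color Channel Perturbation (CCP) transform to an HxWx3 image.

    Each output channel is a stochastic weighted combination of the original
    R, G, B channels, as the abstract describes. The weight distribution
    (uniform in [0, 1], rows normalized) is an illustrative assumption.
    """
    weights = rng.uniform(0.0, 1.0, size=(3, 3))      # random channel-mixing matrix
    weights /= weights.sum(axis=1, keepdims=True)     # keep pixel values in range
    flat = image.reshape(-1, 3).astype(np.float32)    # (H*W, 3) list of pixels
    mixed = flat @ weights.T                          # mix the three channels
    return mixed.reshape(image.shape).clip(0, 255).astype(image.dtype)

def ccp_augment(batch: np.ndarray, p: float, rng: np.random.Generator) -> np.ndarray:
    """Defense sketch: replace a random fraction `p` of a (N, H, W, 3) batch
    with CCP-transformed copies, i.e. train-time augmentation with the attack."""
    out = batch.copy()
    for i in range(len(out)):
        if rng.random() < p:
            out[i] = ccp_attack(out[i], rng)
    return out
```

Evaluating a trained classifier on ccp_attack-transformed test images measures the accuracy drop the paper reports; feeding ccp_augment batches during training is the augmentation-based defense.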
Related papers
- Color Equivariant Convolutional Networks [50.655443383582124]
CNNs struggle when there is a data imbalance between color variations introduced by accidental recording conditions.
We propose Color Equivariant Convolutions (CEConvs), a novel deep learning building block that enables shape feature sharing across the color spectrum.
We demonstrate the benefits of CEConvs in terms of downstream performance on various tasks and improved robustness to color changes, including train-test distribution shifts.
arXiv Detail & Related papers (2023-10-30T09:18:49Z)
- Deeply Explain CNN via Hierarchical Decomposition [75.01251659472584]
In computer vision, some attribution methods for explaining CNNs attempt to study how the intermediate features affect the network prediction.
This paper introduces a hierarchical decomposition framework to explain a CNN's decision-making process in a top-down manner.
arXiv Detail & Related papers (2022-01-23T07:56:04Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- The Mind's Eye: Visualizing Class-Agnostic Features of CNNs [92.39082696657874]
We propose an approach to visually interpret CNN features given a set of images by creating corresponding images that depict the most informative features of a specific layer.
Our method uses a dual-objective activation and distance loss, without requiring a generator network or modifications to the original model.
arXiv Detail & Related papers (2021-01-29T07:46:39Z)
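As a rough illustration of the dual-objective idea in "The Mind's Eye" entry above, the sketch below performs gradient ascent on the input to maximize a chosen layer's activation while a distance term keeps the result close to the source image; the specific losses, weighting, and optimizer are assumptions, not the paper's exact formulation.

```python
import torch

def visualize_layer(model: torch.nn.Module, layer: torch.nn.Module,
                    image: torch.Tensor, steps: int = 200,
                    lam: float = 0.05, lr: float = 0.1) -> torch.Tensor:
    """Dual-objective visualization sketch: maximize the chosen layer's
    activation while a distance penalty keeps the input close to `image`."""
    model.eval()
    acts = {}
    hook = layer.register_forward_hook(lambda m, i, o: acts.update(out=o))
    x = image.clone().requires_grad_(True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        model(x)                                   # hook captures the activation
        activation = acts["out"].norm()            # activation objective
        distance = (x - image).pow(2).mean()       # distance objective
        (-activation + lam * distance).backward()  # ascend activation, stay close
        opt.step()
    hook.remove()
    return x.detach()
```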
- Exploiting Vulnerability of Pooling in Convolutional Neural Networks by Strict Layer-Output Manipulation for Adversarial Attacks [7.540176446791261]
Convolutional neural networks (CNNs) are increasingly applied in mobile robotics, such as intelligent vehicles.
The security of CNNs in robotics applications is an important issue, for which potential adversarial attacks on CNNs are worth researching.
In this paper, we conduct adversarial attacks on CNNs from the perspective of network structure by investigating and exploiting the vulnerability of pooling.
arXiv Detail & Related papers (2020-12-21T15:18:41Z)
- Homography Estimation with Convolutional Neural Networks Under Conditions of Variance [0.0]
We analyze the performance of two recently published methods that use Convolutional Neural Networks (CNNs).
CNNs can be trained to be more robust against noise, but at a small cost to accuracy in the noiseless case.
We show that training a CNN to a specific magnitude of noise leads to a "Goldilocks Zone" with regard to the noise levels where that CNN performs best.
arXiv Detail & Related papers (2020-10-02T15:11:25Z)
- Shape Defense Against Adversarial Attacks [47.64219291655723]
Humans rely heavily on shape information to recognize objects. In contrast, convolutional neural networks (CNNs) are biased more towards texture.
Here, we explore how shape bias can be incorporated into CNNs to improve their robustness.
Two algorithms are proposed, based on the observation that edges are invariant to moderate imperceptible perturbations.
arXiv Detail & Related papers (2020-08-31T03:23:59Z)
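The "Shape Defense" entry above rests on edges being stable under imperceptible perturbations. One plausible (hypothetical) way to exploit this, sketched below, is to append an edge map as an extra input channel so a classifier can rely on shape as well as texture; the paper's two actual algorithms are not detailed in the summary and may differ.

```python
import numpy as np
from scipy import ndimage

def edge_map(image: np.ndarray) -> np.ndarray:
    """Sobel gradient magnitude of an HxWx3 image, normalized to [0, 1]."""
    gray = image.astype(np.float32).mean(axis=2)   # simple luminance proxy
    gx = ndimage.sobel(gray, axis=0)
    gy = ndimage.sobel(gray, axis=1)
    mag = np.hypot(gx, gy)
    return mag / (mag.max() + 1e-8)

def with_edge_channel(image: np.ndarray) -> np.ndarray:
    """Append the (perturbation-stable) edge map as a fourth input channel."""
    return np.dstack([image.astype(np.float32) / 255.0, edge_map(image)])
```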
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance on computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
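A minimal sketch of the feature-map smoothing idea from "Curriculum By Smoothing" above: a depthwise Gaussian (low-pass) filter is applied to intermediate feature maps and its strength annealed toward zero, so progressively more information passes as training proceeds. The kernel size and the linear schedule are assumptions for illustration, not the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel2d(sigma: float, size: int = 5) -> torch.Tensor:
    """Normalized 2D Gaussian kernel used as a low-pass (anti-aliasing) filter."""
    coords = torch.arange(size, dtype=torch.float32) - (size - 1) / 2
    g = torch.exp(-(coords ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return k / k.sum()

def smooth_features(x: torch.Tensor, sigma: float) -> torch.Tensor:
    """Low-pass filter a (N, C, H, W) feature map channel-wise (depthwise conv)."""
    if sigma <= 0:
        return x                                 # curriculum has annealed to identity
    c = x.shape[1]
    k = gaussian_kernel2d(sigma).to(x.device, x.dtype)
    weight = k.repeat(c, 1, 1, 1)                # one kernel copy per channel
    return F.conv2d(x, weight, padding=k.shape[-1] // 2, groups=c)

def sigma_schedule(epoch: int, total_epochs: int, sigma0: float = 1.0) -> float:
    """Linearly anneal filter strength so more information passes over time."""
    return sigma0 * max(0.0, 1.0 - epoch / total_epochs)
```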
- A CNN With Multi-scale Convolution for Hyperspectral Image Classification using Target-Pixel-Orientation scheme [2.094821665776961]
CNNs are a popular choice for handling hyperspectral image classification challenges.
In this paper, a novel target-patch-orientation method is proposed to train a CNN-based network.
We also introduce a hybrid 3D-CNN and 2D-CNN network architecture to implement band reduction and feature extraction.
arXiv Detail & Related papers (2020-01-30T07:45:07Z)
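The last entry mentions a hybrid 3D-CNN/2D-CNN for band reduction and feature extraction. The hypothetical PyTorch sketch below shows one way such a hybrid can be wired: a 3D convolution mixes neighboring spectral bands, after which the spectral axis is folded into channels for 2D spatial convolutions. All layer sizes here are invented for illustration and do not reflect the paper's architecture.

```python
import torch
import torch.nn as nn

class Hybrid3D2DNet(nn.Module):
    """Hypothetical hybrid: a 3D conv mixes neighboring spectral bands
    (band reduction), then 2D convs extract spatial features per patch."""
    def __init__(self, bands: int = 30, classes: int = 16):
        super().__init__()
        self.band_reduce = nn.Sequential(        # input: (N, 1, bands, H, W)
            nn.Conv3d(1, 8, kernel_size=(7, 3, 3), padding=(0, 1, 1)),
            nn.ReLU(),
        )
        reduced = bands - 6                      # spectral size after k=7, no pad
        self.spatial = nn.Sequential(            # input: (N, 8*reduced, H, W)
            nn.Conv2d(8 * reduced, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = self.band_reduce(x.unsqueeze(1))     # add a channel dimension
        f = f.flatten(1, 2)                      # fold spectral axis into channels
        f = self.spatial(f).flatten(1)
        return self.head(f)

# x = torch.randn(4, 30, 9, 9)   # (batch, bands, H, W) hyperspectral patches
# logits = Hybrid3D2DNet()(x)    # -> (4, 16)
```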
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.