Shape Defense Against Adversarial Attacks
- URL: http://arxiv.org/abs/2008.13336v3
- Date: Mon, 6 Dec 2021 20:12:15 GMT
- Title: Shape Defense Against Adversarial Attacks
- Authors: Ali Borji
- Abstract summary: Humans rely heavily on shape information to recognize objects. Conversely, convolutional neural networks (CNNs) are biased more towards texture.
Here, we explore how shape bias can be incorporated into CNNs to improve their robustness.
Two algorithms are proposed, based on the observation that edges are invariant to moderate imperceptible perturbations.
- Score: 47.64219291655723
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans rely heavily on shape information to recognize objects. Conversely,
convolutional neural networks (CNNs) are biased more towards texture. This is
perhaps the main reason why CNNs are vulnerable to adversarial examples. Here,
we explore how shape bias can be incorporated into CNNs to improve their
robustness. Two algorithms are proposed, based on the observation that edges
are invariant to moderate imperceptible perturbations. In the first one, a
classifier is adversarially trained on images with the edge map as an
additional channel. At inference time, the edge map is recomputed and
concatenated to the image. In the second algorithm, a conditional GAN is
trained to translate the edge maps, from clean and/or perturbed images, into
clean images. Inference is done over the generated image corresponding to the
input's edge map. Extensive experiments over 10 datasets demonstrate the
effectiveness of the proposed algorithms against FGSM and $\ell_\infty$ PGD-40
attacks. Further, we show that a) edge information can also benefit other
adversarial training methods, and b) CNNs trained on edge-augmented inputs are
more robust against natural image corruptions such as motion blur, impulse
noise and JPEG compression, than CNNs trained solely on RGB images. From a
broader perspective, our study suggests that CNNs do not adequately account for
image structures that are crucial for robustness. Code is available
at: https://github.com/aliborji/Shapedefense.git.
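As a rough illustration of the first algorithm, the sketch below recomputes an edge map on every forward pass and feeds it to the classifier as a fourth input channel. This is a minimal sketch assuming PyTorch and a simple Sobel edge extractor; the paper's actual edge detector, network architecture, and adversarial-training loop are not reproduced here.

```python
# Minimal sketch of the edge-as-extra-channel idea (algorithm 1), assuming PyTorch
# and a Sobel edge extractor as a stand-in for the paper's edge detector.
import torch
import torch.nn as nn
import torch.nn.functional as F

def sobel_edges(x: torch.Tensor) -> torch.Tensor:
    """Return a single-channel edge-magnitude map for a batch of RGB images (B,3,H,W)."""
    gray = x.mean(dim=1, keepdim=True)                       # crude RGB -> grayscale
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]]).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(gray, kx.to(x), padding=1)
    gy = F.conv2d(gray, ky.to(x), padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-12)

class EdgeAugmentedClassifier(nn.Module):
    """Toy classifier that concatenates a recomputed edge map as a 4th input channel."""
    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, num_classes),
        )

    def forward(self, x):
        edges = sobel_edges(x)                               # recomputed on every forward pass
        return self.backbone(torch.cat([x, edges], dim=1))

# usage
model = EdgeAugmentedClassifier()
logits = model(torch.rand(8, 3, 32, 32))
```

Because the edge map is recomputed inside `forward`, the same code path applies at training time and at inference time, mirroring the "recomputed and concatenated" step in the abstract.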
Related papers
- Decoupled Mixup for Generalized Visual Recognition [71.13734761715472]
We propose a novel "Decoupled-Mixup" method to train CNN models for visual recognition.
Our method decouples each image into discriminative and noise-prone regions, and then heterogeneously combines these regions to train CNN models.
Experimental results show the high generalization performance of our method on test data composed of unseen contexts.
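For intuition only, here is a hypothetical sketch of such a decoupled mixing step. It assumes the discriminative/noise-prone split is supplied as a binary mask (e.g., from a saliency method) and that the two kinds of regions are mixed with different ratios; this is one plausible reading of the summary above, not the authors' exact formulation.

```python
# Hypothetical decoupled-mixup step: `mask_a` marks discriminative pixels (assumed
# given); discriminative and noise-prone regions use different mixing ratios.
import numpy as np

def decoupled_mix(img_a, img_b, mask_a, lam_disc, lam_noise):
    """Mix two (H,W,3) images with separate ratios inside/outside the discriminative mask."""
    mask = mask_a[..., None].astype(float)                   # (H, W, 1), broadcast over channels
    disc = lam_disc * img_a + (1 - lam_disc) * img_b         # mix used on discriminative pixels
    noise = lam_noise * img_a + (1 - lam_noise) * img_b      # mix used on noise-prone pixels
    return mask * disc + (1 - mask) * noise

rng = np.random.default_rng(0)
img_a, img_b = rng.random((2, 224, 224, 3))
mask_a = rng.random((224, 224)) > 0.5                        # stand-in for a real saliency mask
mixed = decoupled_mix(img_a, img_b, mask_a, lam_disc=0.7, lam_noise=0.3)
```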
arXiv Detail & Related papers (2022-10-26T15:21:39Z)
- Early-exit deep neural networks for distorted images: providing an efficient edge offloading [69.43216268165402]
Edge offloading for deep neural networks (DNNs) can be adaptive to the input's complexity.
We introduce expert side branches, each trained on a particular distortion type, to improve robustness against image distortion.
This approach increases the estimated accuracy on the edge, improving the offloading decisions.
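A hedged sketch of the offloading decision is given below: a lightweight side branch classifies the input on the edge device and offloads to the cloud only when its confidence falls below a threshold. The branch architecture and threshold are illustrative assumptions, not the paper's settings.

```python
# Illustrative early-exit branch with a confidence-based offloading decision.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyExitBranch(nn.Module):
    """Small side branch attached to intermediate features of an edge-side backbone."""
    def __init__(self, in_channels: int, num_classes: int):
        super().__init__()
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                  nn.Linear(in_channels, num_classes))

    def forward(self, features):
        return self.head(features)

def classify_or_offload(branch, features, threshold=0.8):
    """Return (prediction, offload_flag); offload when the branch is not confident enough."""
    probs = F.softmax(branch(features), dim=1)
    confidence, prediction = probs.max(dim=1)
    return prediction, confidence < threshold

branch = EarlyExitBranch(in_channels=64, num_classes=10)
feats = torch.rand(4, 64, 16, 16)        # intermediate features from the edge-side backbone
pred, offload = classify_or_offload(branch, feats)
```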
arXiv Detail & Related papers (2021-08-20T19:52:55Z)
- BreakingBED -- Breaking Binary and Efficient Deep Neural Networks by Adversarial Attacks [65.2021953284622]
We study the robustness of CNNs against white-box and black-box adversarial attacks.
Results are shown for distilled CNNs, agent-based state-of-the-art pruned models, and binarized neural networks.
arXiv Detail & Related papers (2021-03-14T20:43:19Z)
- Benford's law: what does it say on adversarial images? [0.0]
We investigate statistical differences between natural images and adversarial ones.
We show that, after a suitable image transformation and for a class of adversarial attacks, the distribution of the leading digit of the pixel values in adversarial images deviates from Benford's law.
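The leading-digit test itself is easy to sketch: compute the first significant digit of the (transformed) pixel values and compare their empirical distribution with Benford's law, P(d) = log10(1 + 1/d). The gradient-magnitude transform below is only a placeholder, since the specific transformation used in the paper is not stated here.

```python
# Compare the first-digit distribution of transformed pixel values with Benford's law.
import numpy as np

def leading_digit_distribution(values: np.ndarray) -> np.ndarray:
    """Normalized histogram of leading digits 1-9 for the nonzero entries of `values`."""
    v = np.abs(values[values != 0]).astype(float)
    digits = (v / 10 ** np.floor(np.log10(v))).astype(int)   # first significant digit
    counts = np.bincount(digits, minlength=10)[1:10]
    return counts / counts.sum()

benford = np.log10(1 + 1 / np.arange(1, 10))                 # P(d) = log10(1 + 1/d)

image = np.random.rand(224, 224)                             # stand-in for a real image
gx, gy = np.gradient(image)                                  # placeholder image transformation
observed = leading_digit_distribution(np.hypot(gx, gy))
deviation = np.abs(observed - benford).sum()                 # gap between the two distributions
print(deviation)
```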
arXiv Detail & Related papers (2021-02-09T02:50:29Z)
- Color Channel Perturbation Attacks for Fooling Convolutional Neural Networks and A Defense Against Such Attacks [16.431689066281265]
Convolutional Neural Networks (CNNs) have emerged as a powerful data-dependent hierarchical feature extraction method.
It is observed that the network overfits the training samples very easily.
We propose a Color Channel Perturbation (CCP) attack to fool the CNNs.
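As a loose illustration, the sketch below forms each output color channel as a random combination of the input's R, G and B channels, which matches the spirit of a color-channel perturbation; the paper's exact CCP formulation may differ.

```python
# Generic color-channel perturbation: each output channel is a random combination
# of the original R, G, B channels. Illustrative stand-in, not the paper's exact attack.
import numpy as np

def channel_perturbation(image, rng):
    """Randomly recombine the color channels of an (H,W,3) image with values in [0,1]."""
    weights = rng.random((3, 3))
    weights /= weights.sum(axis=1, keepdims=True)   # row-normalized, keeps outputs in range
    return np.clip(image @ weights.T, 0.0, 1.0)

rng = np.random.default_rng(0)
image = rng.random((224, 224, 3))
perturbed = channel_perturbation(image, rng)
```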
arXiv Detail & Related papers (2020-12-20T11:35:29Z)
- GreedyFool: Distortion-Aware Sparse Adversarial Attack [138.55076781355206]
Modern deep neural networks (DNNs) are vulnerable to adversarial samples.
Sparse adversarial samples can fool the target model by only perturbing a few pixels.
We propose a novel two-stage distortion-aware greedy method dubbed "GreedyFool".
arXiv Detail & Related papers (2020-10-26T17:59:07Z)
- Increasing the Robustness of Semantic Segmentation Models with Painting-by-Numbers [39.95214171175713]
We build upon an insight from image classification that robustness can be improved by increasing the network's bias towards object shapes.
Our basic idea is to alpha-blend a portion of the RGB training images with fake images, in which each class label is given a fixed, randomly chosen color.
We demonstrate the effectiveness of our training schema for DeepLabv3+ with various network backbones, MobileNet-V2, ResNets, and Xception, and evaluate it on the Cityscapes dataset.
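The augmentation described above can be sketched directly from the summary: render the label map with one fixed, randomly chosen color per class and alpha-blend it with the RGB image. The names and the alpha range below are illustrative assumptions.

```python
# Sketch of the alpha-blending augmentation: blend an RGB image with a fake image
# in which every class label is painted in a fixed, randomly chosen color.
import numpy as np

def painting_by_numbers(image, label_map, palette, alpha):
    """Blend `image` (H,W,3 in [0,1]) with a color-coded rendering of `label_map` (H,W)."""
    fake = palette[label_map]                      # (H, W, 3): one fixed color per class id
    return (1 - alpha) * image + alpha * fake

num_classes = 19                                   # e.g., Cityscapes
rng = np.random.default_rng(0)
palette = rng.random((num_classes, 3))             # fixed, randomly chosen class colors

image = rng.random((256, 512, 3))
label_map = rng.integers(0, num_classes, (256, 512))
augmented = painting_by_numbers(image, label_map, palette, alpha=rng.uniform(0.3, 0.7))
```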
arXiv Detail & Related papers (2020-10-12T07:42:39Z)
- Homography Estimation with Convolutional Neural Networks Under Conditions of Variance [0.0]
We analyze the performance of two recently published methods using Convolutional Neural Networks (CNNs).
CNNs can be trained to be more robust against noise, but at a small cost to accuracy in the noiseless case.
We show that training a CNN to a specific magnitude of noise leads to a "Goldilocks Zone" with regard to the noise levels where that CNN performs best.
arXiv Detail & Related papers (2020-10-02T15:11:25Z)
- Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.