Blurring Fools the Network -- Adversarial Attacks by Feature Peak
Suppression and Gaussian Blurring
- URL: http://arxiv.org/abs/2012.11442v1
- Date: Mon, 21 Dec 2020 15:47:14 GMT
- Title: Blurring Fools the Network -- Adversarial Attacks by Feature Peak
Suppression and Gaussian Blurring
- Authors: Chenchen Zhao and Hao Li
- Abstract summary: We propose an adversarial attack demo named peak suppression (PS), which suppresses the values of peak elements in the features of the data.
Experimental results show that PS and well-designed Gaussian blurring can form adversarial attacks that completely change the classification results of a well-trained target network.
- Score: 7.540176446791261
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing pixel-level adversarial attacks on neural networks may be deficient
in real scenarios, since pixel-level changes on the data cannot be fully
delivered to the neural network after camera capture and multiple image
preprocessing steps. In contrast, in this paper, we argue from another
perspective that Gaussian blurring, a common image preprocessing technique,
can itself be aggressive in specific situations, thus exposing the network to
real-world adversarial attacks. We first propose an adversarial attack demo
named peak suppression (PS), which suppresses the values of peak elements in
the features of the data. Based on the blurring spirit of PS, we further apply
Gaussian blurring to the data to investigate its potential influence on, and
threats to, the performance of the network. Experimental results show that PS
and well-designed Gaussian blurring can form adversarial attacks that
completely change the classification results of a well-trained target network.
Given the strong physical significance and wide applications of Gaussian
blurring, the proposed approach is also capable of conducting real-world
attacks.
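To make the blur-as-attack idea concrete, below is a minimal, illustrative sketch (not the authors' released code) of the Gaussian-blurring side of the attack: it simply searches for the smallest blur strength that flips a pretrained classifier's prediction. The ResNet-18 backbone, the torchvision `gaussian_blur` helper, and the placeholder input are assumptions for illustration, not details from the paper.

```python
# Illustrative sketch only: find the weakest Gaussian blur that changes a
# classifier's prediction. Model choice and input are placeholder assumptions.
import torch
import torchvision.transforms.functional as TF
from torchvision.models import resnet18

model = resnet18(weights="IMAGENET1K_V1").eval()  # assumes torchvision >= 0.13
x = torch.rand(1, 3, 224, 224)  # placeholder; substitute a real preprocessed image

def gaussian_blur_attack(model, x, max_sigma=5.0, steps=20, kernel_size=11):
    """Return (blurred image, sigma) for the smallest sigma that flips the label."""
    with torch.no_grad():
        clean_label = model(x).argmax(dim=1)
        for i in range(1, steps + 1):
            sigma = max_sigma * i / steps
            x_blur = TF.gaussian_blur(x, kernel_size=[kernel_size, kernel_size],
                                      sigma=[sigma, sigma])
            if (model(x_blur).argmax(dim=1) != clean_label).item():
                return x_blur, sigma  # prediction flipped: blurring acted as an attack
    return None, None  # no blur in the searched range fooled the model

adv, sigma = gaussian_blur_attack(model, x)
print("prediction flipped at sigma =", sigma)
```

The peak suppression (PS) demo described in the abstract would instead take gradient steps on the input that lower the largest activations of an intermediate feature map; the blur search above is shown because it is the part with a direct physical interpretation (out-of-focus capture or preprocessing).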
Related papers
- SAIF: Sparse Adversarial and Imperceptible Attack Framework [7.025774823899217]
We propose a novel attack technique called Sparse Adversarial and Interpretable Attack Framework (SAIF)
Specifically, we design imperceptible attacks that contain low-magnitude perturbations at a small number of pixels and leverage these sparse attacks to reveal the vulnerability of classifiers.
SAIF computes highly imperceptible and interpretable adversarial examples, and outperforms state-of-the-art sparse attack methods on the ImageNet dataset.
arXiv Detail & Related papers (2022-12-14T20:28:50Z) - Meta Adversarial Perturbations [66.43754467275967]
We show the existence of a meta adversarial perturbation (MAP)
MAP causes natural images to be misclassified with high probability after the perturbation is refined through only a one-step gradient ascent update.
We show that these perturbations are not only image-agnostic, but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
arXiv Detail & Related papers (2021-11-19T16:01:45Z) - Hiding Images into Images with Real-world Robustness [21.328984859163956]
We introduce a generative network based method for hiding images into images while assuring high-quality extraction.
An embedding network is sequentially concatenated with an attack layer, a decoupling network, and an image extraction network.
We are the first to robustly hide three secret images.
arXiv Detail & Related papers (2021-10-12T02:20:34Z) - Sparse and Imperceptible Adversarial Attack via a Homotopy Algorithm [93.80082636284922]
Sparse adversarial attacks can fool deep neural networks (DNNs) by only perturbing a few pixels.
Recent efforts combine the sparsity constraint with an additional $\ell_\infty$ constraint on perturbation magnitudes.
We propose a homotopy algorithm that jointly tackles the sparsity constraint and the perturbation bound in one unified framework.
arXiv Detail & Related papers (2021-06-10T20:11:36Z) - Adversarial Imaging Pipelines [28.178120782659878]
We develop an attack that deceives a specific camera ISP while leaving others intact.
We validate the proposed method using recent state-of-the-art automotive hardware ISPs.
arXiv Detail & Related papers (2021-02-07T06:10:54Z) - PICA: A Pixel Correlation-based Attentional Black-box Adversarial Attack [37.15301296824337]
We propose a pixel correlation-based attentional black-box adversarial attack, termed PICA.
PICA is more efficient to generate high-resolution adversarial examples compared with the existing black-box attacks.
arXiv Detail & Related papers (2021-01-19T09:53:52Z) - Patch-wise++ Perturbation for Adversarial Targeted Attacks [132.58673733817838]
We propose a patch-wise iterative method (PIM) aimed at crafting adversarial examples with high transferability.
Specifically, we introduce an amplification factor to the step size in each iteration, and one pixel's overall gradient overflowing the $\epsilon$-constraint is properly assigned to its surrounding regions.
Compared with the current state-of-the-art attack methods, we significantly improve the success rate by 35.9% for defense models and 32.7% for normally trained models.
arXiv Detail & Related papers (2020-12-31T08:40:42Z) - Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z) - Towards Achieving Adversarial Robustness by Enforcing Feature
Consistency Across Bit Planes [51.31334977346847]
We train networks to form coarse impressions based on the information in higher bit planes, and use the lower bit planes only to refine their prediction.
We demonstrate that, by imposing consistency on the representations learned across differently quantized images, the adversarial robustness of networks improves significantly.
arXiv Detail & Related papers (2020-04-01T09:31:10Z) - Self-Supervised Linear Motion Deblurring [112.75317069916579]
Deep convolutional neural networks are state-of-the-art for image deblurring.
We present a differentiable reblur model for self-supervised motion deblurring.
Our experiments demonstrate that self-supervised single image deblurring is really feasible.
arXiv Detail & Related papers (2020-02-10T20:15:21Z) - Watch out! Motion is Blurring the Vision of Your Deep Neural Networks [34.51270823371404]
State-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples with additive, random-noise-like perturbations.
We propose ABBA, a novel adversarial attack method that generates visually natural motion-blurred adversarial examples (a simple motion-blur sketch follows this list).
A comprehensive evaluation on the NeurIPS'17 adversarial competition dataset demonstrates the effectiveness of ABBA.
arXiv Detail & Related papers (2020-02-10T02:33:08Z)
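Since the last entry above (ABBA) also treats blur as the attack vector, here is a similarly minimal sketch: a fixed linear motion-blur kernel applied as a depthwise convolution, followed by a check of whether the prediction changes. This is not ABBA's optimized blur; the kernel construction, and `model` and `x` (as in the Gaussian-blur sketch above), are assumptions for illustration.

```python
# Illustrative sketch only (not the ABBA method): linear motion blur via a
# depthwise convolution, then test whether the classifier's prediction flips.
import torch
import torch.nn.functional as F

def linear_motion_kernel(length=15, horizontal=True):
    """A normalized kernel that averages `length` pixels along one axis."""
    k = torch.zeros(length, length)
    if horizontal:
        k[length // 2, :] = 1.0
    else:
        k[:, length // 2] = 1.0
    return k / k.sum()

def motion_blur(x, length=15, horizontal=True):
    """Blur each RGB channel of x (shape (1, 3, H, W)) with the same kernel."""
    k = linear_motion_kernel(length, horizontal).to(x.dtype)
    weight = k.repeat(3, 1, 1, 1)  # one copy of the kernel per channel
    return F.conv2d(x, weight, padding=length // 2, groups=3)

def motion_blur_flips_prediction(model, x, length=15):
    with torch.no_grad():
        clean = model(x).argmax(dim=1)
        blurred = model(motion_blur(x, length)).argmax(dim=1)
    return bool((clean != blurred).item())
```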
This list is automatically generated from the titles and abstracts of the papers on this site.