Delving into the pixels of adversarial samples
- URL: http://arxiv.org/abs/2106.10996v1
- Date: Mon, 21 Jun 2021 11:28:06 GMT
- Title: Delving into the pixels of adversarial samples
- Authors: Blerta Lindqvist
- Abstract summary: Knowing how image pixels are affected by adversarial attacks has the potential to lead us to better adversarial defenses.
We consider several ImageNet architectures, InceptionV3, VGG19 and ResNet50, as well as several strong attacks.
In particular, input pre-processing plays a previously overlooked role in the effect that attacks have on pixels.
- Score: 0.10152838128195464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite extensive research into adversarial attacks, we do not know how
adversarial attacks affect image pixels. Knowing how image pixels are affected
by adversarial attacks has the potential to lead us to better adversarial
defenses. Motivated by instances that we find where strong attacks do not
transfer, we delve into adversarial examples at pixel level to scrutinize how
adversarial attacks affect image pixel values. We consider several ImageNet
architectures, InceptionV3, VGG19 and ResNet50, as well as several strong
attacks. We find that attacks can have different effects at pixel level
depending on classifier architecture. In particular, input pre-processing plays
a previously overlooked role in the effect that attacks have on pixels. Based
on the insights of pixel-level examination, we find new ways to detect some of
the strongest current attacks.
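As a concrete illustration of why input pre-processing matters here, below is a minimal Python sketch (not the paper's code) that compares raw pixel changes with the changes each Keras model actually sees after its own pre-processing; the clean/adversarial pair is simulated with random data.

```python
import numpy as np
import tensorflow as tf

# Architecture-specific Keras pre-processing. InceptionV3 rescales pixels to
# [-1, 1]; VGG19 and ResNet50 use "caffe" mode (RGB -> BGR plus per-channel
# ImageNet mean subtraction), so the same pixel-space perturbation enters
# each model at a different scale and channel order.
PREPROCESS = {
    "inception_v3": tf.keras.applications.inception_v3.preprocess_input,
    "vgg19": tf.keras.applications.vgg19.preprocess_input,
    "resnet50": tf.keras.applications.resnet50.preprocess_input,
}

def pixel_delta_stats(x_clean, x_adv):
    """Summarize how an attack changed raw pixel values (both uint8 RGB)."""
    delta = x_adv.astype(np.float32) - x_clean.astype(np.float32)
    return {
        "mean_abs_change": float(np.abs(delta).mean()),
        "max_abs_change": float(np.abs(delta).max()),
        "fraction_changed": float((delta != 0).mean()),
    }

# Toy example with a random "clean" image and a small random perturbation.
x_clean = np.random.randint(0, 256, (1, 224, 224, 3), dtype=np.uint8)
x_adv = np.clip(x_clean.astype(np.int32)
                + np.random.randint(-8, 9, x_clean.shape), 0, 255).astype(np.uint8)

print("raw pixels:", pixel_delta_stats(x_clean, x_adv))
for name, fn in PREPROCESS.items():
    d = fn(x_adv.astype(np.float32)) - fn(x_clean.astype(np.float32))
    print(name, "model-input mean |delta|:", float(np.abs(d).mean()))
```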
Related papers
- Superpixel Attack: Enhancing Black-box Adversarial Attack with Image-driven Division Areas [1.1417805445492082]
Adversarial attacks are used to identify small perturbations that can lead to misclassifications.
A promising approach to black-box adversarial attacks is to repeat the process of extracting a specific image area and changing the perturbations added to it.
We propose applying superpixels instead, which achieve a good balance between color variance and compactness.
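A minimal sketch of the superpixel-division idea, using scikit-image's SLIC; the image, parameter values, and perturbation helper are illustrative stand-ins, not the authors' implementation.

```python
import numpy as np
from skimage.segmentation import slic

image = np.random.rand(224, 224, 3)  # stand-in for a real input image

# SLIC trades color homogeneity against spatial compactness via the
# `compactness` parameter -- the balance the summary refers to.
segments = slic(image, n_segments=100, compactness=10.0, start_label=0)

def perturb_superpixel(img, segments, label, eps=0.1):
    """Add a uniform perturbation to every pixel in one superpixel,
    as a black-box attack would when searching region by region."""
    out = img.copy()
    out[segments == label] = np.clip(out[segments == label] + eps, 0.0, 1.0)
    return out

candidate = perturb_superpixel(image, segments, label=0)
```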
arXiv Detail & Related papers (2025-11-29T05:28:52Z)
- Content-based Unrestricted Adversarial Attack [53.181920529225906]
We propose a novel unrestricted attack framework called Content-based Unrestricted Adversarial Attack.
By leveraging a low-dimensional manifold that represents natural images, we map the images onto the manifold and optimize them along its adversarial direction.
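A toy sketch of the latent-manifold idea, with untrained stand-in networks in place of the paper's learned natural-image manifold: a latent code is optimized so the decoded image raises the classifier's loss while staying on the decoder's manifold.

```python
import tensorflow as tf

# Untrained stand-ins; a real attack would use a trained generative model
# and a trained classifier.
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(8 * 8 * 3, activation="sigmoid"),
    tf.keras.layers.Reshape((8, 8, 3)),
])
classifier = tf.keras.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(10),
])

z = tf.Variable(tf.random.normal((1, 16)))  # latent code on the manifold
true_label = tf.constant([3])
opt = tf.keras.optimizers.SGD(learning_rate=0.1)

for _ in range(10):
    with tf.GradientTape() as tape:
        image = decoder(z)
        # Move z along the adversarial direction: maximize the true-class
        # loss by minimizing its negation.
        loss = -tf.keras.losses.sparse_categorical_crossentropy(
            true_label, classifier(image), from_logits=True)
    opt.apply_gradients([(tape.gradient(loss, z), z)])

adversarial_image = decoder(z)  # stays on the decoder's output manifold
```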
arXiv Detail & Related papers (2023-05-18T02:57:43Z)
- Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves the state-of-the-art attacking success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z)
- Zero-Query Transfer Attacks on Context-Aware Object Detectors [95.18656036716972]
Adversarial attacks perturb images such that a deep neural network produces incorrect classification results.
A promising approach to defend against adversarial attacks on natural multi-object scenes is to impose a context-consistency check.
We present the first approach for generating context-consistent adversarial attacks that can evade the context-consistency check.
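A toy version of such a context-consistency check (illustrative only; the co-occurrence table and threshold are made up): a detection set whose class pairs rarely co-occur in natural scenes is flagged as suspicious.

```python
from itertools import combinations

# Hypothetical co-occurrence scores, as if estimated from training scenes.
CO_OCCUR = {
    ("car", "stop sign"): 0.9,
    ("car", "person"): 0.8,
    ("stop sign", "toaster"): 0.01,
}

def context_consistent(labels, threshold=0.05):
    """Return False if any detected class pair is an implausible combination."""
    for a, b in combinations(sorted(set(labels)), 2):
        score = CO_OCCUR.get((a, b), CO_OCCUR.get((b, a), threshold))
        if score < threshold:
            return False
    return True

print(context_consistent(["car", "person", "stop sign"]))  # True
print(context_consistent(["toaster", "stop sign"]))        # False
```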
arXiv Detail & Related papers (2022-03-29T04:33:06Z)
- Parallel Rectangle Flip Attack: A Query-based Black-box Attack against Object Detection [89.08832589750003]
We propose a Parallel Rectangle Flip Attack (PRFA) via random search to avoid sub-optimal detection near the attacked region.
Our method can effectively and efficiently attack various popular object detectors, including anchor-based and anchor-free, and generate transferable adversarial examples.
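A simplified, sequential sketch of the rectangle-flip search (the actual PRFA runs rectangles in parallel and queries a real detector; the loss function here is a stand-in).

```python
import numpy as np

rng = np.random.default_rng(0)

def rectangle_flip_search(x, delta, loss_fn, steps=200, max_side=32):
    """Random search: repeatedly flip the perturbation's sign inside a
    random rectangle and keep the flip if it increases the attack loss."""
    h, w = x.shape[:2]
    best = loss_fn(np.clip(x + delta, 0.0, 1.0))
    for _ in range(steps):
        rh, rw = rng.integers(1, max_side, size=2)
        top, left = rng.integers(0, h - rh), rng.integers(0, w - rw)
        trial = delta.copy()
        trial[top:top + rh, left:left + rw] *= -1.0  # flip the rectangle
        score = loss_fn(np.clip(x + trial, 0.0, 1.0))
        if score > best:
            best, delta = score, trial
    return delta

# Toy usage with a stand-in loss (a real attack would query the detector).
x = rng.random((64, 64, 3))
delta = 0.05 * rng.choice([-1.0, 1.0], size=x.shape)
adv_delta = rectangle_flip_search(x, delta, loss_fn=lambda img: float(img.mean()))
```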
arXiv Detail & Related papers (2022-01-22T06:00:17Z)
- Chromatic and spatial analysis of one-pixel attacks against an image classifier [0.0]
This research presents ways to analyze chromatic and spatial distributions of one-pixel attacks.
We show that the more effective attacks change the color of the pixel more, and that the successful attacks are situated at the center of the images.
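A small sketch of the kind of measurements involved, assuming a hypothetical record format for successful one-pixel attacks: per-attack color shift and distance of the attacked pixel from the image center.

```python
import numpy as np

def analyze_one_pixel_attacks(attacks, height=32, width=32):
    """`attacks` is a list of dicts with keys x, y, old_rgb, new_rgb
    (hypothetical record format, not from the paper)."""
    color_shifts, center_dists = [], []
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    for a in attacks:
        old = np.asarray(a["old_rgb"], dtype=np.float32)
        new = np.asarray(a["new_rgb"], dtype=np.float32)
        color_shifts.append(float(np.linalg.norm(new - old)))   # chromatic
        center_dists.append(float(np.hypot(a["y"] - cy, a["x"] - cx)))  # spatial
    return np.mean(color_shifts), np.mean(center_dists)

mean_shift, mean_dist = analyze_one_pixel_attacks(
    [{"x": 15, "y": 16, "old_rgb": (10, 10, 10), "new_rgb": (200, 40, 90)}]
)
```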
arXiv Detail & Related papers (2021-05-28T12:21:58Z)
- Learning to Attack with Fewer Pixels: A Probabilistic Post-hoc Framework for Refining Arbitrary Dense Adversarial Attacks [21.349059923635515]
Deep neural network image classifiers are reported to be susceptible to adversarial evasion attacks.
We propose a probabilistic post-hoc framework that refines given dense attacks by significantly reducing the number of perturbed pixels.
Our framework performs adversarial attacks much faster than existing sparse attacks.
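A greedy sketch of the refinement idea (the paper's framework is probabilistic; this simpler heuristic just keeps the largest-magnitude perturbed pixels, and `is_adversarial` is a hypothetical callback that queries the classifier).

```python
import numpy as np

def sparsify_dense_attack(x, x_adv, is_adversarial,
                          keep_fracs=(0.01, 0.05, 0.1, 0.25, 0.5, 1.0)):
    """Keep increasing fractions of the strongest perturbed pixels until
    the sparsified example still fools the classifier."""
    delta = x_adv.astype(np.float32) - x.astype(np.float32)
    magnitude = np.abs(delta).sum(axis=-1)        # per-pixel L1 change
    order = np.argsort(magnitude.ravel())[::-1]   # strongest pixels first
    for frac in keep_fracs:
        k = max(1, int(frac * order.size))
        mask = np.zeros(magnitude.size, dtype=bool)
        mask[order[:k]] = True
        sparse = x.astype(np.float32) + \
            delta * mask.reshape(magnitude.shape)[..., None]
        if is_adversarial(sparse):
            return sparse, frac   # fooled with only `frac` of the pixels
    return x_adv, 1.0             # fall back to the dense attack
```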
arXiv Detail & Related papers (2020-10-13T02:51:10Z)
- Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z)
- Towards Feature Space Adversarial Attack [18.874224858723494]
We propose a new adversarial attack on Deep Neural Networks for image classification.
Our attack focuses on perturbing abstract features, more specifically, features that denote styles.
We show that our attack can generate adversarial samples that are more natural-looking than the state-of-the-art attacks.
arXiv Detail & Related papers (2020-04-26T13:56:31Z)
- A Black-box Adversarial Attack Strategy with Adjustable Sparsity and Generalizability for Deep Image Classifiers [16.951363298896638]
Black-box adversarial perturbations are more practical for real-world applications.
We propose the DEceit algorithm for constructing effective universal pixel-restricted perturbations.
We find that perturbing only about 10% of the pixels in an image using DEceit achieves a commendable and highly transferable Fooling Rate.
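A rough sketch of pixel-restricted black-box search using off-the-shelf differential evolution from SciPy (DEceit itself is a dedicated DE algorithm; the query function, image size, and pixel budget here are placeholders). Each candidate encodes k pixels as (row, col, r, g, b) tuples.

```python
import numpy as np
from scipy.optimize import differential_evolution

H, W, K = 32, 32, 5  # toy image size and pixel budget

def apply_pixels(x, flat):
    """Overwrite K pixels of x according to a flat candidate vector."""
    out = x.copy()
    for row, col, r, g, b in np.asarray(flat).reshape(K, 5):
        out[int(row) % H, int(col) % W] = (r, g, b)
    return out

def make_objective(x, true_class_confidence):
    # `true_class_confidence(img)` is a hypothetical black-box query
    # returning the classifier's confidence in the true label; DE minimizes it.
    return lambda flat: true_class_confidence(apply_pixels(x, flat))

x = np.random.rand(H, W, 3)
bounds = [(0, H - 1), (0, W - 1), (0, 1), (0, 1), (0, 1)] * K
result = differential_evolution(
    make_objective(x, true_class_confidence=lambda img: float(img.mean())),
    bounds, maxiter=20, seed=0,
)
```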
arXiv Detail & Related papers (2020-04-24T19:42:00Z)
- Backdooring and Poisoning Neural Networks with Image-Scaling Attacks [15.807243762876901]
We propose a novel strategy for hiding backdoor and poisoning attacks.
Our approach builds on a recent class of attacks against image scaling.
We show that backdoors and poisoning work equally well when combined with image-scaling attacks.
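A toy demonstration of the scaling weakness these attacks build on, using a matching nearest-neighbor downscaler (real attacks target actual library scalers such as those in OpenCV or Pillow): the downscaler samples only a sparse grid of source pixels, so overwriting just that grid makes the downscaled result become an arbitrary target while the full-size image barely changes.

```python
import numpy as np

def embed_via_scaling(source, target):
    """source: (H, W, 3); target: (h, w, 3) with h < H, w < W."""
    H, W = source.shape[:2]
    h, w = target.shape[:2]
    rows = np.arange(h) * H // h   # the pixels a nearest-neighbor
    cols = np.arange(w) * W // w   # downscaler will sample
    attacked = source.copy()
    attacked[np.ix_(rows, cols)] = target
    return attacked

def nn_downscale(img, h, w):
    H, W = img.shape[:2]
    return img[np.ix_(np.arange(h) * H // h, np.arange(w) * W // w)]

src = np.random.rand(256, 256, 3)
tgt = np.zeros((32, 32, 3))
atk = embed_via_scaling(src, tgt)  # changes under 2% of the source pixels
assert np.allclose(nn_downscale(atk, 32, 32), tgt)
```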
arXiv Detail & Related papers (2020-03-19T08:59:50Z)
- Deflecting Adversarial Attacks [94.85315681223702]
We present a new approach towards ending the cycle of ever-stronger attacks breaking ever-stronger defenses: we "deflect" adversarial attacks by causing the attacker to produce an input that resembles the attack's target class.
We first propose a stronger defense based on Capsule Networks that combines three detection mechanisms to achieve state-of-the-art detection performance.
arXiv Detail & Related papers (2020-02-18T06:59:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.