Generative Dynamic Patch Attack
- URL: http://arxiv.org/abs/2111.04266v1
- Date: Mon, 8 Nov 2021 04:15:34 GMT
- Title: Generative Dynamic Patch Attack
- Authors: Xiang Li, Shihao Ji
- Abstract summary: We propose an end-to-end patch attack algorithm, Generative Dynamic Patch Attack (GDPA).
GDPA generates both patch pattern and patch location adversarially for each input image.
Experiments on VGGFace, Traffic Sign and ImageNet show that GDPA achieves higher attack success rates than state-of-the-art patch attacks.
- Score: 6.1863763890100065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial patch attack is a family of attack algorithms that perturb a part
of image to fool a deep neural network model. Existing patch attacks mostly
consider injecting adversarial patches at input-agnostic locations: either a
predefined location or a random location. This attack setup may be sufficient
for attack but has considerable limitations when used for adversarial
training. Thus, robust models trained with existing patch attacks cannot
effectively defend against other adversarial attacks. In this paper, we first propose
an end-to-end patch attack algorithm, Generative Dynamic Patch Attack (GDPA),
which generates both patch pattern and patch location adversarially for each
input image. We show that GDPA is a generic attack framework that can produce
dynamic/static and visible/invisible patches with a few configuration changes.
Secondly, GDPA can be readily integrated for adversarial training to improve
model robustness to various adversarial attacks. Extensive experiments on
VGGFace, Traffic Sign and ImageNet show that GDPA achieves higher attack
success rates than state-of-the-art patch attacks, while a model adversarially
trained with GDPA demonstrates superior robustness to adversarial patch attacks
compared with competing methods. Our source code can be found at
https://github.com/lxuniverse/gdpa.
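The abstract does not spell out how a generated patch is applied to the image, so below is a minimal PyTorch sketch of one way to paste a per-image patch differentiably via an affine sampling grid, so that gradients flow to both the patch pattern and its location. The function `place_patch`, the generator interface, and the (x, y) coordinate convention are illustrative assumptions, not the authors' code; see the linked repository for the reference implementation.

```python
# Minimal sketch of differentiable dynamic patch placement (assumed
# interface, not the GDPA reference code).
import torch
import torch.nn.functional as F

def place_patch(x, patch, loc):
    """Differentiably paste `patch` into `x` at normalized location `loc`.

    x:     (B, C, H, W) input images in [0, 1]
    patch: (B, C, h, w) adversarial patch pattern, with h <= H and w <= W
    loc:   (B, 2) patch-center offsets (x, y) in [-1, 1], e.g. from a generator
    """
    B, C, H, W = x.shape
    h, w = patch.shape[-2:]
    # Pad the patch and a matching binary mask to full image size (centered).
    pad = ((W - w) // 2, W - w - (W - w) // 2,
           (H - h) // 2, H - h - (H - h) // 2)
    patch_full = F.pad(patch, pad)
    mask_full = F.pad(torch.ones_like(patch), pad)
    # Per-image affine matrices that translate by -loc: sampling the
    # centered patch with this grid shifts it to the requested location.
    theta = torch.zeros(B, 2, 3, device=x.device)
    theta[:, 0, 0] = 1.0
    theta[:, 1, 1] = 1.0
    theta[:, :, 2] = -loc
    grid = F.affine_grid(theta, x.shape, align_corners=False)
    patch_t = F.grid_sample(patch_full, grid, align_corners=False)
    mask_t = F.grid_sample(mask_full, grid, align_corners=False)
    # Composite: keep the image outside the patch, the pattern inside.
    return x * (1.0 - mask_t) + patch_t * mask_t
```

Under these assumptions, an attack step would draw `patch, loc = G(x)` from a generator, form `x_adv = place_patch(x, patch, loc)`, and ascend the classifier loss with respect to G's parameters; for GDPA-style adversarial training, the same `x_adv` would instead drive a descent step on the target model's parameters.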
Related papers
- Versatile Defense Against Adversarial Attacks on Image Recognition [2.9980620769521513]
Defending against adversarial attacks in a real-life setting can be compared to the way antivirus software works.
It appears that a defense method based on image-to-image translation may be capable of this.
The trained model has successfully improved the classification accuracy from nearly zero to an average of 86%.
arXiv Detail & Related papers (2024-03-13T01:48:01Z)
- Guidance Through Surrogate: Towards a Generic Diagnostic Attack [101.36906370355435]
We develop a guided mechanism to avoid local minima during attack optimization, leading to a novel attack dubbed Guided Projected Gradient Attack (G-PGA).
Our modified attack does not require random restarts, a large number of attack iterations, or a search for an optimal step size.
More than an effective attack, G-PGA can be used as a diagnostic tool to reveal elusive robustness due to gradient masking in adversarial defenses.
arXiv Detail & Related papers (2022-12-30T18:45:23Z)
- Task-agnostic Defense against Adversarial Patch Attacks [25.15948648034204]
Adversarial patch attacks mislead neural networks by injecting adversarial pixels within a designated local region.
We present PatchZero, a task-agnostic defense against white-box adversarial patches.
Our method achieves SOTA robust accuracy without any degradation in benign performance.
arXiv Detail & Related papers (2022-07-05T03:49:08Z)
- Segment and Complete: Defending Object Detectors against Adversarial Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- Composite Adversarial Attacks [57.293211764569996]
Adversarial attack is a technique for deceiving Machine Learning (ML) models.
In this paper, a new procedure called Composite Adversarial Attack (CAA) is proposed for automatically searching for the best combination of attack algorithms.
CAA beats 10 top attackers on 11 diverse defenses with less elapsed time.
arXiv Detail & Related papers (2020-12-10T03:21:16Z)
- Robustness Out of the Box: Compositional Representations Naturally Defend Against Black-Box Patch Attacks [11.429509031463892]
Patch-based adversarial attacks introduce a perceptible but localized change to the input that induces misclassification.
In this work, we study two different approaches for defending against black-box patch attacks.
We find that adversarial training has limited effectiveness against state-of-the-art location-optimized patch attacks.
arXiv Detail & Related papers (2020-12-01T15:04:23Z)
- Attack Agnostic Adversarial Defense via Visual Imperceptible Bound [70.72413095698961]
This research aims to design a defense model that is robust within a certain bound against both seen and unseen adversarial attacks.
The proposed defense model is evaluated on the MNIST, CIFAR-10, and Tiny ImageNet databases.
The proposed algorithm is attack agnostic, i.e. it does not require any knowledge of the attack algorithm.
arXiv Detail & Related papers (2020-10-25T23:14:26Z)
- Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies fail to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z)
- Certified Defenses for Adversarial Patches [72.65524549598126]
Adversarial patch attacks are among the most practical threat models against real-world computer vision systems.
This paper studies certified and empirical defenses against patch attacks.
arXiv Detail & Related papers (2020-03-14T19:57:31Z)