Suppress with a Patch: Revisiting Universal Adversarial Patch Attacks
against Object Detection
- URL: http://arxiv.org/abs/2209.13353v1
- Date: Tue, 27 Sep 2022 12:59:19 GMT
- Title: Suppress with a Patch: Revisiting Universal Adversarial Patch Attacks
against Object Detection
- Authors: Svetlana Pavlitskaya, Jonas Hendl, Sebastian Kleim, Leopold Müller,
Fabian Wylczoch and J. Marius Zöllner
- Abstract summary: Adversarial patch-based attacks aim to fool a neural network with an intentionally generated noise.
In this work, we perform an in-depth analysis of different patch generation parameters.
Experiments have shown that inserting a patch inside a window of increasing size during training leads to a significant increase in attack strength.
- Score: 2.577744341648085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial patch-based attacks aim to fool a neural network with an
intentionally generated noise, which is concentrated in a particular region of
an input image. In this work, we perform an in-depth analysis of different
patch generation parameters, including initialization, patch size, and
especially positioning a patch in an image during training. We focus on the
object vanishing attack and run experiments with YOLOv3 as a model under attack
in a white-box setting and use images from the COCO dataset. Our experiments
have shown that inserting a patch inside a window of increasing size during
training leads to a significant increase in attack strength compared to a fixed
position. The best results were obtained when the patch was positioned randomly
during training, with its position additionally varied within each batch.
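As an illustration of the positioning strategy described above, here is a minimal PyTorch sketch (our own, not the authors' code; the function names and the linear window-growth schedule are assumptions) of pasting a patch at a random position inside a centered window that grows over training, with the position varying independently per batch element:

```python
import torch

def paste_patch(images: torch.Tensor, patch: torch.Tensor, progress: float) -> torch.Tensor:
    """Paste `patch` into each image at an independent random position inside
    a centered window whose size grows with `progress` in [0, 1]."""
    b, _, h, w = images.shape
    ph, pw = patch.shape[-2:]
    # The window grows from exactly the patch size up to the full image.
    win_h = ph + int(progress * (h - ph))
    win_w = pw + int(progress * (w - pw))
    top0, left0 = (h - win_h) // 2, (w - win_w) // 2  # centered window offset
    out = images.clone()
    for i in range(b):  # position varies independently within the batch
        y = top0 + int(torch.randint(0, win_h - ph + 1, (1,)))
        x = left0 + int(torch.randint(0, win_w - pw + 1, (1,)))
        out[i, :, y:y + ph, x:x + pw] = patch
    return out
```

In the object-vanishing setting, such a pasted batch would then be fed to YOLOv3 and the (trainable) patch optimized to suppress the detector's objectness scores.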
Related papers
- Towards Robust Image Stitching: An Adaptive Resistance Learning against
Compatible Attacks [66.98297584796391]
Image stitching seamlessly integrates images captured from varying perspectives into a single wide field-of-view image.
Given a pair of captured images, subtle perturbations and distortions that go unnoticed by the human visual system can disrupt the correspondence matching.
This paper presents the first attempt to improve the robustness of image stitching against adversarial attacks.
arXiv Detail & Related papers (2024-02-25T02:36:33Z)
- Leveraging Local Patch Differences in Multi-Object Scenes for Generative
Adversarial Attacks [48.66027897216473]
We tackle a more practical problem of generating adversarial perturbations using multi-object (i.e., multiple dominant objects) images.
We propose a novel generative attack (Local Patch Difference, or LPD-Attack), in which a contrastive loss function exploits these local differences in the feature space of multi-object scenes.
Our approach outperforms baseline generative attacks with highly transferable perturbations when evaluated under different white-box and black-box settings.
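The snippet below is purely our illustrative reading, not the paper's actual loss: one plausible way to turn "local patch differences in feature space" into a trainable objective, using a surrogate model's mid-layer features (all names and the pooling grid are assumptions):

```python
import torch
import torch.nn.functional as F

def local_patch_difference_loss(feat_adv: torch.Tensor,
                                feat_clean: torch.Tensor,
                                grid: int = 4) -> torch.Tensor:
    """feat_*: (B, C, H, W) mid-layer features of a surrogate model."""
    # Pool each feature map into a grid of local patch descriptors.
    pa = F.adaptive_avg_pool2d(feat_adv, grid).flatten(2)    # (B, C, grid*grid)
    pc = F.adaptive_avg_pool2d(feat_clean, grid).flatten(2)
    # Maximizing the per-patch feature distance => minimize its negation.
    return -F.mse_loss(pa, pc)
```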
arXiv Detail & Related papers (2022-09-20T17:36:32Z)
- Task-agnostic Defense against Adversarial Patch Attacks [25.15948648034204]
Adversarial patch attacks mislead neural networks by injecting adversarial pixels within a designated local region.
We present PatchZero, a task-agnostic defense against white-box adversarial patches.
Our method achieves SOTA robust accuracy without any degradation in benign performance.
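A minimal sketch of the "zero out the patch region" idea the name suggests; `patch_segmenter` is a hypothetical pixel-level detector of adversarial patches, not the paper's actual module:

```python
import torch

def patch_zero_defense(images: torch.Tensor, patch_segmenter,
                       threshold: float = 0.5) -> torch.Tensor:
    """Zero out pixels flagged as adversarial before the task model sees them."""
    with torch.no_grad():
        mask = (patch_segmenter(images) > threshold).float()  # (B, 1, H, W)
    return images * (1.0 - mask)
```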
arXiv Detail & Related papers (2022-07-05T03:49:08Z)
- Segment and Complete: Defending Object Detectors against Adversarial
Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose the Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks.
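A minimal sketch of the segment-then-remove idea behind a SAC-style defense; `segmenter` is a hypothetical patch segmenter, and the morphological dilation is a crude stand-in for the paper's robust shape completion:

```python
import torch
import torch.nn.functional as F

def segment_and_remove(images: torch.Tensor, segmenter,
                       threshold: float = 0.5) -> torch.Tensor:
    with torch.no_grad():
        mask = (segmenter(images) > threshold).float()        # (B, 1, H, W)
        # Stand-in for the "complete" stage: dilate the mask so the whole
        # patch is covered even when segmentation is imperfect.
        mask = F.max_pool2d(mask, kernel_size=9, stride=1, padding=4)
    return images * (1.0 - mask)  # erase the suspected patch region
```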
arXiv Detail & Related papers (2021-12-08T19:18:48Z)
- Patch Attack Invariance: How Sensitive are Patch Attacks to 3D Pose? [7.717537870226507]
We develop a new metric called mean Attack Success over Transformations (mAST) to evaluate patch attack robustness and invariance.
We conduct a sensitivity analysis which provides important qualitative insights into attack effectiveness as a function of the 3D pose of a patch relative to the camera.
We provide new insights into the existence of a fundamental cutoff limit in patch attack effectiveness that depends on the extent of out-of-plane rotation angles.
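Taking the metric's name at face value, a minimal sketch of how mAST could be computed; the transformation set and the per-pose success measurement are our assumptions:

```python
from typing import Callable, Iterable

def mean_attack_success(attack_success_rate: Callable[[object], float],
                        transformations: Iterable[object]) -> float:
    """mAST: attack success rate averaged over a set of pose transformations."""
    rates = [attack_success_rate(t) for t in transformations]
    return sum(rates) / len(rates)
```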
arXiv Detail & Related papers (2021-08-16T17:02:38Z)
- Dynamic Adversarial Patch for Evading Object Detection Models [47.32228513808444]
We present an innovative attack method against object detectors applied in a real-world setup.
Our method uses dynamic adversarial patches which are placed at multiple predetermined locations on a target object.
We improved the attack by generating patches that consider the semantic distance between the target object and its classification.
arXiv Detail & Related papers (2020-10-25T08:55:40Z)
- Generating Adversarial yet Inconspicuous Patches with a Single Image [15.217367754000913]
We propose an approach to generate adversarial yet inconspicuous patches with one single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attacking ability in both the white-box and black-box settings.
arXiv Detail & Related papers (2020-09-21T11:56:01Z)
- Patch-wise Attack for Fooling Deep Neural Network [153.59832333877543]
We propose a patch-wise iterative algorithm -- a black-box attack against mainstream normally trained and defense models.
We significantly improve the success rate by 9.2% for defense models and 3.7% for normally trained models on average.
arXiv Detail & Related papers (2020-07-14T01:50:22Z)
- Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies have failed to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z)
- PatchAttack: A Black-box Texture-based Attack with Reinforcement
Learning [31.255179167694887]
Patch-based attacks introduce a perceptible but localized change to the input that induces misclassification.
Our proposed PatchAttack is query efficient and can break models for both targeted and non-targeted attacks.
arXiv Detail & Related papers (2020-04-12T19:31:09Z)