Feasibility of Inconspicuous GAN-generated Adversarial Patches against
Object Detection
- URL: http://arxiv.org/abs/2207.07347v1
- Date: Fri, 15 Jul 2022 08:48:40 GMT
- Title: Feasibility of Inconspicuous GAN-generated Adversarial Patches against
Object Detection
- Authors: Svetlana Pavlitskaya, Bianca-Marina Codău and J. Marius Zöllner
- Abstract summary: In this work, we have evaluated the existing approaches to generating inconspicuous patches.
We have evaluated two approaches to generating naturalistic patches: incorporating patch generation into the GAN training process and using a pretrained GAN.
Our experiments have shown that using a pretrained GAN yields realistic-looking patches while preserving performance similar to conventional adversarial patches.
- Score: 3.395452700023097
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Standard approaches for adversarial patch generation lead to noisy
conspicuous patterns, which are easily recognizable by humans. Recent research
has proposed several approaches to generate naturalistic patches using
generative adversarial networks (GANs), yet only a few of them were evaluated
on the object detection use case. Moreover, the state of the art mostly focuses
on suppressing a single large bounding box in input by overlapping it with the
patch directly. Suppressing objects near the patch is a different, more complex
task. In this work, we have evaluated the existing approaches to generate
inconspicuous patches. We have adapted methods, originally developed for
different computer vision tasks, to the object detection use case with YOLOv3
and the COCO dataset. We have evaluated two approaches to generate naturalistic
patches: by incorporating patch generation into the GAN training process and by
using the pretrained GAN. For both cases, we have assessed a trade-off between
performance and naturalistic patch appearance. Our experiments have shown that
using a pretrained GAN helps achieve realistic-looking patches while preserving
performance similar to conventional adversarial patches.
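The pretrained-GAN approach described in the abstract can be sketched as latent-space optimization: the generator stays frozen and only the latent code is updated to suppress the detector's objectness score, so the patch remains on the GAN's output manifold and keeps a natural appearance. The linear `generate`/`objectness` stand-ins, dimensions, and learning rate below are illustrative assumptions only; the paper's actual pipeline uses a pretrained GAN and YOLOv3 on COCO.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins (assumptions, not the paper's models): a frozen linear
# "generator" G(z) = W @ z mapping a latent code to patch pixels, and a
# linear "detector" objectness score s(p) = v . p.
LATENT, PIXELS = 8, 64
W = rng.normal(size=(PIXELS, LATENT))   # frozen generator weights
v = rng.normal(size=PIXELS)             # detector's objectness direction

def generate(z):
    return W @ z                        # patch pixels from a latent code

def objectness(patch):
    return float(v @ patch)             # score the attack tries to suppress

# Latent-space attack: gradient descent on z only (the generator is frozen),
# minimizing the squared objectness so the score is driven toward zero while
# the patch stays in the generator's output space.
z = rng.normal(size=LATENT)
s0 = objectness(generate(z))
lr = 1e-4
for _ in range(2000):
    s = objectness(generate(z))
    grad_z = 2.0 * s * (W.T @ v)        # d(s^2)/dz for this linear toy
    z -= lr * grad_z

print(f"objectness before: {s0:.3f}, after: {objectness(generate(z)):.2e}")
```

In the paper's other setting, patch generation is instead incorporated into GAN training itself, i.e. the generator's weights are updated jointly with the attack objective rather than kept frozen.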
Related papers
- GeneralAD: Anomaly Detection Across Domains by Attending to Distorted Features [68.14842693208465]
GeneralAD is an anomaly detection framework designed to operate in semantic, near-distribution, and industrial settings.
We propose a novel self-supervised anomaly generation module that employs straightforward operations like noise addition and shuffling to patch features.
We extensively evaluated our approach on ten datasets, achieving state-of-the-art results on six and on-par performance on the remaining four.
arXiv Detail & Related papers (2024-07-17T09:27:41Z) - Environmental Matching Attack Against Unmanned Aerial Vehicles Object Detection [37.77615360932841]
Object detection techniques for Unmanned Aerial Vehicles (UAVs) rely on Deep Neural Networks (DNNs).
However, adversarial patches generated by existing algorithms in the UAV domain pay very little attention to the naturalness of the patches.
We propose a new method named Environmental Matching Attack (EMA) to address the issue of optimizing the adversarial patch under color constraints.
arXiv Detail & Related papers (2024-05-13T09:56:57Z) - PAD: Patch-Agnostic Defense against Adversarial Patch Attacks [36.865204327754626]
Adversarial patch attacks present a significant threat to real-world object detectors.
We show two inherent characteristics of adversarial patches, semantic independence and spatial heterogeneity.
We propose PAD, a novel adversarial patch localization and removal method that does not require prior knowledge or additional training.
arXiv Detail & Related papers (2024-04-25T09:32:34Z) - Diffusion to Confusion: Naturalistic Adversarial Patch Generation Based
on Diffusion Model for Object Detector [18.021582628066554]
We propose a novel naturalistic adversarial patch generation method based on the diffusion models (DM)
We are the first to propose DM-based naturalistic adversarial patch generation for object detectors.
arXiv Detail & Related papers (2023-07-16T15:22:30Z) - DOAD: Decoupled One Stage Action Detection Network [77.14883592642782]
Localizing people and recognizing their actions from videos is a challenging task towards high-level video understanding.
Existing methods are mostly two-stage based, with one stage for person bounding box generation and the other stage for action recognition.
We present a decoupled one-stage network dubbed DOAD to improve the efficiency of spatio-temporal action detection.
arXiv Detail & Related papers (2023-04-01T08:06:43Z) - Threatening Patch Attacks on Object Detection in Optical Remote Sensing
Images [55.09446477517365]
Advanced Patch Attacks (PAs) on object detection in natural images have exposed a serious safety vulnerability in methods based on deep neural networks.
We propose a more threatening PA that does not sacrifice visual quality, dubbed TPA.
To the best of our knowledge, this is the first attempt to study the PAs on object detection in O-RSIs, and we hope this work can get our readers interested in studying this topic.
arXiv Detail & Related papers (2023-02-13T02:35:49Z) - Plug-and-Play Few-shot Object Detection with Meta Strategy and Explicit
Localization Inference [78.41932738265345]
This paper proposes a plug detector that can accurately detect the objects of novel categories without fine-tuning process.
We introduce two explicit inferences into the localization process to reduce its dependence on annotated data.
It shows a significant lead in efficiency, precision, and recall under varied evaluation protocols.
arXiv Detail & Related papers (2021-10-26T03:09:57Z) - Generating Adversarial yet Inconspicuous Patches with a Single Image [15.217367754000913]
We propose an approach to generate adversarial yet inconspicuous patches with one single image.
In our approach, adversarial patches are produced in a coarse-to-fine way with multiple scales of generators and discriminators.
Our approach shows strong attacking ability in both the white-box and black-box settings.
arXiv Detail & Related papers (2020-09-21T11:56:01Z) - Bias-based Universal Adversarial Patch Attack for Automatic Check-out [59.355948824578434]
Adversarial examples are inputs with imperceptible perturbations that easily mislead deep neural networks (DNNs).
Existing strategies fail to generate adversarial patches with strong generalization ability.
This paper proposes a bias-based framework to generate class-agnostic universal adversarial patches with strong generalization ability.
arXiv Detail & Related papers (2020-05-19T07:38:54Z) - Frustratingly Simple Few-Shot Object Detection [98.42824677627581]
We find that fine-tuning only the last layer of existing detectors on rare classes is crucial to the few-shot object detection task.
Such a simple approach outperforms the meta-learning methods by roughly 2-20 points on current benchmarks.
arXiv Detail & Related papers (2020-03-16T00:29:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.