DAP: A Dynamic Adversarial Patch for Evading Person Detectors
- URL: http://arxiv.org/abs/2305.11618v2
- Date: Mon, 20 Nov 2023 11:18:50 GMT
- Title: DAP: A Dynamic Adversarial Patch for Evading Person Detectors
- Authors: Amira Guesmi, Ruitian Ding, Muhammad Abdullah Hanif, Ihsen Alouani,
Muhammad Shafique
- Abstract summary: This paper introduces a novel approach that produces a Dynamic Adversarial Patch (DAP)
DAP maintains a naturalistic appearance while optimizing attack efficiency and robustness to real-world transformations.
Experimental results demonstrate that the proposed approach outperforms state-of-the-art attacks.
- Score: 8.187375378049353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Patch-based adversarial attacks have been shown to compromise the
robustness and reliability of computer vision systems. However, their
conspicuous and easily detectable nature challenges their practicality in
real-world settings. To
address this, recent work has proposed using Generative Adversarial Networks
(GANs) to generate naturalistic patches that may not attract human attention.
However, such approaches suffer from a limited latent space, making it
challenging to produce a patch that is simultaneously efficient, stealthy, and robust to
multiple real-world transformations. This paper introduces a novel approach
that produces a Dynamic Adversarial Patch (DAP) designed to overcome these
limitations. DAP maintains a naturalistic appearance while optimizing attack
efficiency and robustness to real-world transformations. The approach involves
redefining the optimization problem and introducing a novel objective function
that incorporates a similarity metric to guide the patch's creation. Unlike
GAN-based techniques, the DAP directly modifies pixel values within the patch,
providing increased flexibility and adaptability to multiple transformations.
Furthermore, most clothing-based physical attacks assume static objects and
ignore the possible transformations caused by non-rigid deformation due to
changes in a person's pose. To address this limitation, a 'Creases
Transformation' (CT) block is introduced, enhancing the patch's resilience to a
variety of real-world distortions. Experimental results demonstrate that the
proposed approach outperforms state-of-the-art attacks, achieving a success
rate of up to 82.28% in the digital world when targeting the YOLOv7 detector
and 65% in the physical world when targeting the YOLOv3-tiny detector deployed in
edge-based smart cameras.
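The optimization strategy described in the abstract, directly updating patch pixels under a combined objective of attack efficiency and a similarity metric rather than searching a GAN latent space, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: `detection_loss` stands in for the detector's objectness output (a real attack would backpropagate through YOLO), and the loss weights, patch size, and learning rate are all assumed values.

```python
import numpy as np

# Hypothetical sketch of DAP-style pixel-space patch optimization:
#   L(patch) = L_det(patch) + lam * L_sim(patch, reference)
# L_det models the detector score to be suppressed; L_sim keeps the
# patch close to a naturalistic reference image. Both are toy
# placeholders here, chosen so their gradients are analytic.

rng = np.random.default_rng(0)
H = W = 8                       # tiny patch for illustration
reference = rng.random((H, W))  # stand-in for a naturalistic reference

def detection_loss(patch):
    # Toy stand-in for the detector's objectness on the patched person;
    # a real implementation would evaluate the target detector.
    return patch.mean()

def similarity_loss(patch):
    # L2 distance to the reference acts as the similarity metric
    # guiding the patch toward a natural appearance.
    return np.mean((patch - reference) ** 2)

def total_loss(patch, lam=1.0):
    return detection_loss(patch) + lam * similarity_loss(patch)

def optimize(patch, steps=200, lr=0.1, lam=1.0):
    for _ in range(steps):
        # Analytic gradients of the toy losses; a real attack would use
        # autograd through the detector, and would also apply random
        # non-rigid warps (the paper's Creases Transformation block)
        # to the patch before each forward pass for robustness.
        grad_det = np.full_like(patch, 1.0 / patch.size)
        grad_sim = 2.0 * (patch - reference) / patch.size
        patch = patch - lr * (grad_det + lam * grad_sim)
        patch = np.clip(patch, 0.0, 1.0)  # keep valid pixel range
    return patch

patch0 = rng.random((H, W))
patch = optimize(patch0.copy())
```

Because the pixels themselves are the optimization variables, any differentiable transformation (scaling, rotation, crease-like deformation) can be inserted into the forward pass, which is the flexibility the abstract contrasts against GAN latent-space approaches.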
Related papers
- Towards Robust Semantic Segmentation against Patch-based Attack via Attention Refinement [68.31147013783387]
We observe that the attention mechanism is vulnerable to patch-based adversarial attacks.
In this paper, we propose a Robust Attention Mechanism (RAM) to improve the robustness of the semantic segmentation model.
arXiv Detail & Related papers (2024-01-03T13:58:35Z) - MVPatch: More Vivid Patch for Adversarial Camouflaged Attacks on Object Detectors in the Physical World [7.1343035828597685]
We introduce generalization theory into the context of Adversarial Patches (APs).
We propose a Dual-Perception-Based Framework (DPBF) to generate the More Vivid Patch (MVPatch), which enhances transferability, stealthiness, and practicality.
MVPatch achieves superior transferability and a natural appearance in both digital and physical domains, underscoring its effectiveness and stealthiness.
arXiv Detail & Related papers (2023-12-29T01:52:22Z) - DOEPatch: Dynamically Optimized Ensemble Model for Adversarial Patches Generation [12.995762461474856]
We introduce the concept of energy and treat the adversarial patch generation process as an optimization that minimizes the total energy of the "person" category.
By adopting adversarial training, we construct a dynamically optimized ensemble model.
We carried out six sets of comparative experiments and tested our algorithm on five mainstream object detection models.
arXiv Detail & Related papers (2023-12-28T08:58:13Z) - Unified Adversarial Patch for Visible-Infrared Cross-modal Attacks in
the Physical World [11.24237636482709]
We design a unified adversarial patch that can perform cross-modal physical attacks, achieving evasion in both modalities simultaneously with a single patch.
We propose a novel boundary-limited shape optimization approach that aims to achieve compact and smooth shapes for the adversarial patch.
Our method is evaluated against several state-of-the-art object detectors, achieving an Attack Success Rate (ASR) of over 80%.
arXiv Detail & Related papers (2023-07-27T08:14:22Z) - AdvART: Adversarial Art for Camouflaged Object Detection Attacks [7.7889972735711925]
We propose a novel approach to generate naturalistic and inconspicuous adversarial patches.
Our technique is based on directly manipulating the pixel values in the patch, which gives higher flexibility and larger space.
Our attack achieves superior success rates of up to 91.19% in the digital world and 72% when deployed in smart cameras at the edge.
arXiv Detail & Related papers (2023-03-03T06:28:05Z) - Benchmarking Adversarial Patch Against Aerial Detection [11.591143898488312]
A novel adaptive-patch-based physical attack (AP-PA) framework is proposed.
AP-PA generates adversarial patches that are adaptive in both physical dynamics and varying scales.
We establish one of the first comprehensive, coherent, and rigorous benchmarks to evaluate the attack efficacy of adversarial patches on aerial detection tasks.
arXiv Detail & Related papers (2022-10-30T07:55:59Z) - Adv-Attribute: Inconspicuous and Transferable Adversarial Attack on Face
Recognition [111.1952945740271]
Adversarial Attributes (Adv-Attribute) is designed to generate inconspicuous and transferable attacks on face recognition.
Experiments on the FFHQ and CelebA-HQ datasets show that the proposed Adv-Attribute method achieves the state-of-the-art attacking success rates.
arXiv Detail & Related papers (2022-10-13T09:56:36Z) - Shadows can be Dangerous: Stealthy and Effective Physical-world
Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - On the Real-World Adversarial Robustness of Real-Time Semantic
Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve the capabilities of attackers in inducing a misclassification of pixels.
arXiv Detail & Related papers (2022-01-05T22:33:43Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose the adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.