Adversarial Texture for Fooling Person Detectors in the Physical World
- URL: http://arxiv.org/abs/2203.03373v2
- Date: Tue, 8 Mar 2022 14:29:07 GMT
- Title: Adversarial Texture for Fooling Person Detectors in the Physical World
- Authors: Zhanhao Hu, Siyuan Huang, Xiaopei Zhu, Xiaolin Hu, Fuchun Sun, Bo
Zhang
- Abstract summary: Adversarial Texture (AdvTexture) can cover clothes with arbitrary shapes so that people wearing such clothes can hide from person detectors from different viewing angles.
We propose a generative method, named Toroidal-Cropping-based Expandable Generative Attack (TC-EGA) to craft AdvTexture with repetitive structures.
Experiments showed that these clothes could fool person detectors in the physical world.
- Score: 38.39939625606267
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nowadays, cameras equipped with AI systems can capture and analyze images to
detect people automatically. However, the AI system can make mistakes when
receiving deliberately designed patterns in the real world, i.e., physical
adversarial examples. Prior works have shown that it is possible to print
adversarial patches on clothes to evade DNN-based person detectors. However,
these adversarial examples could have catastrophic drops in the attack success
rate when the viewing angle (i.e., the camera's angle towards the object)
changes. To perform a multi-angle attack, we propose Adversarial Texture
(AdvTexture). AdvTexture can cover clothes with arbitrary shapes so that people
wearing such clothes can hide from person detectors from different viewing
angles. We propose a generative method, named Toroidal-Cropping-based
Expandable Generative Attack (TC-EGA), to craft AdvTexture with repetitive
structures. We printed several pieces of cloth with AdvTexture and then made
T-shirts, skirts, and dresses in the physical world. Experiments showed that
these clothes could fool person detectors in the physical world.
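A minimal sketch of the toroidal-cropping idea behind TC-EGA follows, assuming (this reading is not spelled out in the abstract) that "toroidal cropping" means sampling crops from a small texture tile with wrap-around indexing, so the optimized tile remains seamlessly repeatable when expanded over cloth of arbitrary size. The tile size, offsets, and function names below are illustrative only, not the authors' implementation.

```python
# Sketch of toroidal (wrap-around) cropping and texture expansion.
# Assumption: the texture tile is treated as a torus, so every crop offset is
# valid and the optimized tile stays seamlessly repeatable when printed.
import torch

def toroidal_crop(tile: torch.Tensor, top: int, left: int,
                  crop_h: int, crop_w: int) -> torch.Tensor:
    """Crop a (C, H, W) texture tile with wrap-around (torus) indexing."""
    _, h, w = tile.shape
    rows = torch.arange(top, top + crop_h) % h
    cols = torch.arange(left, left + crop_w) % w
    return tile[:, rows][:, :, cols]

def expand_texture(tile: torch.Tensor, reps_h: int, reps_w: int) -> torch.Tensor:
    """Repeat the tile to cover an arbitrarily large surface (e.g. a dress)."""
    return tile.repeat(1, reps_h, reps_w)

if __name__ == "__main__":
    # A hypothetical 3x64x64 learnable texture tile.
    tile = torch.rand(3, 64, 64, requires_grad=True)

    # A random wrap-around crop, as might be sampled during optimization.
    crop = toroidal_crop(tile, top=50, left=50, crop_h=64, crop_w=64)
    print(crop.shape)           # torch.Size([3, 64, 64])

    # At deployment the tile is repeated to cover clothes of any shape.
    cloth = expand_texture(tile.detach(), reps_h=4, reps_w=6)
    print(cloth.shape)          # torch.Size([3, 256, 384])
```

Because every wrap-around offset is a legal training crop, the pattern cannot rely on any fixed alignment, which is consistent with the abstract's claim that the printed texture can be expanded over T-shirts, skirts, and dresses of different shapes.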
Related papers
- DiffPatch: Generating Customizable Adversarial Patches using Diffusion Model [88.14122962946858]
We propose a novel diffusion-based customizable patch generation framework termed DiffPatch.
Our approach enables users to utilize a reference image as the source, rather than starting from random noise.
We have created a physical adversarial T-shirt dataset, AdvPatch-1K, specifically targeting YOLOv5s.
arXiv Detail & Related papers (2024-12-02T12:30:35Z) - Turn Fake into Real: Adversarial Head Turn Attacks Against Deepfake
Detection [58.1263969438364]
We propose adversarial head turn (AdvHeat) as the first attempt at 3D adversarial face views against deepfake detectors.
Experiments validate the vulnerability of various detectors to AdvHeat in realistic, black-box scenarios.
Additional analyses demonstrate that AdvHeat is better than conventional attacks on both the cross-detector transferability and robustness to defenses.
arXiv Detail & Related papers (2023-09-03T07:01:34Z) - Unified Adversarial Patch for Cross-modal Attacks in the Physical World [11.24237636482709]
We propose a unified adversarial patch to fool visible and infrared object detectors at the same time via a single patch.
Considering different imaging mechanisms of visible and infrared sensors, our work focuses on modeling the shapes of adversarial patches.
Results show that our unified patch achieves an Attack Success Rate (ASR) of 73.33% and 69.17% against visible and infrared detectors, respectively.
arXiv Detail & Related papers (2023-07-15T17:45:17Z) - Physically Realizable Natural-Looking Clothing Textures Evade Person Detectors via 3D Modeling [19.575338491567813]
We craft adversarial texture for clothes based on 3D modeling.
We propose adversarial camouflage textures (AdvCaT) that resemble typical textures of everyday clothes.
We printed the developed 3D texture pieces on fabric materials and tailored them into T-shirts and trousers.
arXiv Detail & Related papers (2023-07-04T15:31:03Z) - BATT: Backdoor Attack with Transformation-based Triggers [72.61840273364311]
Deep neural networks (DNNs) are vulnerable to backdoor attacks.
Backdoor adversaries inject hidden backdoors that can be activated by adversary-specified trigger patterns.
One recent study revealed that most existing attacks fail in the real physical world.
arXiv Detail & Related papers (2022-11-02T16:03:43Z) - Empirical Evaluation of Physical Adversarial Patch Attacks Against
Overhead Object Detection Models [2.2588953434934416]
Adversarial patches are images designed to fool otherwise well-performing neural network-based computer vision models.
Recent work has demonstrated that these attacks can successfully transfer to the physical world.
We further test the efficacy of adversarial patch attacks in the physical world under more challenging conditions.
arXiv Detail & Related papers (2022-06-25T20:05:11Z) - FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view
Physical Adversarial Attack [5.476797414272598]
We propose a robust Full-coverage Camouflage Attack (FCA) to fool detectors.
Specifically, we first try rendering the non-planar camouflage texture over the full vehicle surface.
We then introduce a transformation function to transfer the rendered camouflaged vehicle into a photo-realistic scenario.
arXiv Detail & Related papers (2021-09-15T10:17:12Z) - Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors [72.7633556669675]
This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes.
Unlike existing adversarial patches, our new 3D adversarial patch is shown to fool state-of-the-art deep object detectors robustly under varying views.
arXiv Detail & Related papers (2021-04-22T14:36:08Z) - 3D Invisible Cloak [12.48087784777591]
We propose a novel physical stealth attack against person detectors in the real world.
The proposed method generates an adversarial patch, and prints it on real clothes to make a 3D invisible cloak.
arXiv Detail & Related papers (2020-11-27T12:43:04Z) - DPAttack: Diffused Patch Attacks against Universal Object Detection [66.026630370248]
Adversarial attacks against object detection can be divided into two categories, whole-pixel attacks and patch attacks.
We propose a diffused patch attack (DPAttack) to fool object detectors with diffused patches in asteroid or grid shapes.
Experiments show that our DPAttack can successfully fool most object detectors with diffused patches.
arXiv Detail & Related papers (2020-10-16T04:48:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.