Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors
- URL: http://arxiv.org/abs/2104.11101v1
- Date: Thu, 22 Apr 2021 14:36:08 GMT
- Title: Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors
- Authors: Arman Maesumi and Mingkang Zhu and Yi Wang and Tianlong Chen and
Zhangyang Wang and Chandrajit Bajaj
- Abstract summary: This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes.
Unlike existing adversarial patches, our new 3D adversarial patch is shown to fool state-of-the-art deep object detectors robustly under varying views.
- Score: 72.7633556669675
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper presents a novel patch-based adversarial attack pipeline that
trains adversarial patches on 3D human meshes. We sample triangular faces on a
reference human mesh, and create an adversarial texture atlas over those faces.
The adversarial texture is transferred to human meshes in various poses, which
are rendered onto a collection of real-world background images. Contrary to the
traditional patch-based adversarial attacks, where prior work attempts to fool
trained object detectors using appended adversarial patches, this new form of
attack is mapped into the 3D object world and back-propagated to the texture
atlas through differentiable rendering. As such, the adversarial patch is
trained under deformation consistent with real-world materials. In addition,
and unlike existing adversarial patches, our new 3D adversarial patch is shown
to fool state-of-the-art deep object detectors robustly under varying views,
potentially leading to an attacking scheme that is persistently strong in the
physical world.
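The pipeline above reduces to optimizing a texture atlas through a differentiable renderer against a frozen detector. Below is a minimal sketch of one such optimization step, assuming a generic differentiable `render` callable and a pretrained `detector` that returns per-image person-confidence scores; the function names, tensor shapes, and the total-variation regularizer are illustrative assumptions, not the authors' released code.

```python
import torch

def attack_step(texture_atlas, render, detector, poses, backgrounds, optimizer):
    """One optimization step for a 3D adversarial texture atlas.

    texture_atlas : (F, R, R, 3) per-face texture values, requires_grad=True
    render        : differentiable callable (atlas, pose, background) -> (3, H, W) image
    detector      : pretrained detector, image batch -> per-image person-confidence scores
    """
    optimizer.zero_grad()
    images = torch.stack([
        render(texture_atlas.clamp(0, 1), pose, bg)   # texture mapped onto a posed human mesh
        for pose, bg in zip(poses, backgrounds)        # rendered over real-world backgrounds
    ])
    scores = detector(images)                          # confidence that a person is detected
    # Disappearance attack: minimize detector confidence across poses and views,
    # plus a small total-variation term so the texture stays smooth (printable).
    tv = ((texture_atlas[:, 1:, :, :] - texture_atlas[:, :-1, :, :]).abs().mean()
          + (texture_atlas[:, :, 1:, :] - texture_atlas[:, :, :-1, :]).abs().mean())
    loss = scores.mean() + 1e-2 * tv
    loss.backward()                                    # gradients flow through the renderer
    optimizer.step()
    with torch.no_grad():
        texture_atlas.clamp_(0, 1)                     # keep texels in a valid color range
    return loss.item()
```

Because every step from texel values to detector score is differentiable, the gradient of the detection loss reaches the texture atlas directly, which is what allows the patch to adapt to deformation across poses and viewpoints.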
Related papers
- Generating Visually Realistic Adversarial Patch [5.41648734119775]
A high-quality adversarial patch should be realistic, position-irrelevant, and printable in order to be deployed in the physical world.
We propose an effective attack, called VRAP, to generate visually realistic adversarial patches.
VRAP constrains the patch to the neighborhood of a real image to ensure visual realism, optimizes the patch at the worst-case position for position irrelevance, and adopts a total variation loss as well as a gamma transformation to make the generated patch printable without losing information.
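Of the ingredients named above, the total variation loss and the gamma transformation are standard and easy to sketch; the snippet below is a generic illustration of both, not VRAP's implementation.

```python
import torch

def total_variation(patch):
    """Total variation of an adversarial patch (C, H, W): penalizes abrupt
    pixel-to-pixel changes so the printed patch keeps its structure."""
    dh = (patch[:, 1:, :] - patch[:, :-1, :]).abs().mean()
    dw = (patch[:, :, 1:] - patch[:, :, :-1]).abs().mean()
    return dh + dw

def random_gamma(patch, low=0.7, high=1.5):
    """Apply a random gamma transformation during optimization so the patch
    stays effective under brightness shifts introduced by printing and cameras."""
    gamma = torch.empty(1, device=patch.device).uniform_(low, high)
    return patch.clamp(1e-6, 1.0) ** gamma
```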
arXiv Detail & Related papers (2023-12-05T11:07:39Z)
- AdvMono3D: Advanced Monocular 3D Object Detection with Depth-Aware Robust Adversarial Training [64.14759275211115]
We propose a depth-aware robust adversarial training method for monocular 3D object detection, dubbed DART3D.
Our adversarial training approach capitalizes on the inherent uncertainty, enabling the model to significantly improve its robustness against adversarial attacks.
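The summary does not spell out the training objective; for reference, a generic PGD-style adversarial training step (not DART3D's depth-aware formulation) looks roughly like this, with `model`, `loss_fn`, and `optimizer` as placeholders.

```python
import torch

def pgd_adversarial_training_step(model, loss_fn, images, targets, optimizer,
                                  eps=8 / 255, alpha=2 / 255, steps=3):
    """Generic PGD adversarial training: craft a bounded perturbation that
    maximizes the loss, then update the model on the perturbed inputs."""
    delta = torch.zeros_like(images, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(images + delta), targets)
        grad = torch.autograd.grad(loss, delta)[0]
        with torch.no_grad():
            delta += alpha * grad.sign()   # ascend the loss
            delta.clamp_(-eps, eps)        # stay inside the L-inf ball
    optimizer.zero_grad()
    loss = loss_fn(model(images + delta.detach()), targets)
    loss.backward()
    optimizer.step()
    return loss.item()
```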
arXiv Detail & Related papers (2023-09-03T07:05:32Z)
- DTA: Physical Camouflage Attacks using Differentiable Transformation Network [0.4215938932388722]
We propose a framework for generating a robust physical adversarial pattern on a target object to camouflage it against object detection models.
Using our attack framework, an adversary can gain both the advantages of legacy photo-realistic rendering and the benefits of white-box access.
Our experiments show that our camouflaged 3D vehicles can successfully evade state-of-the-art object detection models.
arXiv Detail & Related papers (2022-03-18T10:15:02Z)
- On the Real-World Adversarial Robustness of Real-Time Semantic Segmentation Models for Autonomous Driving [59.33715889581687]
The existence of real-world adversarial examples (commonly in the form of patches) poses a serious threat for the use of deep learning models in safety-critical computer vision tasks.
This paper presents an evaluation of the robustness of semantic segmentation models when attacked with different types of adversarial patches.
A novel loss function is proposed to improve attackers' ability to induce pixel misclassification.
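The proposed loss itself is not described in the summary; as a baseline for comparison, a generic untargeted per-pixel objective for patch attacks on segmentation models can be written as follows (standard cross-entropy maximization, not the paper's novel loss).

```python
import torch
import torch.nn.functional as F

def untargeted_patch_loss(logits, labels, ignore_index=255):
    """Generic untargeted objective for a segmentation patch attack:
    maximize per-pixel cross-entropy w.r.t. the true labels, i.e. push as
    many pixels as possible away from their correct class.

    logits : (N, C, H, W) segmentation scores
    labels : (N, H, W) ground-truth class indices
    """
    ce = F.cross_entropy(logits, labels, ignore_index=ignore_index, reduction="mean")
    return -ce  # minimizing this loss maximizes pixel-wise misclassification
```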
arXiv Detail & Related papers (2022-01-05T22:33:43Z)
- FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view Physical Adversarial Attack [5.476797414272598]
We propose a robust Full-coverage Camouflage Attack (FCA) to fool detectors.
Specifically, we first try rendering the non-planar camouflage texture over the full vehicle surface.
We then introduce a transformation function to transfer the rendered camouflaged vehicle into a photo-realistic scenario.
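The transformation function mentioned above is, in spirit, a differentiable compositing step that places the rendered vehicle into a real scene under varying imaging conditions. The sketch below shows one simple EOT-style approximation with random brightness and noise; it is an assumption for illustration, not FCA's actual transformation.

```python
import torch

def composite_into_scene(rendered, mask, background, brightness=(0.8, 1.2), noise_std=0.02):
    """Paste a rendered object into a background with mild photometric jitter.

    rendered   : (3, H, W) differentiably rendered camouflaged vehicle
    mask       : (1, H, W) soft silhouette from the renderer (1 = object)
    background : (3, H, W) real-world scene image
    All operations are differentiable, so detector gradients still reach the texture.
    """
    b = torch.empty(1, device=rendered.device).uniform_(*brightness)
    jittered = (rendered * b + noise_std * torch.randn_like(rendered)).clamp(0, 1)
    return mask * jittered + (1 - mask) * background
```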
arXiv Detail & Related papers (2021-09-15T10:17:12Z)
- DensePose 3D: Lifting Canonical Surface Maps of Articulated Objects to the Third Dimension [71.71234436165255]
We contribute DensePose 3D, a method that learns 3D reconstructions of articulated objects in a weakly supervised fashion from 2D image annotations only.
Because it does not require 3D scans, DensePose 3D can be used for learning a wide range of articulated categories such as different animal species.
We show significant improvements compared to state-of-the-art non-rigid structure-from-motion baselines on both synthetic and real data on categories of humans and animals.
arXiv Detail & Related papers (2021-08-31T18:33:55Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular semantic segmentation (SS) models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Inconspicuous Adversarial Patches for Fooling Image Recognition Systems on Mobile Devices [8.437172062224034]
A variant of adversarial examples, called the adversarial patch, has drawn researchers' attention due to its strong attack ability.
We propose an approach to generate adversarial patches from a single image.
Our approach shows strong attack ability in white-box settings and excellent transferability in black-box settings.
arXiv Detail & Related papers (2021-06-29T09:39:34Z)
- Can 3D Adversarial Logos Cloak Humans? [115.20718041659357]
This paper presents a new 3D adversarial logo attack.
We construct an arbitrarily shaped logo from a 2D texture image and map this image into a 3D adversarial logo.
The resulting 3D adversarial logo is then treated as an adversarial texture, enabling easy manipulation of its shape and position.
Unlike existing adversarial patches, our new 3D adversarial logo is shown to fool state-of-the-art deep object detectors robustly under model rotations.
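One common way to realize the 2D-to-3D mapping described above is differentiable UV sampling of the logo image at mesh texture coordinates, so that gradients reach the 2D logo. The helper below is an illustrative sketch using `torch.nn.functional.grid_sample`, not the paper's logo transformation.

```python
import torch
import torch.nn.functional as F

def sample_logo_texture(logo, uv):
    """Differentiably sample a 2D logo image at per-face UV coordinates.

    logo : (3, H, W) 2D adversarial logo texture, values in [0, 1]
    uv   : (N, 2) UV coordinates in [0, 1] for the mesh region covered by the logo
    Returns (N, 3) colors; gradients flow back to the 2D logo image.
    """
    grid = uv.view(1, 1, -1, 2) * 2 - 1          # grid_sample expects coords in [-1, 1]
    colors = F.grid_sample(logo.unsqueeze(0), grid, align_corners=True)  # (1, 3, 1, N)
    return colors.squeeze(0).squeeze(1).T        # -> (N, 3)
```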
arXiv Detail & Related papers (2020-06-25T18:34:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.