Can 3D Adversarial Logos Cloak Humans?
- URL: http://arxiv.org/abs/2006.14655v2
- Date: Fri, 27 Nov 2020 07:18:55 GMT
- Title: Can 3D Adversarial Logos Cloak Humans?
- Authors: Yi Wang, Jingyang Zhou, Tianlong Chen, Sijia Liu, Shiyu Chang,
Chandrajit Bajaj, Zhangyang Wang
- Abstract summary: This paper presents a new 3D adversarial logo attack.
We construct an arbitrary-shape logo from a 2D texture image and map this image onto a 3D adversarial logo.
The resulting 3D adversarial logo is then viewed as an adversarial texture that enables easy manipulation of its shape and position.
Unlike existing adversarial patches, our new 3D adversarial logo is shown to fool state-of-the-art deep object detectors robustly under model rotations.
- Score: 115.20718041659357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Following the trend of adversarial attacks, researchers have attempted to
fool trained object detectors in 2D scenes. Among these, an intriguing new form of
attack with potential real-world usage is to append adversarial patches (e.g.,
logos) to images. Nevertheless, much less is known about adversarial attacks
from 3D rendering views, which are essential for an attack to remain
persistently strong in the physical world. This paper presents a new 3D
adversarial logo attack: we construct an arbitrary-shape logo from a 2D texture
image and map this image onto a 3D adversarial logo via a texture mapping
called logo transformation. The resulting 3D adversarial logo is then viewed as
an adversarial texture that enables easy manipulation of its shape and position.
This greatly extends the versatility of adversarial training for
computer-graphics-synthesized imagery. In contrast to the traditional adversarial
patch, this new form of attack is mapped into the 3D object world and
back-propagates to the 2D image domain through differentiable rendering. In
addition, and unlike existing adversarial patches, our new 3D adversarial logo
is shown to fool state-of-the-art deep object detectors robustly under model
rotations, taking one step further toward realistic attacks in the physical
world. Our code is available at https://github.com/TAMU-VITA/3D_Adversarial_Logo.
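To make the attack pipeline concrete, below is a minimal sketch of the core optimization loop: a texture on a 3D mesh is updated through a differentiable renderer so that rendered views suppress a detector's confidence. This is an illustration, not the authors' released implementation; the ico-sphere stand-in mesh, the PyTorch3D renderer configuration, and the detector_person_score stub are all assumptions made here for brevity (the paper attaches the logo to human meshes and attacks real person detectors).

```python
import torch
from pytorch3d.utils import ico_sphere
from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    FoVPerspectiveCameras, RasterizationSettings, MeshRasterizer,
    MeshRenderer, SoftPhongShader, PointLights, TexturesVertex,
    look_at_view_transform,
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Stand-in geometry; the paper instead attaches a logo region to human meshes.
mesh = ico_sphere(level=2, device=device)
n_verts = mesh.verts_packed().shape[0]

# Learnable per-vertex colors play the role of the adversarial logo texture.
tex = torch.full((1, n_verts, 3), 0.5, device=device, requires_grad=True)

def render_view(azim: float) -> torch.Tensor:
    """Differentiably render the textured mesh from a given azimuth angle."""
    R, T = look_at_view_transform(dist=2.7, elev=10.0, azim=azim)
    cameras = FoVPerspectiveCameras(R=R, T=T, device=device)
    renderer = MeshRenderer(
        rasterizer=MeshRasterizer(
            cameras=cameras,
            raster_settings=RasterizationSettings(image_size=128),
        ),
        shader=SoftPhongShader(device=device, cameras=cameras,
                               lights=PointLights(device=device)),
    )
    textured = Meshes(verts=mesh.verts_list(), faces=mesh.faces_list(),
                      textures=TexturesVertex(verts_features=tex.clamp(0, 1)))
    return renderer(textured)[..., :3]  # (1, H, W, RGB)

def detector_person_score(img: torch.Tensor) -> torch.Tensor:
    # Hypothetical stand-in for a real, differentiable person detector's
    # maximum confidence on the rendered image; swap in an actual model here.
    return img.mean()

opt = torch.optim.Adam([tex], lr=0.01)
for step in range(200):
    azim = float(torch.empty(1).uniform_(-60.0, 60.0))  # random rotation
    loss = detector_person_score(render_view(azim))     # minimize detection
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Sampling a random azimuth at each step mirrors the paper's robustness goal under model rotations: the learned texture must lower the detection score across viewpoints rather than for a single fixed view.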
Related papers
- Toon3D: Seeing Cartoons from a New Perspective [52.85312338932685]
We focus our analysis on hand-drawn images from cartoons and anime.
Many cartoons are created by artists without a 3D rendering engine, which means that any new image of a scene is hand-drawn.
We correct for 2D drawing inconsistencies to recover a plausible 3D structure such that the newly warped drawings are consistent with each other.
arXiv Detail & Related papers (2024-05-16T17:59:51Z)
- Adv3D: Generating 3D Adversarial Examples for 3D Object Detection in Driving Scenarios with NeRF [19.55666600076762]
Adv3D is the first exploration of modeling adversarial examples as Neural Radiance Fields (NeRFs).
NeRFs provide photorealistic appearances and 3D accurate generation, yielding a more realistic and realizable adversarial example.
We propose primitive-aware sampling and semantic-guided regularization that enable 3D patch attacks with camouflage adversarial texture.
arXiv Detail & Related papers (2023-09-04T04:29:01Z)
- ARTIC3D: Learning Robust Articulated 3D Shapes from Noisy Web Image Collections [71.46546520120162]
Estimating 3D articulated shapes like animal bodies from monocular images is inherently challenging.
We propose ARTIC3D, a self-supervised framework to reconstruct per-instance 3D shapes from a sparse image collection in-the-wild.
We produce realistic animations by fine-tuning the rendered shape and texture under rigid part transformations.
arXiv Detail & Related papers (2023-06-07T17:47:50Z)
- Self-Supervised Geometry-Aware Encoder for Style-Based 3D GAN Inversion [115.82306502822412]
StyleGAN has achieved great progress in 2D face reconstruction and semantic editing via image inversion and latent editing.
A corresponding generic 3D GAN inversion framework is still missing, limiting the applications of 3D face reconstruction and semantic editing.
We study the challenging problem of 3D GAN inversion where a latent code is predicted given a single face image to faithfully recover its 3D shapes and detailed textures.
arXiv Detail & Related papers (2022-12-14T18:49:50Z)
- 3DHumanGAN: 3D-Aware Human Image Generation with 3D Pose Mapping [37.14866512377012]
3DHumanGAN is a 3D-aware generative adversarial network that synthesizes photorealistic images of full-body humans.
We propose a novel generator architecture in which a 2D convolutional backbone is modulated by a 3D pose mapping network.
arXiv Detail & Related papers (2022-12-14T17:59:03Z)
- AniFaceGAN: Animatable 3D-Aware Face Image Generation for Video Avatars [71.00322191446203]
2D generative models often suffer from undesirable artifacts when rendering images from different camera viewpoints.
Recently, 3D-aware GANs extend 2D GANs for explicit disentanglement of camera pose by leveraging 3D scene representations.
We propose an animatable 3D-aware GAN for multiview consistent face animation generation.
arXiv Detail & Related papers (2022-10-12T17:59:56Z)
- Style Agnostic 3D Reconstruction via Adversarial Style Transfer [23.304453155586312]
Reconstructing the 3D geometry of an object from an image is a major challenge in computer vision.
We propose an approach that enables differentiable-rendering-based learning of 3D objects from images with backgrounds.
arXiv Detail & Related papers (2021-10-20T21:24:44Z)
- Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors [72.7633556669675]
This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes.
Unlike existing adversarial patches, our new 3D adversarial patch is shown to fool state-of-the-art deep object detectors robustly under varying views.
arXiv Detail & Related papers (2021-04-22T14:36:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.