DTA: Physical Camouflage Attacks using Differentiable Transformation
Network
- URL: http://arxiv.org/abs/2203.09831v1
- Date: Fri, 18 Mar 2022 10:15:02 GMT
- Title: DTA: Physical Camouflage Attacks using Differentiable Transformation
Network
- Authors: Naufal Suryanto, Yongsu Kim, Hyoeun Kang, Harashta Tatimma Larasati,
Youngyeo Yun, Thi-Thu-Huong Le, Hunmin Yang, Se-Yoon Oh, Howon Kim
- Abstract summary: We propose a framework for generating a robust physical adversarial pattern on a target object to camouflage it against object detection models.
Using our attack framework, an adversary can gain both the advantages of legacy photo-realistic renderers and the benefit of white-box access.
Our experiments show that our camouflaged 3D vehicles can successfully evade state-of-the-art object detection models.
- Score: 0.4215938932388722
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To perform adversarial attacks in the physical world, many studies have
proposed adversarial camouflage, a method to hide a target object by applying
camouflage patterns on 3D object surfaces. For obtaining optimal physical
adversarial camouflage, previous studies have utilized the so-called neural
renderer, as it supports differentiability. However, existing neural renderers
cannot fully represent various real-world transformations due to a lack of
control of scene parameters compared to the legacy photo-realistic renderers.
In this paper, we propose the Differentiable Transformation Attack (DTA), a
framework for generating a robust physical adversarial pattern on a target
object to camouflage it against object detection models with a wide range of
transformations. It utilizes our novel Differentiable Transformation Network
(DTN), which learns the expected transformation of a rendered object when the
texture is changed while preserving the original properties of the target
object. Using our attack framework, an adversary can gain both the advantages
of legacy photo-realistic renderers, including various physical-world
transformations, and the benefit of white-box access through
differentiability. Our experiments show that our camouflaged 3D vehicles can
successfully evade state-of-the-art object detection models in the
photo-realistic environment (i.e., CARLA on Unreal Engine). Furthermore, our
demonstration on a scaled Tesla Model 3 proves the applicability and
transferability of our method to the real world.
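As an illustration of the mechanism described in the abstract, below is a minimal, hypothetical sketch (not the authors' released code) of how a camouflage texture could be optimized through a frozen, pretrained differentiable transformation network: the network predicts how reference renderings of the target would look with a candidate texture applied, an object detector scores the predictions, and the texture is updated to suppress detections across the reference views. All names (dtn, detector, ref_photos, masks) and tensor shapes are assumptions.
```python
# Hypothetical sketch of DTA-style texture optimization with a frozen,
# pretrained differentiable transformation network (DTN). Names, shapes,
# and the detector interface are assumptions, not the authors' code.
import torch

def optimize_camouflage(dtn, detector, ref_photos, masks, steps=500, lr=0.01):
    """dtn(photo, texture, mask) -> predicted photo with the new texture applied."""
    # Adversarial texture parameterized in logit space so pixel values stay in [0, 1].
    texture_logits = torch.zeros(1, 3, 256, 256, requires_grad=True)
    opt = torch.optim.Adam([texture_logits], lr=lr)

    for _ in range(steps):
        texture = torch.sigmoid(texture_logits)
        loss = 0.0
        # Accumulate over reference renderings (poses, distances, lighting) so the
        # pattern remains adversarial under many physical-world transformations.
        for photo, mask in zip(ref_photos, masks):
            repainted = dtn(photo, texture, mask)   # scene with the candidate texture
            scores = detector(repainted)            # per-box confidence scores (assumed API)
            loss = loss + scores.max()              # suppress the strongest detection
        opt.zero_grad()
        loss.backward()
        opt.step()

    return torch.sigmoid(texture_logits).detach()
```
In practice the reference photos would span many camera poses, distances, and lighting conditions, which is what gives the optimized pattern robustness to real-world transformations.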
Related papers
- Flexible Physical Camouflage Generation Based on a Differential Approach [6.645986533504748]
This study introduces a novel approach to neural rendering, specifically tailored for adversarial camouflage.
Our method, named FPA, goes beyond traditional techniques by faithfully simulating lighting conditions and material variations.
Our findings highlight the versatility and efficacy of the FPA approach in adversarial camouflage applications.
arXiv Detail & Related papers (2024-02-21T07:15:16Z)
- Towards Transferable Targeted 3D Adversarial Attack in the Physical World [34.36328985344749]
Transferable targeted adversarial attacks could pose a greater threat to security-critical tasks.
We develop a novel framework named TT3D that rapidly reconstructs Transferable Targeted 3D textured meshes from a few multi-view images.
Experimental results show that TT3D not only exhibits superior cross-model transferability but also maintains considerable adaptability across different renderers and vision tasks.
arXiv Detail & Related papers (2023-12-15T06:33:14Z)
- Decaf: Monocular Deformation Capture for Face and Hand Interactions [77.75726740605748]
This paper introduces the first method that allows tracking human hands interacting with human faces in 3D from single monocular RGB videos.
We model hands as articulated objects inducing non-rigid face deformations during an active interaction.
Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired with a markerless multi-view camera system.
arXiv Detail & Related papers (2023-09-28T17:59:51Z)
- Camouflaged Image Synthesis Is All You Need to Boost Camouflaged Detection [65.8867003376637]
We propose a framework for synthesizing camouflage data to enhance the detection of camouflaged objects in natural scenes.
Our approach employs a generative model to produce realistic camouflage images, which can be used to train existing object detection models.
Our framework outperforms the current state-of-the-art method on three datasets.
arXiv Detail & Related papers (2023-08-13T06:55:05Z)
- Differentiable Blocks World: Qualitative 3D Decomposition by Rendering Primitives [70.32817882783608]
We present an approach that produces a simple, compact, and actionable 3D world representation by means of 3D primitives.
Unlike existing primitive decomposition methods that rely on 3D input data, our approach operates directly on images.
We show that the resulting textured primitives faithfully reconstruct the input images and accurately model the visible 3D points.
arXiv Detail & Related papers (2023-07-11T17:58:31Z)
- MoDA: Modeling Deformable 3D Objects from Casual Videos [84.29654142118018]
We propose neural dual quaternion blend skinning (NeuDBS) to achieve 3D point deformation without skin-collapsing artifacts.
To register 2D pixels across different frames, we establish a correspondence between canonical feature embeddings that encode 3D points within the canonical space.
Our approach can reconstruct 3D models for humans and animals with better qualitative and quantitative performance than state-of-the-art methods.
arXiv Detail & Related papers (2023-04-17T13:49:04Z)
- FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view Physical Adversarial Attack [5.476797414272598]
We propose a robust Full-coverage Camouflage Attack (FCA) to fool detectors.
Specifically, we first try rendering the non-planar camouflage texture over the full vehicle surface.
We then introduce a transformation function to transfer the rendered camouflaged vehicle into a photo-realistic scenario (see the sketch at the end of this list).
arXiv Detail & Related papers (2021-09-15T10:17:12Z)
- DPA: Learning Robust Physical Adversarial Camouflages for Object Detectors [5.598600329573922]
We propose the Dense Proposals Attack (DPA) to learn robust, physical and targeted adversarial camouflages for detectors.
The camouflages are robust because they remain adversarial when filmed from arbitrary viewpoints and under different illumination conditions.
We build a virtual 3D scene using the Unity simulation engine to fairly and reproducibly evaluate different physical attacks.
arXiv Detail & Related papers (2021-09-01T00:18:17Z)
- Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors [72.7633556669675]
This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes.
Unlike existing adversarial patches, our new 3D adversarial patch is shown to fool state-of-the-art deep object detectors robustly under varying views.
arXiv Detail & Related papers (2021-04-22T14:36:08Z)
- SMPLpix: Neural Avatars from 3D Human Models [56.85115800735619]
We bridge the gap between classic rendering and the latest generative networks operating in pixel space.
We train a network that directly converts a sparse set of 3D mesh vertices into photorealistic images.
We show the advantage over conventional differentiable renderers in terms of both photorealism and rendering efficiency.
arXiv Detail & Related papers (2020-08-16T10:22:00Z)
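As noted in the FCA entry above, a minimal sketch of the two steps that summary describes, under assumed interfaces, might look as follows; render_vehicle, camera, and the returned foreground mask are hypothetical placeholders, not the paper's actual code.
```python
# Hypothetical sketch of FCA-style rendering and scene transfer; the renderer
# interface below is an assumption, not the paper's implementation.
import torch

def place_in_scene(render_vehicle, texture, camera, background):
    # Step 1: differentiably render the textured vehicle; rgb and mask keep
    # gradients with respect to the camouflage texture.
    rgb, mask = render_vehicle(texture, camera)
    # Step 2: transformation function that transfers the rendered vehicle into
    # the photo-realistic background via alpha compositing.
    composite = mask * rgb + (1.0 - mask) * background
    return torch.clamp(composite, 0.0, 1.0)
```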