Flexible Physical Camouflage Generation Based on a Differential Approach
- URL: http://arxiv.org/abs/2402.13575v1
- Date: Wed, 21 Feb 2024 07:15:16 GMT
- Title: Flexible Physical Camouflage Generation Based on a Differential Approach
- Authors: Yang Li, Wenyi Tan, Chenxing Zhao, Shuangju Zhou, Xinkai Liang, and
Quan Pan
- Abstract summary: This study introduces a novel approach to neural rendering, specifically tailored for adversarial camouflage.
Our method, named FPA, goes beyond traditional techniques by faithfully simulating lighting conditions and material variations.
Our findings highlight the versatility and efficacy of the FPA approach in adversarial camouflage applications.
- Score: 6.645986533504748
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This study introduces a novel approach to neural rendering, specifically
tailored for adversarial camouflage, within an extensive 3D rendering
framework. Our method, named FPA, goes beyond traditional techniques by
faithfully simulating lighting conditions and material variations, ensuring a
nuanced and realistic representation of textures on a 3D target. To achieve
this, we employ a generative approach that learns adversarial patterns from a
diffusion model. This involves incorporating a specially designed adversarial
loss and covert constraint loss to guarantee the adversarial and covert nature
of the camouflage in the physical world. Furthermore, we showcase the
effectiveness of the proposed camouflage in sticker mode, demonstrating its
ability to cover the target without compromising adversarial information.
Through empirical and physical experiments, FPA exhibits strong performance in
terms of attack success rate and transferability. Additionally, the designed
sticker-mode camouflage, coupled with a concealment constraint, adapts to the
environment, yielding diverse styles of texture. Our findings highlight the
versatility and efficacy of the FPA approach in adversarial camouflage
applications.
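The abstract describes optimizing a texture under two competing objectives: an adversarial loss that degrades detector performance and a covert constraint loss that keeps the camouflage plausible in its environment. A minimal sketch of such a weighted combination is below; the function names, statistics, and weighting scheme are illustrative assumptions, not FPA's actual formulation.

```python
# Hypothetical sketch of combining an adversarial objective with a
# covert constraint, as described in the abstract. All names and the
# weighting scheme are assumptions for illustration only.

def adversarial_loss(detection_scores):
    # Mean detector confidence on the rendered, camouflaged target;
    # the attacker wants to drive this down.
    return sum(detection_scores) / len(detection_scores)

def covert_loss(texture_stats, background_stats):
    # Penalize deviation of texture statistics (e.g. per-channel colour
    # means) from the surrounding environment, encouraging covertness.
    return sum((t - b) ** 2 for t, b in zip(texture_stats, background_stats))

def total_loss(detection_scores, texture_stats, background_stats, lam=0.5):
    # lam trades attack strength against how well the texture blends in.
    return adversarial_loss(detection_scores) + lam * covert_loss(
        texture_stats, background_stats)
```

In a full pipeline this scalar would be backpropagated through a differentiable renderer to update the texture; the sketch only shows how the two objectives are balanced.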
Related papers
- CNCA: Toward Customizable and Natural Generation of Adversarial Camouflage for Vehicle Detectors [19.334642862951537]
We propose a Customizable and Natural Camouflage Attack (CNCA) method by leveraging an off-the-shelf pre-trained diffusion model.
Our method can generate natural and customizable adversarial camouflage while maintaining high attack performance.
arXiv Detail & Related papers (2024-09-26T15:41:18Z)
- RAUCA: A Novel Physical Adversarial Attack on Vehicle Detectors via Robust and Accurate Camouflage Generation [19.334642862951537]
We propose a robust and accurate camouflage generation method, namely RAUCA.
The core of RAUCA is a novel neural rendering component, Neural Renderer Plus (NRP), which can accurately project vehicle textures and render images with environmental characteristics such as lighting and weather.
Experimental results on six popular object detectors show that RAUCA consistently outperforms existing methods in both simulation and real-world settings.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- Adv-Diffusion: Imperceptible Adversarial Face Identity Attack via Latent Diffusion Model [61.53213964333474]
We propose a unified framework Adv-Diffusion that can generate imperceptible adversarial identity perturbations in the latent space but not the raw pixel space.
Specifically, we propose the identity-sensitive conditioned diffusion generative model to generate semantic perturbations in the surroundings.
The designed adaptive strength-based adversarial perturbation algorithm can ensure both attack transferability and stealthiness.
arXiv Detail & Related papers (2023-12-18T15:25:23Z)
- WarpDiffusion: Efficient Diffusion Model for High-Fidelity Virtual Try-on [81.15988741258683]
Image-based Virtual Try-On (VITON) aims to transfer an in-shop garment image onto a target person.
Current methods often overlook the synthesis quality around the garment-skin boundary and realistic effects like wrinkles and shadows on the warped garments.
We propose WarpDiffusion, which bridges the warping-based and diffusion-based paradigms via a novel informative and local garment feature attention mechanism.
arXiv Detail & Related papers (2023-12-06T18:34:32Z)
- NeRFTAP: Enhancing Transferability of Adversarial Patches on Face Recognition using Neural Radiance Fields [15.823538329365348]
We propose a novel adversarial attack method that considers both the transferability to the FR model and the victim's face image.
We generate new view face images for the source and target subjects to enhance transferability of adversarial patches.
Our work provides valuable insights for enhancing the robustness of FR systems in practical adversarial settings.
arXiv Detail & Related papers (2023-11-29T03:17:14Z)
- The Making and Breaking of Camouflage [95.37449361842656]
We show that camouflage can be measured by the similarity between background and foreground features and boundary visibility.
We incorporate the proposed camouflage score into a generative model as an auxiliary loss and show that effective camouflage images or videos can be synthesised in a scalable manner.
arXiv Detail & Related papers (2023-09-07T17:58:05Z)
- CamoDiffusion: Camouflaged Object Detection via Conditional Diffusion Models [72.93652777646233]
Camouflaged Object Detection (COD) is a challenging task in computer vision due to the high similarity between camouflaged objects and their surroundings.
We propose a new paradigm that treats COD as a conditional mask-generation task leveraging diffusion models.
Our method, dubbed CamoDiffusion, employs the denoising process of diffusion models to iteratively reduce the noise of the mask.
arXiv Detail & Related papers (2023-05-29T07:49:44Z)
- CamDiff: Camouflage Image Augmentation via Diffusion Model [83.35960536063857]
CamDiff is a novel approach that leverages a latent diffusion model to synthesize salient objects in camouflaged scenes.
Our approach enables flexible editing and efficient large-scale dataset generation at a low cost.
arXiv Detail & Related papers (2023-04-11T19:37:47Z)
- DTA: Physical Camouflage Attacks using Differentiable Transformation Network [0.4215938932388722]
We propose a framework for generating a robust physical adversarial pattern on a target object to camouflage it against object detection models.
Using our attack framework, an adversary can gain both the advantages of legacy photo-realistic rendering and the benefit of white-box access.
Our experiments show that our camouflaged 3D vehicles can successfully evade state-of-the-art object detection models.
arXiv Detail & Related papers (2022-03-18T10:15:02Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.