CNCA: Toward Customizable and Natural Generation of Adversarial Camouflage for Vehicle Detectors
- URL: http://arxiv.org/abs/2409.17963v1
- Date: Thu, 26 Sep 2024 15:41:18 GMT
- Title: CNCA: Toward Customizable and Natural Generation of Adversarial Camouflage for Vehicle Detectors
- Authors: Linye Lyu, Jiawei Zhou, Daojing He, Yu Li
- Abstract summary: We propose a Customizable and Natural Camouflage Attack (CNCA) method by leveraging an off-the-shelf pre-trained diffusion model.
Our method can generate natural and customizable adversarial camouflage while maintaining high attack performance.
- Score: 19.334642862951537
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Prior works on physical adversarial camouflage against vehicle detectors mainly focus on the effectiveness and robustness of the attack. The current most successful methods optimize 3D vehicle texture at a pixel level. However, this results in conspicuous and attention-grabbing patterns in the generated camouflage, which humans can easily identify. To address this issue, we propose a Customizable and Natural Camouflage Attack (CNCA) method that leverages an off-the-shelf pre-trained diffusion model. By sampling the optimal texture image from the diffusion model with a user-specific text prompt, our method can generate natural and customizable adversarial camouflage while maintaining high attack performance. Extensive experiments in the digital and physical worlds, together with user studies, demonstrate that our proposed method generates significantly more natural-looking camouflage than the state-of-the-art baselines while achieving competitive attack performance. Our code is available at https://anonymous.4open.science/r/CNCA-1D54
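The abstract describes the approach only in broad strokes: a text prompt conditions a pre-trained diffusion model, the sampled texture is applied to the vehicle, and the rendered result is scored by the target detector so the texture can be pushed toward low detection confidence. The sketch below illustrates that kind of loop in PyTorch; it is not the authors' implementation, and the diffusion model, renderer, and detector are hypothetical stub modules standing in for the real, heavyweight components.

```python
# Minimal sketch of a prompt-conditioned adversarial texture optimization loop.
# All three modules are hypothetical stand-ins so the loop is self-contained;
# in the paper they would be an off-the-shelf text-to-image diffusion model,
# a differentiable vehicle renderer, and a pre-trained object detector.
import torch
import torch.nn as nn

class StubDiffusion(nn.Module):
    """Stand-in for a pre-trained diffusion model: latent + prompt embedding -> texture."""
    def forward(self, latent, prompt_emb):
        return torch.sigmoid(latent + prompt_emb.mean())

class StubRenderer(nn.Module):
    """Stand-in for a differentiable renderer that paints the texture onto a vehicle."""
    def forward(self, texture):
        return texture.mean(dim=0, keepdim=True).expand(3, 64, 64)

class StubDetector(nn.Module):
    """Stand-in for a pre-trained vehicle detector; returns a detection confidence."""
    def forward(self, image):
        return torch.sigmoid(image.mean())

diffusion, renderer, detector = StubDiffusion(), StubRenderer(), StubDetector()

prompt_emb = torch.randn(16)                          # embedding of a user-specified text prompt
latent = torch.randn(3, 64, 64, requires_grad=True)   # the variable being optimized
optimizer = torch.optim.Adam([latent], lr=1e-2)

for step in range(100):
    texture = diffusion(latent, prompt_emb)   # sample a natural-looking texture for the prompt
    scene = renderer(texture)                 # render the camouflaged vehicle into a scene
    confidence = detector(scene)              # detector's confidence that a vehicle is present
    loss = confidence                         # adversarial objective: suppress detection
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In the actual method, which variable is optimized, how robustness transformations are applied, and the exact losses all depend on the paper's design; the sketch only shows the overall prompt-to-detector flow the abstract implies.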
Related papers
- RAUCA: A Novel Physical Adversarial Attack on Vehicle Detectors via Robust and Accurate Camouflage Generation [19.334642862951537]
We propose a robust and accurate camouflage generation method, namely RAUCA.
The core of RAUCA is a novel neural rendering component, Neural Renderer Plus (NRP), which can accurately project vehicle textures and render images with environmental characteristics such as lighting and weather.
Experimental results on six popular object detectors show that RAUCA consistently outperforms existing methods in both simulation and real-world settings.
arXiv Detail & Related papers (2024-02-24T16:50:10Z)
- Flexible Physical Camouflage Generation Based on a Differential Approach [6.645986533504748]
This study introduces a novel approach to neural rendering, specifically tailored for adversarial camouflage.
Our method, named FPA, goes beyond traditional techniques by faithfully simulating lighting conditions and material variations.
Our findings highlight the versatility and efficacy of the FPA approach in adversarial camouflage applications.
arXiv Detail & Related papers (2024-02-21T07:15:16Z)
- The Making and Breaking of Camouflage [95.37449361842656]
We show that camouflage can be measured by the similarity between background and foreground features and boundary visibility.
We incorporate the proposed camouflage score into a generative model as an auxiliary loss and show that effective camouflage images or videos can be synthesised in a scalable manner.
arXiv Detail & Related papers (2023-09-07T17:58:05Z)
- CamDiff: Camouflage Image Augmentation via Diffusion Model [83.35960536063857]
CamDiff is a novel approach that leverages a latent diffusion model to synthesize salient objects in camouflaged scenes.
Our approach enables flexible editing and efficient large-scale dataset generation at a low cost.
arXiv Detail & Related papers (2023-04-11T19:37:47Z)
- Natural Color Fool: Towards Boosting Black-box Unrestricted Attacks [68.48271396073156]
We propose a novel Natural Color Fool (NCF) to boost transferability of adversarial examples without damaging image quality.
Results show that our NCF can outperform state-of-the-art approaches by 15.0% to 32.9% for fooling normally trained models and 10.0% to 25.3% for evading defense methods.
arXiv Detail & Related papers (2022-10-05T06:24:16Z)
- Adversarial Camouflage for Node Injection Attack on Graphs [64.5888846198005]
Node injection attacks on Graph Neural Networks (GNNs) have received increasing attention recently, due to their ability to degrade GNN performance with high attack success rates.
Our study indicates that these attacks often fail in practical scenarios, since defense/detection methods can easily identify and remove the injected nodes.
To address this, we focus on camouflaging node injection attacks, making injected nodes appear normal and imperceptible to defense/detection methods.
arXiv Detail & Related papers (2022-08-03T02:48:23Z)
- Towards Deeper Understanding of Camouflaged Object Detection [64.81987999832032]
We argue that the binary segmentation setting fails to fully capture the concept of camouflage.
We present the first triple-task learning framework to simultaneously localize, segment and rank camouflaged objects.
arXiv Detail & Related papers (2022-05-23T14:26:18Z)
- FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view Physical Adversarial Attack [5.476797414272598]
We propose a robust Full-coverage Camouflage Attack (FCA) to fool detectors.
Specifically, we first render the non-planar camouflage texture over the full vehicle surface.
We then introduce a transformation function to transfer the rendered camouflaged vehicle into a photo-realistic scenario.
arXiv Detail & Related papers (2021-09-15T10:17:12Z)
- Cooling-Shrinking Attack: Blinding the Tracker with Imperceptible Noises [87.53808756910452]
A cooling-shrinking attack method is proposed to deceive state-of-the-art SiameseRPN-based trackers.
Our method has good transferability and is able to deceive other top-performing trackers such as DaSiamRPN, DaSiamRPN-UpdateNet, and DiMP.
arXiv Detail & Related papers (2020-03-21T07:13:40Z)
- Adversarial Camouflage: Hiding Physical-World Attacks with Natural Styles [40.57099683047126]
We propose a novel approach to craft and camouflage physical-world adversarial examples into natural styles that appear legitimate to human observers.
AdvCam can also be used to protect private information from being detected by deep learning systems.
arXiv Detail & Related papers (2020-03-08T07:22:41Z)