FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view
Physical Adversarial Attack
- URL: http://arxiv.org/abs/2109.07193v1
- Date: Wed, 15 Sep 2021 10:17:12 GMT
- Title: FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view
Physical Adversarial Attack
- Authors: DonghuaWang, Tingsong Jiang, Jialiang Sun, Weien Zhou, Xiaoya Zhang,
Zhiqiang Gong, Wen Yao and Xiaoqian Chen
- Abstract summary: We propose a robust Full-coverage Camouflage Attack (FCA) to fool detectors.
Specifically, we first try rendering the non-planar camouflage texture over the full vehicle surface.
We then introduce a transformation function to transfer the rendered camouflaged vehicle into a photo-realistic scenario.
- Score: 5.476797414272598
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Physical adversarial attacks in object detection have attracted increasing
attention. However, most previous works focus on hiding the objects from the
detector by generating an individual adversarial patch, which only covers the
planar part of the vehicle's surface and fails to attack the detector in
physical scenarios for multi-view, long-distance and partially occluded
objects. To bridge the gap between digital attacks and physical attacks, we
exploit the full 3D vehicle surface to propose a robust Full-coverage
Camouflage Attack (FCA) to fool detectors. Specifically, we first try rendering
the non-planar camouflage texture over the full vehicle surface. To mimic the
real-world environment conditions, we then introduce a transformation function
to transfer the rendered camouflaged vehicle into a photo-realistic scenario.
Finally, we design an efficient loss function to optimize the camouflage
texture. Experiments show that the full-coverage camouflage attack can not only
outperform state-of-the-art methods under various test cases but also
generalize to different environments, vehicles, and object detectors.
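To make the three-stage pipeline concrete, below is a minimal PyTorch-style sketch of how such an optimization could be organized, assuming access to a differentiable renderer and a white-box detector; the renderer, scene sampler, and detector interfaces are illustrative placeholders, not the authors' released implementation.

import torch

def optimize_camouflage(renderer, detector, scenes, steps=1000, lr=0.01):
    # Texture atlas covering the full (non-planar) vehicle surface; this tensor
    # is the variable being optimized.
    texture = torch.rand(1, 3, 1024, 1024, requires_grad=True)
    optimizer = torch.optim.Adam([texture], lr=lr)

    for _ in range(steps):
        background, camera_pose = scenes.sample()   # hypothetical scene sampler

        # (1) Render the camouflaged vehicle from the sampled viewpoint.
        rendered, mask = renderer(texture.clamp(0, 1), camera_pose)

        # (2) Transformation function: place the rendering into a
        #     photo-realistic scene (plain alpha compositing here).
        composite = mask * rendered + (1 - mask) * background

        # (3) Attack loss: suppress the detector's strongest detection
        #     (one common choice of objective for hiding attacks).
        scores = detector(composite)    # per-candidate confidence scores
        loss = scores.max()

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return texture.detach().clamp(0, 1)

In practice the scene sampler would also vary distance, occlusion, and lighting so that the learned texture stays adversarial across the multi-view, long-distance, and partially occluded cases mentioned above.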
Related papers
- Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving [15.516055760190884]
We introduce an adversarial 3D projection attack specifically targeting object detection in autonomous driving scenarios.
Our results demonstrate the effectiveness of the proposed attack in deceiving YOLOv3 and Mask R-CNN in physical settings.
arXiv Detail & Related papers (2024-09-25T22:27:11Z)
- TPatch: A Triggered Physical Adversarial Patch [19.768494127237393]
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid the suspicion of human drivers, we propose a content-based camouflage method and an attack enhancement method to strengthen it.
arXiv Detail & Related papers (2023-12-30T06:06:01Z)
- ACTIVE: Towards Highly Transferable 3D Physical Camouflage for Universal
and Robust Vehicle Evasion [3.5049174854580842]
We present Adversarial Camouflage for Transferable and Intensive Vehicle Evasion (ACTIVE), a state-of-the-art physical camouflage attack framework.
ACTIVE generates universal and robust adversarial camouflage capable of concealing any 3D vehicle from detectors.
Our experiments on 15 different models show that ACTIVE consistently outperforms existing works on various public detectors.
arXiv Detail & Related papers (2023-08-14T08:52:41Z)
- CamoFormer: Masked Separable Attention for Camouflaged Object Detection [94.2870722866853]
We present a simple masked separable attention (MSA) for camouflaged object detection.
We first separate the multi-head self-attention into three parts, which are responsible for distinguishing the camouflaged objects from the background using different mask strategies.
We propose to capture high-resolution semantic representations progressively based on a simple top-down decoder with the proposed MSA to attain precise segmentation results.
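Read literally, this suggests an attention layer whose heads are split into three groups, each restricted by a different mask. The sketch below illustrates one way such head-grouped attention could be realized in PyTorch; the group sizes, mask semantics (foreground-only, background-only, unmasked), and tensor shapes are assumptions made for illustration, not the paper's exact design.

import torch

def masked_separable_attention(q, k, v, fg_mask, num_heads=6):
    # q, k, v: (batch, tokens, dim); fg_mask: (batch, tokens), 1.0 marking
    # tokens currently estimated to belong to the camouflaged foreground.
    b, n, d = q.shape
    head_dim = d // num_heads

    def split(x):  # (batch, tokens, dim) -> (batch, heads, tokens, head_dim)
        return x.view(b, n, num_heads, head_dim).transpose(1, 2)

    qh, kh, vh = split(q), split(k), split(v)
    logits = qh @ kh.transpose(-2, -1) / head_dim ** 0.5       # (b, heads, n, n)

    # Additive biases restricting which keys each head group may attend to.
    keys = fg_mask.float().unsqueeze(1).unsqueeze(2)           # (b, 1, 1, n)
    neg = torch.finfo(logits.dtype).min
    fg_bias = torch.where(keys > 0.5, torch.zeros_like(keys), torch.full_like(keys, neg))
    bg_bias = torch.where(keys > 0.5, torch.full_like(keys, neg), torch.zeros_like(keys))
    no_bias = torch.zeros_like(keys)

    # Three head groups: foreground-masked, background-masked, and unmasked.
    g = num_heads // 3
    bias = torch.cat([fg_bias.expand(-1, g, n, -1),
                      bg_bias.expand(-1, g, n, -1),
                      no_bias.expand(-1, num_heads - 2 * g, n, -1)], dim=1)

    out = torch.softmax(logits + bias, dim=-1) @ vh            # (b, heads, n, head_dim)
    return out.transpose(1, 2).reshape(b, n, d)

A top-down decoder would then apply such attention at progressively higher resolutions to refine the segmentation, as the summary describes.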
arXiv Detail & Related papers (2022-12-10T10:03:27Z)
- DTA: Physical Camouflage Attacks using Differentiable Transformation
Network [0.4215938932388722]
We propose a framework for generating a robust physical adversarial pattern on a target object to camouflage it against object detection models.
Using our attack framework, an adversary can gain both the advantages of legacy photo-realistic rendering and the benefit of white-box access.
Our experiments show that our camouflaged 3D vehicles can successfully evade state-of-the-art object detection models.
arXiv Detail & Related papers (2022-03-18T10:15:02Z)
- 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D
Object Detection [111.32054128362427]
In safety-critical settings, robustness on out-of-distribution and long-tail samples is fundamental to circumvent dangerous issues.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share CrashD, an open-source synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z)
- DPA: Learning Robust Physical Adversarial Camouflages for Object
Detectors [5.598600329573922]
We propose the Dense Proposals Attack (DPA) to learn robust, physical and targeted adversarial camouflages for detectors.
The camouflages are robust because they remain adversarial when filmed from arbitrary viewpoints and under different illumination conditions.
We build a virtual 3D scene using the Unity simulation engine to fairly and reproducibly evaluate different physical attacks.
arXiv Detail & Related papers (2021-09-01T00:18:17Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Learning Transferable 3D Adversarial Cloaks for Deep Trained Detectors [72.7633556669675]
This paper presents a novel patch-based adversarial attack pipeline that trains adversarial patches on 3D human meshes.
Unlike existing adversarial patches, our new 3D adversarial patch is shown to fool state-of-the-art deep object detectors robustly under varying views.
arXiv Detail & Related papers (2021-04-22T14:36:08Z)
- Dynamic Adversarial Patch for Evading Object Detection Models [47.32228513808444]
We present an innovative attack method against object detectors applied in a real-world setup.
Our method uses dynamic adversarial patches which are placed at multiple predetermined locations on a target object.
We improved the attack by generating patches that consider the semantic distance between the target object and its classification.
arXiv Detail & Related papers (2020-10-25T08:55:40Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)