ACTIVE: Towards Highly Transferable 3D Physical Camouflage for Universal
and Robust Vehicle Evasion
- URL: http://arxiv.org/abs/2308.07009v2
- Date: Wed, 16 Aug 2023 09:47:08 GMT
- Title: ACTIVE: Towards Highly Transferable 3D Physical Camouflage for Universal
and Robust Vehicle Evasion
- Authors: Naufal Suryanto, Yongsu Kim, Harashta Tatimma Larasati, Hyoeun Kang,
Thi-Thu-Huong Le, Yoonyoung Hong, Hunmin Yang, Se-Yoon Oh, Howon Kim
- Abstract summary: We present Adversarial Camouflage for Transferable and Intensive Vehicle Evasion (ACTIVE), a state-of-the-art physical camouflage attack framework.
ACTIVE generates universal and robust adversarial camouflage capable of concealing any 3D vehicle from detectors.
Our experiments on 15 different models show that ACTIVE consistently outperforms existing works on various public detectors.
- Score: 3.5049174854580842
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial camouflage has garnered attention for its ability to attack
object detectors from any viewpoint by covering the entire object's surface.
However, universality and robustness often fall short in existing methods because
the transferability aspect is overlooked, restricting their application to a
specific target with limited performance. To address
these challenges, we present Adversarial Camouflage for Transferable and
Intensive Vehicle Evasion (ACTIVE), a state-of-the-art physical camouflage
attack framework designed to generate universal and robust adversarial
camouflage capable of concealing any 3D vehicle from detectors. Our framework
incorporates innovative techniques to enhance universality and robustness,
including a refined texture rendering that enables common texture application
to different vehicles without being constrained to a specific texture map, a
novel stealth loss that renders the vehicle undetectable, and a smooth and
camouflage loss to enhance the naturalness of the adversarial camouflage. Our
extensive experiments on 15 different models show that ACTIVE consistently
outperforms existing works on various public detectors, including the latest
YOLOv7. Notably, our universality evaluations reveal promising transferability
not just to other vehicles but also to other vehicle classes, other tasks
(segmentation models), and the real world.
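
As a rough illustration of how the three loss terms named in the abstract could compose, here is a minimal PyTorch sketch. The formulations and weights are assumptions for illustration, not the paper's actual definitions; `scores` stands in for the detection confidences a detector produces on the rendered, camouflaged vehicle.

```python
# Illustrative sketch of the three loss terms named in the abstract
# (stealth, smoothness, camouflage). These formulations are assumptions,
# not the paper's actual definitions.
import torch

def stealth_loss(scores: torch.Tensor) -> torch.Tensor:
    """Push every detection confidence for the vehicle toward zero."""
    return scores.clamp(min=0).mean()

def smooth_loss(texture: torch.Tensor) -> torch.Tensor:
    """Total-variation penalty keeping the texture smooth and printable."""
    dh = (texture[:, :, 1:, :] - texture[:, :, :-1, :]).abs().mean()
    dw = (texture[:, :, :, 1:] - texture[:, :, :, :-1]).abs().mean()
    return dh + dw

def camouflage_loss(texture: torch.Tensor, palette: torch.Tensor) -> torch.Tensor:
    """Keep each texel close to its nearest color in a natural palette."""
    tex = texture.permute(0, 2, 3, 1).reshape(-1, 3)  # (H*W, 3) RGB texels
    return torch.cdist(tex, palette).min(dim=1).values.mean()

def total_loss(texture, scores, palette, w_smooth=0.1, w_cam=0.05):
    return (stealth_loss(scores)
            + w_smooth * smooth_loss(texture)
            + w_cam * camouflage_loss(texture, palette))

# Toy usage: random texture, stand-in detector scores, 5-color palette.
texture = torch.rand(1, 3, 64, 64, requires_grad=True)
loss = total_loss(texture, torch.rand(10), torch.rand(5, 3))
loss.backward()  # gradients flow back to the texture
```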
Related papers
- TACO: Adversarial Camouflage Optimization on Trucks to Fool Object Detectors [0.0]
Adversarial attacks threaten the reliability of machine learning models in critical applications like autonomous vehicles and defense systems.
We present Truck Adversarial Camouflage Optimization (TACO), a novel framework that generates adversarial camouflage patterns on 3D vehicle models.
We show that TACO significantly degrades YOLOv8's detection performance, achieving an AP@0.5 of 0.0099 on unseen test data.
arXiv Detail & Related papers (2024-10-28T18:40:06Z)
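
The TACO entry above follows the general recipe shared by these camouflage attacks: render a textured 3D model into images and minimize the detector's confidence by gradient descent on the texture. A minimal sketch of that loop, where `render_truck` and `detector` are hypothetical differentiable stand-ins, not the paper's API:

```python
# Generic texture-optimization loop in the spirit of TACO (illustrative only).
# `render_truck` and `detector` are hypothetical differentiable stand-ins.
import torch

def optimize_texture(render_truck, detector, steps=500, lr=0.01):
    texture = torch.rand(1, 3, 512, 512, requires_grad=True)  # UV texture map
    opt = torch.optim.Adam([texture], lr=lr)
    for _ in range(steps):
        image = render_truck(texture.clamp(0, 1))  # render truck with texture
        loss = detector(image).max()               # strongest detection score
        opt.zero_grad()
        loss.backward()
        opt.step()
    return texture.detach().clamp(0, 1)
```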
- RAUCA: A Novel Physical Adversarial Attack on Vehicle Detectors via Robust and Accurate Camouflage Generation [19.334642862951537]
We propose a robust and accurate camouflage generation method, namely RAUCA.
The core of RAUCA is a novel neural rendering component, Neural Renderer Plus (NRP), which can accurately project vehicle textures and render images with environmental characteristics such as lighting and weather.
Experimental results on six popular object detectors show that RAUCA consistently outperforms existing methods in both simulation and real-world settings.
arXiv Detail & Related papers (2024-02-24T16:50:10Z)
- TPatch: A Triggered Physical Adversarial Patch [19.768494127237393]
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid the suspicion of human drivers, we propose a content-based camouflage method and an attack enhancement method to strengthen it.
arXiv Detail & Related papers (2023-12-30T06:06:01Z)
- Camouflaged Image Synthesis Is All You Need to Boost Camouflaged Detection [65.8867003376637]
We propose a framework for synthesizing camouflage data to enhance the detection of camouflaged objects in natural scenes.
Our approach employs a generative model to produce realistic camouflage images, which can be used to train existing object detection models.
Our framework outperforms the current state-of-the-art method on three datasets.
arXiv Detail & Related papers (2023-08-13T06:55:05Z)
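
The entry above uses a generative model as a data source rather than as an attack. A minimal sketch of mixing synthesized camouflage images into a training set; `generator` and the simplified label scheme are hypothetical placeholders:

```python
# Sketch of augmenting a detector's training set with synthesized camouflage
# images. `generator` and the one-value labels are hypothetical placeholders;
# real detection/segmentation targets would be boxes or masks.
import torch
from torch.utils.data import ConcatDataset, TensorDataset

def build_augmented_dataset(generator, real_dataset, n_synth=1000):
    with torch.no_grad():
        z = torch.randn(n_synth, 128)       # latent codes for the generator
        images = generator(z)               # synthetic camouflage scenes
    labels = torch.ones(n_synth, dtype=torch.long)  # "camouflaged object present"
    return ConcatDataset([real_dataset, TensorDataset(images, labels)])
```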
- DTA: Physical Camouflage Attacks using Differentiable Transformation Network [0.4215938932388722]
We propose a framework for generating a robust physical adversarial pattern on a target object to camouflage it against object detection models.
Using our attack framework, an adversary can gain both the advantages of legacy photo-realistic rendering and the benefit of white-box access.
Our experiments show that our camouflaged 3D vehicles can successfully evade state-of-the-art object detection models.
arXiv Detail & Related papers (2022-03-18T10:15:02Z)
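
A differentiable transformation network lets the attack average its loss over rendered scenes with varied lighting, weather, and pose, i.e., an expectation-over-transformation style update. A hedged sketch of one such step; `render`, `transform_net`, and `detector` are assumed stand-ins:

```python
# Expectation-over-transformation style update through a learned transformation
# network (illustrative; `render`, `transform_net`, `detector` are stand-ins).
import torch

def eot_step(texture, render, transform_net, detector, optimizer, n_samples=8):
    optimizer.zero_grad()
    loss = 0.0
    for _ in range(n_samples):
        rendered = render(texture)           # vehicle with candidate camouflage
        scene = transform_net(rendered)      # learned lighting/weather effects
        loss = loss + detector(scene).max()  # confidence of best detection
    (loss / n_samples).backward()
    optimizer.step()
```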
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- FCA: Learning a 3D Full-coverage Vehicle Camouflage for Multi-view Physical Adversarial Attack [5.476797414272598]
We propose a robust Full-coverage Camouflage Attack (FCA) to fool detectors.
Specifically, we first try rendering the non-planar camouflage texture over the full vehicle surface.
We then introduce a transformation function to transfer the rendered camouflaged vehicle into a photo-realistic scenario.
arXiv Detail & Related papers (2021-09-15T10:17:12Z)
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z)
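
The core mechanism in TransFuser is letting attention exchange information between image and LiDAR feature tokens. A minimal sketch of that fusion step; the dimensions and wiring are illustrative, not the paper's architecture:

```python
# Minimal sketch of attention-based fusion of image and LiDAR features,
# in the spirit of TransFuser (dimensions and wiring are illustrative).
import torch
import torch.nn as nn

class FusionBlock(nn.Module):
    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens, lidar_tokens):
        # Concatenate tokens from both modalities so self-attention can
        # exchange information between image and LiDAR features.
        tokens = torch.cat([img_tokens, lidar_tokens], dim=1)
        fused, _ = self.attn(tokens, tokens, tokens)
        tokens = self.norm(tokens + fused)
        n_img = img_tokens.shape[1]
        return tokens[:, :n_img], tokens[:, n_img:]

# Toy usage: fuse 64 image tokens with 64 LiDAR tokens of width 256.
block = FusionBlock()
img, lidar = torch.randn(2, 64, 256), torch.randn(2, 64, 256)
img_fused, lidar_fused = block(img, lidar)
```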
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
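
The defense reported above combines adversarial training with feature denoising. A minimal sketch of both pieces, assuming a standard PGD inner loop (shown with a classification loss for brevity) and a non-local-means-style denoiser that would sit between backbone stages; all settings are illustrative:

```python
# Sketch of adversarial training plus feature denoising (illustrative only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingBlock(nn.Module):
    """Non-local-means-style feature denoiser with a residual connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, f):
        b, c, h, w = f.shape
        flat = f.flatten(2)                                    # (B, C, HW)
        attn = torch.softmax(flat.transpose(1, 2) @ flat, -1)  # (B, HW, HW)
        denoised = (flat @ attn.transpose(1, 2)).view(b, c, h, w)
        return f + self.proj(denoised)

def adversarial_training_step(model, opt, x, y,
                              eps=8 / 255, alpha=2 / 255, steps=10):
    """One training step on PGD adversarial examples (settings illustrative)."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        grad, = torch.autograd.grad(F.cross_entropy(model(x_adv), y), x_adv)
        x_adv = x_adv + alpha * grad.sign()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    opt.zero_grad()
    F.cross_entropy(model(x_adv.detach()), y).backward()
    opt.step()
```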
- Temporally-Transferable Perturbations: Efficient, One-Shot Adversarial Attacks for Online Visual Object Trackers [81.90113217334424]
We propose a framework to generate a single temporally transferable adversarial perturbation from the object template image only.
This perturbation can then be added to every search image at virtually no cost and still successfully fools the tracker.
arXiv Detail & Related papers (2020-12-30T15:05:53Z)
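
The one-shot idea above amortizes the attack: the perturbation is computed once from the template, then reused on every search frame. A sketch, where `perturbation_generator` is a hypothetical stand-in for the trained generator:

```python
# One perturbation from the template, reused on the whole tracking stream.
# `perturbation_generator` is a hypothetical stand-in for the trained model.
import torch

def attack_tracking_stream(perturbation_generator, template, search_frames,
                           eps=8 / 255):
    delta = perturbation_generator(template).clamp(-eps, eps)  # computed once
    for frame in search_frames:
        yield (frame + delta).clamp(0, 1)  # near-zero cost per search frame
```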
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle can hide the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
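
Conceptually, such universal 3D adversarial objects can be produced by optimizing bounded deformations of a base mesh, averaged over many scenes, so that simulated LiDAR returns no longer trigger the detector. A hedged sketch, with `sample_lidar_points` and `lidar_detector` as hypothetical differentiable stand-ins for the simulation pipeline:

```python
# Sketch of optimizing a universal rooftop adversarial mesh against a LiDAR
# detector. `sample_lidar_points` and `lidar_detector` are hypothetical
# differentiable stand-ins, not the paper's pipeline.
import torch

def optimize_adv_object(base_vertices, faces, scenes, lidar_detector,
                        sample_lidar_points, steps=300, lr=1e-3):
    offsets = torch.zeros_like(base_vertices, requires_grad=True)
    opt = torch.optim.Adam([offsets], lr=lr)
    for _ in range(steps):
        verts = base_vertices + offsets.clamp(-0.1, 0.1)  # bounded deformation
        loss = 0.0
        for scene in scenes:                 # universality: average over scenes
            points = sample_lidar_points(verts, faces, scene)
            loss = loss + lidar_detector(points).max()
        opt.zero_grad()
        (loss / len(scenes)).backward()
        opt.step()
    return (base_vertices + offsets.clamp(-0.1, 0.1)).detach()
```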
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.