TPatch: A Triggered Physical Adversarial Patch
- URL: http://arxiv.org/abs/2401.00148v1
- Date: Sat, 30 Dec 2023 06:06:01 GMT
- Title: TPatch: A Triggered Physical Adversarial Patch
- Authors: Wenjun Zhu, Xiaoyu Ji, Yushi Cheng, Shibo Zhang, Wenyuan Xu
- Abstract summary: We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid the suspicion of human drivers, we propose a content-based camouflage method and an attack robustness enhancement method to strengthen it.
- Score: 19.768494127237393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous vehicles increasingly utilize the vision-based perception module
to acquire information about driving environments and detect obstacles. Correct
detection and classification are important to ensure safe driving decisions.
Existing works have demonstrated the feasibility of fooling perception
models such as object detectors and image classifiers with printed adversarial
patches. However, most of them attack every passing autonomous vehicle
indiscriminately. In this paper, we propose TPatch, a physical adversarial
patch triggered by acoustic signals. Unlike other adversarial patches, TPatch
remains benign under normal circumstances but can be triggered to launch a
hiding, creating or altering attack by a designed distortion introduced by
signal injection attacks towards cameras. To avoid the suspicion of human
drivers and make the attack practical and robust in the real world, we propose
a content-based camouflage method and an attack robustness enhancement method
to strengthen it. Evaluations with three object detectors, YOLO V3/V5 and
Faster R-CNN, and eight image classifiers demonstrate the effectiveness of
TPatch in both the simulation and the real world. We also discuss possible
defenses at the sensor, algorithm, and system levels.
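The abstract states the triggered behaviour but not the optimization behind it. One way to picture it is a two-term objective: without the trigger distortion the patched image should keep its normal prediction, while under the simulated distortion it should be pushed toward the attacker's goal, with an extra content term keeping the patch inconspicuous. The sketch below is a minimal, hypothetical PyTorch rendering of that idea; the toy classifier, the apply_patch compositing, and the stripe-like simulate_trigger distortion are assumptions, not the paper's actual formulation.

```python
# Minimal, hypothetical sketch of a "triggered" adversarial patch objective.
# NOT the paper's actual formulation: the toy classifier, patch compositing,
# and stripe-like trigger simulation below are illustrative assumptions.
import torch
import torch.nn.functional as F

def apply_patch(images, patch, top=20, left=20):
    """Composite the patch onto a fixed image region (assumed placement)."""
    _, _, H, W = images.shape
    h, w = patch.shape[-2:]
    pad = (left, W - left - w, top, H - top - h)       # pad the last two dims
    padded_patch = F.pad(patch, pad)
    mask = F.pad(torch.ones_like(patch), pad)
    return images * (1 - mask) + padded_patch

def simulate_trigger(images, strength=0.3, period=8):
    """Crude stand-in for the camera distortion caused by acoustic signal
    injection: periodic horizontal stripes (an assumption, not the real model)."""
    rows = torch.arange(images.shape[-2], dtype=images.dtype)
    stripes = strength * torch.sin(2 * torch.pi * rows / period)
    return (images + stripes.view(1, 1, -1, 1)).clamp(0, 1)

def triggered_patch_loss(model, images, patch, benign_target, attack_target):
    """Without the trigger the patched image should keep its benign prediction;
    with the simulated trigger it should move toward the attacker's target."""
    patched = apply_patch(images, patch)
    loss_benign = F.cross_entropy(model(patched), benign_target)
    loss_attack = F.cross_entropy(model(simulate_trigger(patched)), attack_target)
    # A content/camouflage term (e.g. distance to a natural-looking image)
    # would be added here to keep the patch inconspicuous.
    return loss_benign + loss_attack

if __name__ == "__main__":
    # Toy classifier standing in for the attacked detector/classifier.
    model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 64 * 64, 10))
    images = torch.rand(4, 3, 64, 64)
    patch = torch.rand(3, 24, 24, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=0.05)
    for _ in range(20):
        opt.zero_grad()
        loss = triggered_patch_loss(
            model, images, patch,
            benign_target=torch.zeros(4, dtype=torch.long),        # "correct" class
            attack_target=torch.full((4,), 7, dtype=torch.long))   # attacker's class
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0, 1)
```

In practice the trigger term would have to match the distortion that acoustic signal injection actually induces in the camera, which the paper models far more carefully than this stripe stand-in.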
Related papers
- STOP! Camera Spoofing via the in-Vehicle IP Network [4.14360329494344]
We create an attack tool that exploits the GigE Vision protocol.
We then analyze two classes of passive anomaly detectors to identify such attacks.
We propose a novel class of active defense mechanisms that randomly adjust camera parameters during the video transmission (a challenge-response version of this idea is sketched below).
arXiv Detail & Related papers (2024-10-07T18:30:22Z) - Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving [15.516055760190884]
- Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving [15.516055760190884]
We introduce an adversarial 3D projection attack specifically targeting object detection in autonomous driving scenarios.
Our results demonstrate the effectiveness of the proposed attack in deceiving YOLOv3 and Mask R-CNN in physical settings.
arXiv Detail & Related papers (2024-09-25T22:27:11Z) - Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial
Examples Against Traffic Sign Recognition Systems [10.310327880799017]
Adversarial Examples (AEs) can deceive Deep Neural Networks (DNNs).
In this paper, we propose a systematic pipeline to generate robust physical AEs against real-world object detectors.
Experiments show that the physical AEs generated from our pipeline are effective and robust when attacking the YOLO v5 based Traffic Sign Recognition system.
arXiv Detail & Related papers (2022-01-17T03:24:31Z) - Segment and Complete: Defending Object Detectors against Adversarial
Patch Attacks with Robust Patch Detection [142.24869736769432]
Adversarial patch attacks pose a serious threat to state-of-the-art object detectors.
We propose Segment and Complete defense (SAC), a framework for defending object detectors against patch attacks.
We show SAC can significantly reduce the targeted attack success rate of physical patch attacks (the segment-then-remove idea is sketched below).
arXiv Detail & Related papers (2021-12-08T19:18:48Z) - You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for
- You Cannot Easily Catch Me: A Low-Detectable Adversarial Patch for Object Detectors [12.946967210071032]
Adversarial patches can fool facial recognition systems, surveillance systems and self-driving cars.
Most existing adversarial patches can be outwitted, disabled and rejected by an adversarial patch detector.
We present a novel approach, a Low-Detectable Adversarial Patch, which attacks an object detector with texture-consistent adversarial patches.
arXiv Detail & Related papers (2021-09-30T14:47:29Z) - Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z) - Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object
Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fitting to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments (the pseudo-labeling recipe is sketched below).
arXiv Detail & Related papers (2021-03-26T01:18:11Z) - Exploring Adversarial Robustness of Multi-Sensor Perception Systems in
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly (adversarial training is sketched below).
arXiv Detail & Related papers (2021-01-17T21:15:34Z) - Dynamic Adversarial Patch for Evading Object Detection Models [47.32228513808444]
- Dynamic Adversarial Patch for Evading Object Detection Models [47.32228513808444]
We present an innovative attack method against object detectors applied in a real-world setup.
Our method uses dynamic adversarial patches which are placed at multiple predetermined locations on a target object.
We improved the attack by generating patches that consider the semantic distance between the target object and its classification.
arXiv Detail & Related papers (2020-10-25T08:55:40Z) - Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.