Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial
Examples Against Traffic Sign Recognition Systems
- URL: http://arxiv.org/abs/2201.06192v1
- Date: Mon, 17 Jan 2022 03:24:31 GMT
- Title: Fooling the Eyes of Autonomous Vehicles: Robust Physical Adversarial
Examples Against Traffic Sign Recognition Systems
- Authors: Wei Jia, Zhaojun Lu, Haichun Zhang, Zhenglin Liu, Jie Wang, Gang Qu
- Abstract summary: Adversarial Examples (AEs) can deceive Deep Neural Networks (DNNs).
In this paper, we propose a systematic pipeline to generate robust physical AEs against real-world object detectors.
Experiments show that the physical AEs generated from our pipeline are effective and robust when attacking the YOLO v5 based Traffic Sign Recognition system.
- Score: 10.310327880799017
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial Examples (AEs) can deceive Deep Neural Networks (DNNs) and have
received a lot of attention recently. However, the majority of the research on AEs
is in the digital domain and the adversarial patches are static, which is very
different from many real-world DNN applications such as Traffic Sign
Recognition (TSR) systems in autonomous vehicles. In TSR systems, object
detectors use DNNs to process streaming video in real time. From the view of
object detectors, the traffic sign's position and the quality of the video are
continuously changing, rendering the digital AEs ineffective in the physical
world.
In this paper, we propose a systematic pipeline to generate robust physical
AEs against real-world object detectors. Robustness is achieved in three ways.
First, we simulate the in-vehicle cameras by extending the distribution of
image transformations with the blur transformation and the resolution
transformation. Second, we design single and multiple bounding box
filters to improve the efficiency of the perturbation training. Third, we
consider four representative attack vectors, namely the Hiding Attack (HA), the
Appearance Attack (AA), the Non-Target Attack (NTA), and the Target Attack (TA).
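The blur and resolution transformations can be viewed as extra terms in an Expectation-over-Transformation-style training loop. The sketch below is illustrative only; the detector hook (`detector_loss`), the patch-placement function (`apply_patch`), and all parameter ranges are hypothetical assumptions, not the authors' released pipeline.

```python
import random
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF

def camera_transforms(img, max_sigma=2.0, min_scale=0.25):
    """Mimic an in-vehicle camera: random Gaussian blur (motion/defocus) plus a
    random down-/up-sampling cycle (distance-dependent resolution). Ranges are
    illustrative assumptions, not the paper's settings."""
    sigma = random.uniform(0.1, max_sigma)
    img = TF.gaussian_blur(img, kernel_size=7, sigma=sigma)
    h, w = img.shape[-2:]
    scale = random.uniform(min_scale, 1.0)
    low = F.interpolate(img, scale_factor=scale, mode="bilinear", align_corners=False)
    return F.interpolate(low, size=(h, w), mode="bilinear", align_corners=False)

def train_patch(detector_loss, scenes, apply_patch, steps=2000, lr=0.01):
    """EOT-style perturbation training. `apply_patch(scene, patch)` pastes the
    patch onto the sign region and `detector_loss(img)` returns the attack
    objective (e.g. the objectness score of the sign's bounding box, which a
    Hiding Attack would minimize); both are hypothetical hooks."""
    patch = torch.rand(1, 3, 64, 64, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        scene = random.choice(scenes)              # one training frame per step
        adv = apply_patch(scene, patch)            # render the patch into the scene
        adv = camera_transforms(adv)               # simulate camera blur/resolution
        loss = detector_loss(adv)                  # differentiable attack objective
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)                 # keep the patch printable
    return patch.detach()
```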
We perform a comprehensive set of experiments under a variety of
environmental conditions, considering illumination in sunny and cloudy
weather as well as at night. The experimental results show that the physical
AEs generated from our pipeline are effective and robust when attacking the
YOLO v5 based TSR system. The attacks have good transferability and can deceive
other state-of-the-art object detectors. We launched the HA and the NTA on a
brand-new 2021 model vehicle. Both attacks succeed in fooling the TSR system,
which could be life-threatening for autonomous vehicles. Finally, we
discuss three defense mechanisms based on image preprocessing, AE detection,
and model enhancement.
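Of the three defenses, image preprocessing is the easiest to illustrate: mild smoothing and re-compression of each frame before it reaches the TSR detector tends to wash out high-frequency adversarial perturbations. The following is a minimal sketch under assumed parameter values; the other two defenses (AE detection and model enhancement) operate at the model level and are not shown.

```python
import io
import numpy as np
from PIL import Image, ImageFilter

def preprocess_frame(frame: np.ndarray, blur_radius: float = 1.0,
                     jpeg_quality: int = 60) -> np.ndarray:
    """Image-preprocessing defense: light Gaussian blur followed by JPEG
    re-encoding. Both steps suppress high-frequency perturbations at a small
    cost in clean accuracy; radius and quality here are assumed defaults."""
    img = Image.fromarray(frame).filter(ImageFilter.GaussianBlur(radius=blur_radius))
    buf = io.BytesIO()
    img.save(buf, format="JPEG", quality=jpeg_quality)
    buf.seek(0)
    return np.asarray(Image.open(buf).convert("RGB"))

# Each camera frame would be passed through preprocess_frame() before the
# TSR detector (e.g. YOLO v5) runs inference on it.
```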
Related papers
- Transient Adversarial 3D Projection Attacks on Object Detection in Autonomous Driving [15.516055760190884] (arXiv, 2024-09-25)
We introduce an adversarial 3D projection attack specifically targeting object detection in autonomous driving scenarios.
Our results demonstrate the effectiveness of the proposed attack in deceiving YOLOv3 and Mask R-CNN in physical settings.
- Object Detection for Vehicle Dashcams using Transformers [2.3243389656894595] (arXiv, 2024-08-28)
We propose a novel approach for object detection in dashcams using transformers.
Our system is based on the state-of-the-art DEtection TRansformer (DETR).
Our results show that the use of intelligent automation through transformers can significantly enhance the capabilities of dashcam systems.
- TPatch: A Triggered Physical Adversarial Patch [19.768494127237393] (arXiv, 2023-12-30)
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid the suspicion of human drivers, we propose a content-based camouflage method and an attack enhancement method to strengthen it.
- Unified Adversarial Patch for Cross-modal Attacks in the Physical World [11.24237636482709] (arXiv, 2023-07-15)
We propose a unified adversarial patch to fool visible and infrared object detectors at the same time via a single patch.
Considering the different imaging mechanisms of visible and infrared sensors, our work focuses on modeling the shapes of adversarial patches.
Results show that our unified patch achieves an Attack Success Rate (ASR) of 73.33% and 69.17% against the visible and infrared detectors, respectively.
- DPA: Learning Robust Physical Adversarial Camouflages for Object Detectors [5.598600329573922] (arXiv, 2021-09-01)
We propose the Dense Proposals Attack (DPA) to learn robust, physical and targeted adversarial camouflages for detectors.
The camouflages are robust because they remain adversarial when filmed from arbitrary viewpoints and under different illumination conditions.
We build a virtual 3D scene using the Unity simulation engine to fairly and reproducibly evaluate different physical attacks.
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762] (arXiv, 2021-08-13)
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular semantic segmentation (SS) models by testing the effects of both digital and real-world adversarial patches.
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966] (arXiv, 2021-06-17)
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system into failing to detect it and thus crashing into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF algorithms.
- Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437] (arXiv, 2021-04-19)
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748] (arXiv, 2021-01-17)
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can significantly boost robustness to such attacks.
- Robust Attacks on Deep Learning Face Recognition in the Physical World [48.909604306342544] (arXiv, 2020-11-27)
FaceAdv is a physical-world attack that crafts adversarial stickers to deceive face recognition (FR) systems.
It mainly consists of a sticker generator and a transformer, where the former can craft several stickers with different shapes.
We conduct extensive experiments to evaluate the effectiveness of FaceAdv in attacking three typical FR systems.
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147] (arXiv, 2020-04-01)
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions and with limited training data.