Rolling Colors: Adversarial Laser Exploits against Traffic Light
Recognition
- URL: http://arxiv.org/abs/2204.02675v1
- Date: Wed, 6 Apr 2022 08:57:25 GMT
- Title: Rolling Colors: Adversarial Laser Exploits against Traffic Light
Recognition
- Authors: Chen Yan, Zhijian Xu, Zhanyuan Yin, Xiaoyu Ji, Wenyuan Xu
- Abstract summary: We study the feasibility of fooling traffic light recognition mechanisms by shedding laser interference on the camera.
By exploiting the rolling shutter of CMOS sensors, we inject a color stripe overlapped on the traffic light in the image, which can cause a red light to be recognized as a green light or vice versa.
Our evaluation reports maximum success rates of 30% and 86.25% for Red-to-Green and Green-to-Red attacks, respectively.
- Score: 18.271698365826552
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Traffic light recognition is essential for fully autonomous driving in urban
areas. In this paper, we investigate the feasibility of fooling traffic light
recognition mechanisms by shedding laser interference on the camera. By
exploiting the rolling shutter of CMOS sensors, we manage to inject a color
stripe overlapped on the traffic light in the image, which can cause a red
light to be recognized as a green light or vice versa. To increase the success
rate, we design an optimization method to search for effective laser parameters
based on empirical models of laser interference. Our evaluation in emulated and
real-world setups on 2 state-of-the-art recognition systems and 5 cameras
reports maximum success rates of 30% and 86.25% for Red-to-Green and
Green-to-Red attacks, respectively. We observe that the attack remains
effective across consecutive frames from more than 40 meters away against a
moving vehicle, which may cause end-to-end impacts on self-driving such as
running a red light or making an emergency stop. To mitigate the threat, we
propose redesigning the rolling shutter mechanism.
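The rolling-shutter mechanism the abstract exploits can be illustrated with a small simulation: each sensor row starts exposing slightly later than the previous one, so a laser pulse that is on for only part of the frame readout tints just the band of rows whose exposure windows overlap the pulse. The sketch below is not the paper's method or parameters; the sensor timings, pulse times, and image values are all illustrative assumptions.

```python
import numpy as np

# Assumed (illustrative) sensor and laser parameters, not from the paper.
ROWS, COLS = 480, 640
T_LINE = 30e-6                     # row-to-row exposure start offset (s)
T_EXP = 2e-3                       # per-row exposure duration (s)
PULSE_ON, PULSE_OFF = 4e-3, 7e-3   # laser on/off times within the frame (s)

def affected_rows(t_on, t_off, t_line, t_exp, rows):
    """Rows whose exposure window [r*t_line, r*t_line + t_exp]
    overlaps the laser pulse interval [t_on, t_off)."""
    starts = np.arange(rows) * t_line
    return (starts < t_off) & (starts + t_exp > t_on)

# A dummy frame of a red traffic light: red channel high everywhere.
frame = np.zeros((ROWS, COLS, 3), dtype=np.float32)
frame[..., 0] = 0.8  # R channel

# A green laser adds intensity to the G channel only in the exposed band,
# producing a horizontal color stripe in the captured image.
mask = affected_rows(PULSE_ON, PULSE_OFF, T_LINE, T_EXP, ROWS)
frame[mask, :, 1] += 0.7
frame = np.clip(frame, 0.0, 1.0)

stripe = np.flatnonzero(mask)
print(f"stripe covers rows {stripe[0]}..{stripe[-1]} "
      f"({stripe.size} of {ROWS} rows)")
```

Shifting the pulse timing moves the stripe up or down the frame, which is why the attack can be steered to overlap the traffic light's position in the image.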
Related papers
- GreenEye: Development of Real-Time Traffic Signal Recognition System for Visual Impairments [0.6216023343793144]
The GreenEye system recognizes the traffic signals' color and tells the time left for pedestrians to cross the crosswalk in real-time.
Class imbalance in the data caused low precision; extra labeling and database construction were performed to balance the number of images across classes.
arXiv Detail & Related papers (2024-10-21T06:27:22Z) - Invisible Optical Adversarial Stripes on Traffic Sign against Autonomous Vehicles [10.17957244747775]
This paper presents an attack that uses light-emitting diodes and exploits the camera's rolling shutter effect to mislead traffic sign recognition.
The attack is stealthy because the stripes on the traffic sign are invisible to humans.
We discuss the countermeasures at the levels of camera sensor, perception model, and autonomous driving system.
arXiv Detail & Related papers (2024-07-10T09:55:31Z) - LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions [61.87108000328186]
Lane detection (LD) is an essential component of autonomous driving systems, providing fundamental functionalities like adaptive cruise control and automated lane centering.
Existing LD benchmarks primarily focus on evaluating common cases, neglecting the robustness of LD models against environmental illusions.
This paper studies the potential threats caused by these environmental illusions to LD and establishes the first comprehensive benchmark LanEvil.
arXiv Detail & Related papers (2024-06-03T02:12:27Z) - Infrared Adversarial Car Stickers [18.913361704019973]
We propose a physical attack method against infrared detectors based on 3D modeling, which is applied to a real car.
The goal is to design a set of infrared adversarial stickers to make cars invisible to infrared detectors at various viewing angles, distances, and scenes.
arXiv Detail & Related papers (2024-05-16T09:26:19Z) - NiteDR: Nighttime Image De-Raining with Cross-View Sensor Cooperative Learning for Dynamic Driving Scenes [49.92839157944134]
In nighttime driving scenes, insufficient and uneven lighting shrouds the scenes in darkness, resulting in degraded image quality and visibility.
We develop an image de-raining framework tailored for rainy nighttime driving scenes.
It aims to remove rain artifacts, enrich scene representation, and restore useful information.
arXiv Detail & Related papers (2024-02-28T09:02:33Z) - Invisible Reflections: Leveraging Infrared Laser Reflections to Target
Traffic Sign Perception [25.566091959509986]
Road signs indicate locally active rules, such as speed limits and requirements to yield or stop.
Recent research has demonstrated attacks, such as adding stickers or projected colored patches to signs, that cause CAV misinterpretation.
We have developed an effective physical-world attack that leverages the sensitivity of filterless image sensors.
arXiv Detail & Related papers (2024-01-07T21:22:42Z) - aUToLights: A Robust Multi-Camera Traffic Light Detection and Tracking
System [6.191246748708665]
We describe our recently-redesigned traffic light perception system for autonomous vehicles like the University of Toronto's self-driving car, Artemis.
We deploy the YOLOv5 detector for bounding box regression and traffic light classification across multiple cameras and fuse the observations.
Our results show superior performance in challenging real-world scenarios compared to single-frame, single-camera object detection.
arXiv Detail & Related papers (2023-05-15T14:28:34Z) - Lasers to Events: Automatic Extrinsic Calibration of Lidars and Event
Cameras [67.84498757689776]
This paper presents the first direct calibration method between event cameras and lidars.
It removes dependencies on frame-based camera intermediaries and/or highly-accurate hand measurements.
arXiv Detail & Related papers (2022-07-03T11:05:45Z) - Shadows can be Dangerous: Stealthy and Effective Physical-world
Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon, shadow.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z) - Multi-Modal Fusion Transformer for End-to-End Autonomous Driving [59.60483620730437]
We propose TransFuser, a novel Multi-Modal Fusion Transformer, to integrate image and LiDAR representations using attention.
Our approach achieves state-of-the-art driving performance while reducing collisions by 76% compared to geometry-based fusion.
arXiv Detail & Related papers (2021-04-19T11:48:13Z) - Drone-based RGB-Infrared Cross-Modality Vehicle Detection via
Uncertainty-Aware Learning [59.19469551774703]
Drone-based vehicle detection aims at finding the vehicle locations and categories in an aerial image.
We construct a large-scale drone-based RGB-Infrared vehicle detection dataset, termed DroneVehicle.
Our DroneVehicle collects 28,439 RGB-Infrared image pairs, covering urban roads, residential areas, parking lots, and other scenarios from day to night.
arXiv Detail & Related papers (2020-03-05T05:29:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.