Invisible Reflections: Leveraging Infrared Laser Reflections to Target
Traffic Sign Perception
- URL: http://arxiv.org/abs/2401.03582v1
- Date: Sun, 7 Jan 2024 21:22:42 GMT
- Title: Invisible Reflections: Leveraging Infrared Laser Reflections to Target
Traffic Sign Perception
- Authors: Takami Sato, Sri Hrushikesh Varma Bhupathiraju, Michael Clifford,
Takeshi Sugawara, Qi Alfred Chen, Sara Rampazzi
- Abstract summary: Road signs indicate locally active rules, such as speed limits and requirements to yield or stop.
Recent research has demonstrated attacks, such as adding stickers or projected colored patches to signs, that cause CAV misinterpretation.
We have developed an effective physical-world attack that leverages the sensitivity of filterless image sensors.
- Score: 25.566091959509986
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: All vehicles must follow the rules that govern traffic behavior, regardless
of whether the vehicles are human-driven or Connected Autonomous Vehicles
(CAVs). Road signs indicate locally active rules, such as speed limits and
requirements to yield or stop. Recent research has demonstrated attacks, such
as adding stickers or projected colored patches to signs, that cause CAV
misinterpretation, resulting in potential safety issues. Humans can see and
potentially defend against these attacks. But humans cannot detect what they
cannot observe. We have developed an effective physical-world attack that
leverages the sensitivity of filterless image sensors and the properties of
Infrared Laser Reflections (ILRs), which are invisible to humans. The attack is
designed to affect CAV cameras and perception, undermining traffic sign
recognition by inducing misclassification. In this work, we formulate the
threat model and requirements for an ILR-based traffic sign perception attack
to succeed. We evaluate the effectiveness of the ILR attack with real-world
experiments against two major traffic sign recognition architectures on four
IR-sensitive cameras. Our black-box optimization methodology allows the attack
to achieve up to a 100% attack success rate in indoor, static scenarios and a
>80.5% attack success rate in our outdoor, moving vehicle scenarios. We find
the latest state-of-the-art certifiable defense is ineffective against ILR
attacks as it mis-certifies >33.5% of cases. To address this, we propose a
detection strategy based on the physical properties of IR laser reflections
which can detect 96% of ILR attacks.
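The abstract does not detail the black-box optimization itself, so the sketch below is only a rough, hypothetical illustration of how such a search might be organized in image space: a simple additive bright-spot model stands in for the IR laser reflection, and a traffic-sign classifier is queried as a black box that returns class probabilities. The function names, the spot parameterization (position, radius, intensity), and the random-search strategy are assumptions, not the authors' implementation.

```python
import numpy as np

def apply_ilr(image, x, y, radius, intensity):
    """Additively brighten a circular spot to approximate how a filterless
    image sensor renders an infrared laser reflection (a simplified model)."""
    h, w = image.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    mask = (xx - x) ** 2 + (yy - y) ** 2 <= radius ** 2
    perturbed = image.astype(np.float32)
    perturbed[mask] += intensity  # IR energy raises the affected pixel values
    return np.clip(perturbed, 0, 255).astype(np.uint8)

def black_box_ilr_search(image, classify, true_label, iters=500, seed=0):
    """Random search for spot parameters that minimize the classifier's
    confidence in the sign's true class; `classify` is treated as a black box
    mapping an image to a vector of class probabilities."""
    rng = np.random.default_rng(seed)
    h, w = image.shape[:2]
    best_params, best_conf = None, 1.0
    for _ in range(iters):
        params = (int(rng.integers(0, w)), int(rng.integers(0, h)),
                  int(rng.integers(5, max(6, w // 2))), float(rng.uniform(20, 200)))
        conf = classify(apply_ilr(image, *params))[true_label]
        if conf < best_conf:  # lower true-class confidence = stronger attack
            best_conf, best_params = conf, params
    return best_params, best_conf
```

In the physical attack described in the paper, the equivalent search would adjust laser parameters such as aim point and power, scoring each candidate on frames captured by the target camera rather than on synthetically perturbed images.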
Related papers
- Discovering New Shadow Patterns for Black-Box Attacks on Lane Detection of Autonomous Vehicles [2.5539742994571037]
This paper introduces a novel approach to generating physical-world adversarial examples (AEs) using negative shadows: deceptive patterns of light on the road, created by strategically blocking sunlight, which cast artificial lane-like patterns.
A 20-meter negative shadow can direct a vehicle off-road with a 100% violation rate at speeds over 10 mph.
Other attack scenarios, such as causing collisions, can be performed with at least 30 meters of negative shadow, achieving a 60-100% success rate.
arXiv Detail & Related papers (2024-09-26T19:43:52Z)
- Invisible Optical Adversarial Stripes on Traffic Sign against Autonomous Vehicles [10.17957244747775]
This paper presents an attack that uses light-emitting diodes and exploits the camera's rolling shutter effect to mislead traffic sign recognition.
The attack is stealthy because the stripes on the traffic sign are invisible to humans.
We discuss the countermeasures at the levels of camera sensor, perception model, and autonomous driving system.
arXiv Detail & Related papers (2024-07-10T09:55:31Z)
- TPatch: A Triggered Physical Adversarial Patch [19.768494127237393]
We propose TPatch, a physical adversarial patch triggered by acoustic signals.
To avoid arousing the suspicion of human drivers, we propose a content-based camouflage method and an attack enhancement method to strengthen the attack.
arXiv Detail & Related papers (2023-12-30T06:06:01Z)
- Untargeted Backdoor Attack against Object Detection [69.63097724439886]
We design a poison-only backdoor attack in an untargeted manner, based on task characteristics.
We show that, once the backdoor is embedded into the target model by our attack, it can trick the model into losing detection of any object stamped with our trigger patterns.
arXiv Detail & Related papers (2022-11-02T17:05:45Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Rolling Colors: Adversarial Laser Exploits against Traffic Light Recognition [18.271698365826552]
We study the feasibility of fooling traffic light recognition mechanisms by shedding laser interference on the camera.
By exploiting the rolling shutter of CMOS sensors, we inject a color stripe overlapped on the traffic light in the image, which can cause a red light to be recognized as a green light or vice versa (a simplified simulation of this stripe effect is sketched after this list).
Our evaluation reports maximum success rates of 30% and 86.25% for Red-to-Green and Green-to-Red attacks, respectively.
arXiv Detail & Related papers (2022-04-06T08:57:25Z)
- Shadows can be Dangerous: Stealthy and Effective Physical-world Adversarial Attack by Natural Phenomenon [79.33449311057088]
We study a new type of optical adversarial examples, in which the perturbations are generated by a very common natural phenomenon: shadows.
We extensively evaluate the effectiveness of this new attack on both simulated and real-world environments.
arXiv Detail & Related papers (2022-03-08T02:40:18Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real-world road sign attacks.
Experimental results validate that the TAA method improves the attack success rate (nearly 10%) and reduces the perturbation loss (about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z)
- Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack [38.3805893581568]
We study the security of state-of-the-art deep learning based ALC systems under physical-world adversarial attacks.
We formulate the problem with a safety-critical attack goal, and a novel and domain-specific attack vector: dirty road patches.
We evaluate our attack on a production ALC using 80 scenarios from real-world driving traces.
arXiv Detail & Related papers (2020-09-14T19:22:39Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
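The Rolling Colors entry above relies on the row-by-row exposure of rolling-shutter CMOS sensors: a laser pulse illuminates only the rows being exposed while it is on, which appears in the captured frame as a colored horizontal stripe overlapping the traffic light. The snippet below is a minimal, hypothetical simulation of that stripe purely in image space; the stripe position, width, color, and blending factor are illustrative assumptions rather than the paper's model.

```python
import numpy as np

def inject_rolling_shutter_stripe(frame, start_row, num_rows, color, alpha=0.6):
    """Blend a horizontal color stripe into `frame` (H x W x 3, uint8),
    mimicking rows that were exposed while a modulated laser pulse was on."""
    out = frame.astype(np.float32)
    end_row = min(start_row + num_rows, frame.shape[0])
    stripe = np.asarray(color, dtype=np.float32)  # e.g. (0, 255, 0) for a green stripe
    out[start_row:end_row] = (1 - alpha) * out[start_row:end_row] + alpha * stripe
    return np.clip(out, 0, 255).astype(np.uint8)
```

In the physical attack, the stripe's vertical position and width are governed by the timing and duty cycle of the laser pulses relative to the camera's frame readout, so the attacker tunes pulse timing rather than editing pixels.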