SlowTrack: Increasing the Latency of Camera-based Perception in
Autonomous Driving Using Adversarial Examples
- URL: http://arxiv.org/abs/2312.09520v2
- Date: Tue, 26 Dec 2023 13:02:07 GMT
- Title: SlowTrack: Increasing the Latency of Camera-based Perception in
Autonomous Driving Using Adversarial Examples
- Authors: Chen Ma, Ningfei Wang, Qi Alfred Chen, Chao Shen
- Abstract summary: We propose SlowTrack, a framework for generating adversarial attacks to increase execution time of camera-based AD perception.
Our evaluation results show that the system-level effects can be significantly improved, i.e., the vehicle crash rate of SlowTrack is around 95% on average.
- Score: 29.181660544576406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Autonomous Driving (AD), real-time perception is a critical component
responsible for detecting surrounding objects to ensure safe driving. While
researchers have extensively explored the integrity of AD perception due to its
safety and security implications, the aspect of availability (real-time
performance) or latency has received limited attention. Existing works on
latency-based attacks have focused mainly on object detection, i.e., a single
component of camera-based AD perception, overlooking the full camera-based
perception pipeline, which prevents them from achieving effective system-level
effects, such as vehicle crashes. In this paper, we propose SlowTrack, a novel framework for
generating adversarial attacks to increase the execution time of camera-based
AD perception. We propose a novel two-stage attack strategy along with
three new loss function designs. Our evaluation is conducted on four popular
camera-based AD perception pipelines, and the results demonstrate that
SlowTrack significantly outperforms existing latency-based attacks while
maintaining comparable imperceptibility levels. Furthermore, we perform the
evaluation on Baidu Apollo, an industry-grade full-stack AD system, and LGSVL,
a production-grade AD simulator, with two scenarios to compare the system-level
effects of SlowTrack and existing attacks. Our evaluation results show that the
system-level effects are significantly improved: the vehicle crash rate of
SlowTrack is around 95% on average, while existing attacks achieve only
around 30%.
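
As a rough, illustrative sketch of how a latency-based adversarial attack on camera-based perception can be formulated, the snippet below optimizes a bounded perturbation so that many candidate boxes clear the detector's confidence threshold, inflating the work done by non-maximum suppression and by the tracker's box-association step that follows detection. The YOLO-style objectness head, the `latency_loss`, and the PGD-style loop are assumptions made for illustration; they are not SlowTrack's actual two-stage strategy or its three loss designs, which the abstract does not detail.

```python
# Illustrative sketch only: a hypothetical latency-oriented objective for a
# YOLO-style detector, not SlowTrack's actual two-stage attack or loss designs.
import torch


def latency_loss(objectness_logits: torch.Tensor,
                 conf_thresh: float = 0.25) -> torch.Tensor:
    """Soft count of candidate boxes above the confidence threshold.

    More surviving candidates means more work for NMS and for the tracker's
    association step, i.e., higher camera-based perception latency.
    """
    scores = torch.sigmoid(objectness_logits)              # (num_candidates,)
    soft_count = torch.sigmoid((scores - conf_thresh) / 0.05).sum()
    return -soft_count                                      # minimize => more boxes


def craft_perturbation(image: torch.Tensor, objectness_head,
                       steps: int = 50, eps: float = 8 / 255,
                       alpha: float = 1 / 255) -> torch.Tensor:
    """PGD-style loop bounding the perturbation by `eps` in L-infinity norm.

    `objectness_head` is an assumed callable mapping an image tensor (values
    in [0, 1]) to the detector's raw per-candidate objectness logits.
    """
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        loss = latency_loss(objectness_head(image + delta))
        loss.backward()
        with torch.no_grad():
            delta -= alpha * delta.grad.sign()   # descend on the loss
            delta.clamp_(-eps, eps)              # keep the change imperceptible
        delta.grad.zero_()
    return (image + delta).clamp(0, 1).detach()
```

The intuition follows the abstract's availability angle: the detector's forward pass changes little, but every extra box that survives thresholding is fed to NMS and to the multi-object tracker, whose matching cost grows with the number of candidates, so the end-to-end camera-based perception pipeline slows down.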
Related papers
- ControlLoc: Physical-World Hijacking Attack on Visual Perception in Autonomous Driving [30.286501966393388]
A digital hijacking attack has been proposed to cause dangerous driving scenarios.
We introduce a novel physical-world adversarial patch attack, ControlLoc, designed to exploit hijacking vulnerabilities in the entire Autonomous Driving (AD) visual perception pipeline.
arXiv Detail & Related papers (2024-06-09T14:53:50Z) - SlowPerception: Physical-World Latency Attack against Visual Perception in Autonomous Driving [26.669905199110755]
High latency in visual perception components can lead to safety risks, such as vehicle collisions.
We introduce SlowPerception, the first physical-world latency attack against AD perception, via generating projector-based universal perturbations.
Our SlowPerception achieves second-level latency in physical-world settings, with an average latency of 2.5 seconds across different AD perception systems.
arXiv Detail & Related papers (2024-06-09T14:30:18Z) - LanEvil: Benchmarking the Robustness of Lane Detection to Environmental Illusions [61.87108000328186]
Lane detection (LD) is an essential component of autonomous driving systems, providing fundamental functionalities like adaptive cruise control and automated lane centering.
Existing LD benchmarks primarily focus on evaluating common cases, neglecting the robustness of LD models against environmental illusions.
This paper studies the potential threats caused by these environmental illusions to LD and establishes the first comprehensive benchmark LanEvil.
arXiv Detail & Related papers (2024-06-03T02:12:27Z) - Does Physical Adversarial Example Really Matter to Autonomous Driving?
Towards System-Level Effect of Adversarial Object Evasion Attack [39.08524903081768]
In autonomous driving (AD), accurate perception is indispensable to achieving safe and secure driving.
Physical adversarial object evasion attacks are especially severe in AD.
Existing literature evaluates the attack effect only at the targeted AI component level, not at the system level.
We propose SysAdv, a novel system-driven attack design in the AD context.
arXiv Detail & Related papers (2023-08-23T03:40:47Z) - EV-Catcher: High-Speed Object Catching Using Low-latency Event-based
Neural Networks [107.62975594230687]
We demonstrate an application where event cameras excel: accurately estimating the impact location of fast-moving objects.
We introduce a lightweight event representation called Binary Event History Image (BEHI) to encode event data at low latency.
We show that the system achieves an 81% success rate in catching balls aimed at different locations, with velocities of up to 13 m/s, even on compute-constrained embedded platforms.
arXiv Detail & Related papers (2023-04-14T15:23:28Z) - Recurrent Vision Transformers for Object Detection with Event Cameras [62.27246562304705]
We present Recurrent Vision Transformers (RVTs), a novel backbone for object detection with event cameras.
RVTs can be trained from scratch to reach state-of-the-art performance on event-based object detection.
Our study brings new insights into effective design choices that can be fruitful for research beyond event-based vision.
arXiv Detail & Related papers (2022-12-11T20:28:59Z) - StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the models with the capacity of predicting the future, significantly improving the results for streaming perception.
We consider driving scenes with multiple velocities and propose velocity-aware streaming AP (VsAP) to jointly evaluate accuracy.
Our simple method achieves the state-of-the-art performance on Argoverse-HD dataset and improves the sAP and VsAP by 4.7% and 8.2% respectively.
arXiv Detail & Related papers (2022-07-21T12:03:02Z) - Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion
based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable adversarial 3D-printed object that misleads an AD system into failing to detect it and thus crashing into it.
Our results show that the attack achieves a success rate of over 90% across different object types and MSF algorithms.
arXiv Detail & Related papers (2021-06-17T05:11:07Z) - Streaming Object Detection for 3-D Point Clouds [29.465873948076766]
LiDAR provides a prominent sensory modality that informs many existing perceptual systems.
The latency for perceptual systems based on point cloud data can be dominated by the amount of time for a complete rotational scan.
We show how operating on LiDAR data in its native streaming formulation offers several advantages for self-driving object detection.
arXiv Detail & Related papers (2020-05-04T21:55:15Z) - Training-free Monocular 3D Event Detection System for Traffic
Surveillance [93.65240041833319]
Existing event detection systems are mostly learning-based and have achieved convincing performance when a large amount of training data is available.
In real-world scenarios, collecting sufficient labeled training data is expensive and sometimes impossible.
We propose a training-free monocular 3D event detection system for traffic surveillance.
arXiv Detail & Related papers (2020-02-01T04:42:57Z)