Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks
- URL: http://arxiv.org/abs/2106.09249v1
- Date: Thu, 17 Jun 2021 05:11:07 GMT
- Title: Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks
- Authors: Yulong Cao*, Ningfei Wang*, Chaowei Xiao*, Dawei Yang*, Jin Fang,
Ruigang Yang, Qi Alfred Chen, Mingyan Liu, Bo Li (*co-first authors)
- Abstract summary: We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system to fail in detecting it and thus crash into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF algorithms.
- Score: 62.923992740383966
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In Autonomous Driving (AD) systems, perception is both security and safety
critical. Although there have been various prior studies on its security issues, all of them consider attacks on camera- or LiDAR-based AD perception alone. However,
production AD systems today predominantly adopt a Multi-Sensor Fusion (MSF)
based design, which in principle can be more robust against these attacks under
the assumption that not all fusion sources are (or can be) attacked at the same
time. In this paper, we present the first study of security issues of MSF-based
perception in AD systems. We directly challenge the basic MSF design assumption
above by exploring the possibility of attacking all fusion sources
simultaneously. This allows us, for the first time, to understand how much of a security guarantee MSF can fundamentally provide as a general defense strategy for AD perception.
We formulate the attack as an optimization problem to generate a
physically-realizable, adversarial 3D-printed object that misleads an AD system
to fail in detecting it and thus crash into it. We propose a novel attack
pipeline that addresses two main design challenges: (1) non-differentiable
target camera and LiDAR sensing systems, and (2) non-differentiable cell-level
aggregated features popularly used in LiDAR-based AD perception. We evaluate
our attack on MSF included in representative open-source industry-grade AD
systems in real-world driving scenarios. Our results show that the attack
achieves over 90% success rate across different object types and MSF algorithms. Our attack is also found to be stealthy, robust to victim positions, transferable across
MSF algorithms, and physical-world realizable after being 3D-printed and
captured by LiDAR and camera devices. To concretely assess the end-to-end
safety impact, we further perform simulation evaluation and show that it can
cause a 100% vehicle collision rate for an industry-grade AD system.
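The abstract names the attack's two core obstacles (non-differentiable sensing and non-differentiable cell-level feature aggregation) but not how they are resolved. Below is a minimal PyTorch-style sketch of the general idea of such a joint camera-LiDAR optimization, assuming differentiable proxies stand in for the sensing pipelines and a surrogate detector for each modality; every function name, loss term, and weight is an illustrative placeholder, not the paper's actual pipeline.

```python
# Hedged sketch of a joint camera + LiDAR adversarial-object optimization loop.
# All proxies, surrogate detectors, and loss weights are illustrative assumptions.
import torch

def camera_render_proxy(mesh_vertices: torch.Tensor) -> torch.Tensor:
    """Stand-in for a differentiable renderer: mesh -> image tensor (3xHxW)."""
    return torch.sigmoid(mesh_vertices.mean()) * torch.ones(3, 64, 64)

def lidar_render_proxy(mesh_vertices: torch.Tensor) -> torch.Tensor:
    """Stand-in for differentiable ray casting: mesh -> point cloud (Nx3)."""
    return mesh_vertices  # a real pipeline would compute ray-mesh intersections

def camera_detector_score(image: torch.Tensor) -> torch.Tensor:
    """Surrogate camera model: detection confidence in [0, 1]."""
    return torch.sigmoid(image.mean())

def lidar_detector_score(points: torch.Tensor) -> torch.Tensor:
    """Surrogate LiDAR model with a soft (differentiable) cell aggregation."""
    return torch.sigmoid(points.norm(dim=1).mean())

def realizability_penalty(mesh: torch.Tensor, original: torch.Tensor) -> torch.Tensor:
    """Keep the perturbed mesh close to a benign, 3D-printable shape."""
    return (mesh - original).pow(2).mean()

# Start from a benign object mesh (random vertices here as a placeholder).
base_mesh = torch.randn(500, 3)
delta = torch.zeros_like(base_mesh, requires_grad=True)
optimizer = torch.optim.Adam([delta], lr=1e-2)

for step in range(200):
    mesh = base_mesh + delta
    # Attack both fusion sources at once: suppress camera AND LiDAR detections.
    loss = (camera_detector_score(camera_render_proxy(mesh))
            + lidar_detector_score(lidar_render_proxy(mesh))
            + 0.1 * realizability_penalty(mesh, base_mesh))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

In a real pipeline, the proxies would presumably be replaced by a differentiable renderer and a soft approximation of the hard cell-level aggregation, and the optimized shape would then be 3D-printed and validated against the real, non-differentiable camera and LiDAR sensors.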
Related papers
- Navigating Threats: A Survey of Physical Adversarial Attacks on LiDAR Perception Systems in Autonomous Vehicles [4.4538254463902645]
LiDAR systems are vulnerable to adversarial attacks, which pose significant challenges to the safety and robustness of autonomous vehicles.
This survey presents a review of the current research landscape on physical adversarial attacks targeting LiDAR-based perception systems.
We identify critical challenges and highlight gaps in existing attacks for LiDAR-based systems.
arXiv Detail & Related papers (2024-09-30T15:50:36Z)
- Can We Trust Embodied Agents? Exploring Backdoor Attacks against Embodied LLM-based Decision-Making Systems [27.316115171846953]
Large Language Models (LLMs) have shown significant promise in real-world decision-making tasks for embodied AI.
LLMs are fine-tuned to leverage their inherent common sense and reasoning abilities while being tailored to specific applications.
This fine-tuning process introduces considerable safety and security vulnerabilities, especially in safety-critical cyber-physical systems.
arXiv Detail & Related papers (2024-05-27T17:59:43Z)
- Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack [39.08524903081768]
In autonomous driving (AD), accurate perception is indispensable to achieving safe and secure driving.
Physical adversarial object evasion attacks are especially severe in AD.
Existing literature evaluates attack effects only at the level of the targeted AI component, not at the system level.
We propose SysAdv, a novel system-driven attack design in the AD context.
arXiv Detail & Related papers (2023-08-23T03:40:47Z)
- When Authentication Is Not Enough: On the Security of Behavioral-Based Driver Authentication Systems [53.2306792009435]
We develop two lightweight driver authentication systems based on Random Forest and Recurrent Neural Network architectures.
We are the first to propose attacks against these systems by developing two novel evasion attacks, SMARTCAN and GANCAN.
Through our contributions, we aid practitioners in safely adopting these systems, help reduce car thefts, and enhance driver security.
arXiv Detail & Related papers (2023-06-09T14:33:26Z)
- Security Analysis of Camera-LiDAR Semantic-Level Fusion Against Black-Box Attacks on Autonomous Vehicles [6.477833151094911]
Recently, it was shown that LiDAR-based perception built on deep neural networks is vulnerable to spoofing attacks.
We perform the first analysis of camera-LiDAR fusion under spoofing attacks and the first security analysis of semantic fusion in any AV context.
We find that semantic camera-LiDAR fusion exhibits widespread vulnerability to frustum attacks with between 70% and 90% success against target models.
arXiv Detail & Related papers (2021-06-13T21:59:19Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack [38.3805893581568]
We study the security of state-of-the-art deep learning based ALC systems under physical-world adversarial attacks.
We formulate the problem with a safety-critical attack goal, and a novel and domain-specific attack vector: dirty road patches.
We evaluate our attack on a production ALC using 80 scenarios from real-world driving traces.
arXiv Detail & Related papers (2020-09-14T19:22:39Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the decisions an autonomous vehicle derives from its sensor measurements are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)