Security Analysis of Camera-LiDAR Semantic-Level Fusion Against
Black-Box Attacks on Autonomous Vehicles
- URL: http://arxiv.org/abs/2106.07098v1
- Date: Sun, 13 Jun 2021 21:59:19 GMT
- Title: Security Analysis of Camera-LiDAR Semantic-Level Fusion Against
Black-Box Attacks on Autonomous Vehicles
- Authors: R. Spencer Hallyburton, Yupei Liu, Miroslav Pajic
- Abstract summary: Recently, it was shown that LiDAR-based perception built on deep neural networks is vulnerable to spoofing attacks.
We perform the first analysis of camera-LiDAR fusion under spoofing attacks and the first security analysis of semantic fusion in any AV context.
We find that semantic camera-LiDAR fusion exhibits widespread vulnerability to frustum attacks with between 70% and 90% success against target models.
- Score: 6.477833151094911
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To enable safe and reliable decision-making, autonomous vehicles (AVs) feed
sensor data to perception algorithms to understand the environment. Sensor
fusion, and particularly semantic fusion, with multi-frame tracking is becoming
increasingly popular for detecting 3D objects. Recently, it was shown that
LiDAR-based perception built on deep neural networks is vulnerable to LiDAR
spoofing attacks. Thus, in this work, we perform the first analysis of
camera-LiDAR fusion under spoofing attacks and the first security analysis of
semantic fusion in any AV context. We first find that fusion is more successful
than existing defenses at guarding against naive spoofing. However, we then
define the frustum attack as a new class of attacks on AVs and find that
semantic camera-LiDAR fusion exhibits widespread vulnerability to frustum
attacks, with between 70% and 90% success against target models. Importantly,
the attacker needs fewer than 20 random spoof points on average for a successful
attack - an order of magnitude fewer than the established maximum spoofing capability.
Finally, we are the first to analyze the longitudinal impact of perception
attacks by demonstrating the effect of multi-frame attacks.
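For intuition about the frustum attack described above, the following sketch shows how an attacker could place a small cluster of spoofed LiDAR points inside the viewing frustum of an existing 2D camera detection, so that semantic fusion associates the fake returns with a real object. This is a minimal illustration, not the authors' code: the pinhole intrinsics, bounding box, spoof depth, and point count are assumed, illustrative values.

```python
# Hypothetical sketch of frustum-style spoof point placement (not the paper's code).
# Assumptions: pinhole camera with intrinsics K, a 2D detection box in pixels, and a
# chosen spoof depth; all numbers below are illustrative.
import numpy as np

def frustum_spoof_points(K, bbox, depth, n_points=20, depth_jitter=0.2, seed=0):
    """Place n_points fake LiDAR returns inside the frustum of a 2D detection.

    K      : 3x3 camera intrinsic matrix
    bbox   : (u_min, v_min, u_max, v_max) pixel bounding box of a real detection
    depth  : distance (m) along the optical axis at which to spoof the points
    returns: (n_points, 3) array of 3D points in the camera frame
    """
    rng = np.random.default_rng(seed)
    u_min, v_min, u_max, v_max = bbox

    # Sample pixel locations uniformly inside the detection box ...
    u = rng.uniform(u_min, u_max, n_points)
    v = rng.uniform(v_min, v_max, n_points)
    # ... and depths in a thin slab so the cluster resembles an object surface.
    z = depth + rng.uniform(-depth_jitter, depth_jitter, n_points)

    # Back-project each pixel through the inverse intrinsics to get a viewing ray,
    # then scale the ray to the sampled depth. Every point stays inside the 2D box's
    # frustum, so semantic camera-LiDAR fusion associates it with the real detection.
    pixels_h = np.stack([u, v, np.ones_like(u)], axis=0)   # 3 x N homogeneous pixels
    rays = np.linalg.inv(K) @ pixels_h                      # 3 x N rays, z normalized to 1
    return (rays * z).T                                     # N x 3 points, camera frame

# Example with made-up intrinsics and a made-up detection box.
K = np.array([[1000.0, 0.0, 640.0],
              [0.0, 1000.0, 360.0],
              [0.0, 0.0, 1.0]])
spoof = frustum_spoof_points(K, bbox=(600, 330, 700, 400), depth=15.0)
print(spoof.shape)  # (20, 3)
```

The default of 20 points mirrors the scale reported in the abstract (fewer than 20 spoof points on average suffice); mapping the points into the LiDAR frame would additionally require the camera-to-LiDAR extrinsic calibration.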
Related papers
- Meta Invariance Defense Towards Generalizable Robustness to Unknown Adversarial Attacks [62.036798488144306]
Current defenses mainly focus on known attacks, but adversarial robustness to unknown attacks is seriously overlooked.
We propose an attack-agnostic defense method named Meta Invariance Defense (MID).
We show that MID simultaneously achieves robustness to imperceptible adversarial perturbations in high-level image classification and attack suppression in low-level robust image regeneration.
arXiv Detail & Related papers (2024-04-04T10:10:38Z)
- ADoPT: LiDAR Spoofing Attack Detection Based on Point-Level Temporal Consistency [11.160041268858773]
Deep neural networks (DNNs) are increasingly integrated into LiDAR-based perception systems for autonomous vehicles (AVs).
We aim to address the challenge of LiDAR spoofing attacks, where attackers inject fake objects into LiDAR data and fool AVs into misinterpreting their environment and making erroneous decisions.
We propose ADoPT (Anomaly Detection based on Point-level Temporal consistency), which quantitatively measures temporal consistency across consecutive frames and identifies abnormal objects based on the coherency of point clusters.
In our evaluation using the nuScenes dataset, our algorithm effectively counters various LiDAR spoofing attacks.
arXiv Detail & Related papers (2023-10-23T02:31:31Z)
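As a rough illustration of the point-level temporal-consistency idea in the ADoPT entry above, the sketch below scores how coherently an object's LiDAR cluster persists between consecutive frames; genuine objects tend to produce overlapping clusters, while independently injected spoof points tend not to. The voxel size, coherence metric (voxel IoU), and threshold are assumptions for illustration, not the authors' implementation.

```python
# Hypothetical temporal-consistency check in the spirit of ADoPT (not the authors' code).
import numpy as np

def voxel_set(points, voxel=0.2):
    """Quantize an (N, 3) point cluster into a set of occupied voxel indices."""
    return {tuple(v) for v in np.floor(points / voxel).astype(int)}

def temporal_coherence(cluster_prev, cluster_curr, ego_motion=np.zeros(3), voxel=0.2):
    """IoU of occupied voxels between one object's clusters in consecutive frames.

    cluster_prev / cluster_curr : (N, 3) LiDAR points associated with a tracked object
    ego_motion                  : translation compensating the vehicle's own movement
    """
    prev = voxel_set(cluster_prev + ego_motion, voxel)
    curr = voxel_set(cluster_curr, voxel)
    if not prev or not curr:
        return 0.0
    return len(prev & curr) / len(prev | curr)

def is_anomalous(coherence_history, threshold=0.3):
    """Flag an object whose recent frame-to-frame coherence stays below a threshold."""
    return bool(coherence_history) and max(coherence_history) < threshold
```

A tracker would maintain a short coherence history per object and flag tracks whose coherence stays below the threshold as likely spoofed.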
- Fusion is Not Enough: Single Modal Attacks on Fusion Models for 3D Object Detection [33.0406308223244]
We propose an attack framework that targets advanced camera-LiDAR fusion-based 3D object detection models through camera-only adversarial attacks.
Our approach employs a two-stage optimization-based strategy that first thoroughly evaluates vulnerable image areas under adversarial attacks.
arXiv Detail & Related papers (2023-04-28T03:39:00Z)
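The entry above describes a two-stage strategy whose first stage evaluates vulnerable image areas. The sketch below substitutes a simple occlusion-based sensitivity scan as a stand-in for that first stage: it ranks image regions by how much a localized perturbation reduces the fusion model's confidence in a target detection. The grid size, fill value, and the black-box score_fn handle are assumptions, not the paper's procedure.

```python
# Hypothetical stage-1 sketch: rank image regions by how much perturbing them degrades
# the fusion model's confidence in a target detection. score_fn is an assumed black-box
# handle to the detector; the grid and fill value are illustrative.
import numpy as np

def vulnerable_regions(image, score_fn, grid=8, fill=0.5):
    """Occlusion-style sensitivity map over a grid of image regions.

    image    : (H, W, 3) float image in [0, 1]
    score_fn : callable(image) -> float, target-object confidence under fusion
    returns  : (grid, grid) array; larger value = larger confidence drop when perturbed
    """
    h, w, _ = image.shape
    base = score_fn(image)
    sensitivity = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            patched = image.copy()
            r0, r1 = i * h // grid, (i + 1) * h // grid
            c0, c1 = j * w // grid, (j + 1) * w // grid
            patched[r0:r1, c0:c1] = fill          # occlude one grid cell
            sensitivity[i, j] = base - score_fn(patched)
    return sensitivity

# Toy usage with a dummy score function (a real attack would query the fusion model).
dummy_score = lambda img: float(img[100:150, 200:260].mean())
heat = vulnerable_regions(np.random.rand(480, 640, 3), dummy_score)
print(heat.shape)  # (8, 8)
```

The second stage, not sketched here, would then concentrate the adversarial optimization on the highest-ranked regions.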
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of multi-sensor fusion (MSF)-based perception in autonomous driving (AD) systems.
We generate a physically realizable, adversarial 3D-printed object that misleads an AD system into failing to detect it and thus crashing into it.
Our results show that the attack achieves a success rate of over 90% across different object types and MSF algorithms.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures [24.708895480220733]
LiDAR-based perception is vulnerable to spoofing attacks, in which adversaries spoof a fake vehicle in front of a victim self-driving car.
We perform the first study to explore the general vulnerability of current LiDAR-based perception architectures.
We construct the first black-box spoofing attack based on our identified vulnerability, which universally achieves around an 80% mean success rate.
arXiv Detail & Related papers (2020-06-30T17:07:45Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN)-based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.