Towards Robust LiDAR-based Perception in Autonomous Driving: General
Black-box Adversarial Sensor Attack and Countermeasures
- URL: http://arxiv.org/abs/2006.16974v1
- Date: Tue, 30 Jun 2020 17:07:45 GMT
- Title: Towards Robust LiDAR-based Perception in Autonomous Driving: General
Black-box Adversarial Sensor Attack and Countermeasures
- Authors: Jiachen Sun, Yulong Cao, Qi Alfred Chen, Z. Morley Mao
- Abstract summary: LiDAR-based perception is vulnerable to spoofing attacks, in which adversaries spoof a fake vehicle in front of a victim self-driving car.
We perform the first study to explore the general vulnerability of current LiDAR-based perception architectures.
We construct the first black-box spoofing attack based on our identified vulnerability, which universally achieves around 80% mean success rates.
- Score: 24.708895480220733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Perception plays a pivotal role in autonomous driving systems, which utilize
onboard sensors such as cameras and LiDARs (Light Detection and Ranging) to assess
their surroundings. Recent studies have demonstrated that LiDAR-based perception is
vulnerable to spoofing attacks, in which adversaries spoof a fake vehicle in
front of a victim self-driving car by strategically transmitting laser signals
to the victim's LiDAR sensor. However, existing attacks suffer from
effectiveness and generality limitations. In this work, we perform the first
study to explore the general vulnerability of current LiDAR-based perception
architectures and discover that the ignored occlusion patterns in LiDAR point
clouds make self-driving cars vulnerable to spoofing attacks. We construct the
first black-box spoofing attack based on our identified vulnerability, which
universally achieves around 80% mean success rates on all target models. We
perform the first defense study, proposing CARLO to mitigate LiDAR spoofing
attacks. CARLO detects spoofed data by treating ignored occlusion patterns as
invariant physical features, which reduces the mean attack success rate to
5.5%. Meanwhile, we take the first step towards exploring a general
architecture for robust LiDAR-based perception, and propose SVF that embeds the
neglected physical features into end-to-end learning. SVF further reduces the
mean attack success rate to around 2.3%.
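The physical invariant exploited here is occlusion: a real vehicle blocks the laser rays that hit it and therefore casts a "shadow" of missing returns behind it, whereas an attacker who merely injects spoofed points cannot erase the background returns. The snippet below is only a minimal sketch of such an occlusion-consistency check, not the authors' CARLO implementation; the function name, thresholds, and data layout are hypothetical.

```python
import numpy as np

def occlusion_consistency_score(points, box_center, box_radius=2.5, angle_margin_deg=1.0):
    """Rough occlusion check for a candidate detection (hypothetical sketch).

    points: (N, 3) LiDAR returns in the sensor frame (sensor at the origin).
    box_center: (3,) center of the candidate detection.
    Returns the fraction of returns inside the detection's angular sector that
    lie *behind* it; a genuine solid object should occlude those rays, so a
    high fraction suggests the detection may be spoofed.
    """
    az = np.degrees(np.arctan2(points[:, 1], points[:, 0]))      # azimuth of each return
    rng = np.linalg.norm(points[:, :2], axis=1)                   # planar range of each return
    c_az = np.degrees(np.arctan2(box_center[1], box_center[0]))   # azimuth of the detection
    c_rng = np.linalg.norm(box_center[:2])                        # planar range of the detection
    half_width = np.degrees(np.arctan2(box_radius, c_rng)) + angle_margin_deg

    # Returns whose bearing falls within the detection's angular footprint.
    in_sector = np.abs((az - c_az + 180.0) % 360.0 - 180.0) < half_width
    # Returns the object should have occluded (farther than the detection).
    behind = in_sector & (rng > c_rng + box_radius)
    if in_sector.sum() == 0:
        return 0.0
    return float(behind.sum() / in_sector.sum())

# Example (hypothetical threshold): flag a detection if more than 30% of the
# in-sector returns lie behind it.
# suspicious = occlusion_consistency_score(frame_xyz, np.array([12.0, 3.0, 0.0])) > 0.3
```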
Related papers
- ADoPT: LiDAR Spoofing Attack Detection Based on Point-Level Temporal
Consistency [11.160041268858773]
Deep neural networks (DNNs) are increasingly integrated into LiDAR-based perception systems for autonomous vehicles (AVs).
We aim to address the challenge of LiDAR spoofing attacks, where attackers inject fake objects into LiDAR data and fool AVs into misinterpreting their environment and making erroneous decisions.
We propose ADoPT (Anomaly Detection based on Point-level Temporal consistency), which quantitatively measures temporal consistency across consecutive frames and identifies abnormal objects based on the coherency of point clusters.
In our evaluation using the nuScenes dataset, our algorithm effectively counters various LiDAR spoofing attacks, achieving a low false positive ratio and a high true positive ratio.
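To make the temporal-consistency idea behind ADoPT concrete: a spoofed cluster typically appears "out of nowhere" in a single frame, while points belonging to a genuine object can be matched to nearby returns in the preceding frame. The sketch below is a loose, hypothetical illustration of that check (brute-force nearest neighbours, ego-motion-compensated coordinates assumed), not the ADoPT algorithm itself.

```python
import numpy as np

def temporal_consistency(curr_cluster, prev_points, dist_thresh=0.3):
    """Fraction of a cluster's points with a nearby return in the previous
    frame (both arrays assumed to be in a common, ego-motion-compensated
    frame). Genuine objects tend to persist across frames; a near-zero score
    marks a cluster that appeared suddenly and may be spoofed.
    """
    if len(prev_points) == 0:
        return 0.0
    # Brute-force nearest-neighbour distances; a KD-tree would be used in practice.
    d = np.linalg.norm(curr_cluster[:, None, :] - prev_points[None, :, :], axis=-1)
    return float((d.min(axis=1) < dist_thresh).mean())

# Example (hypothetical arrays and threshold): flag clusters scoring below 0.5.
# is_anomalous = temporal_consistency(cluster_xyz, prev_frame_xyz) < 0.5
```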
arXiv Detail & Related papers (2023-10-23T02:31:31Z)
- Exorcising "Wraith": Protecting LiDAR-based Object Detector in
Automated Driving System from Appearing Attacks [20.38692153553779]
Automated driving systems rely on 3D object detectors to recognize possible obstacles from LiDAR point clouds.
Recent work shows that an adversary can forge non-existent cars in the prediction results with only a few fake points.
We propose a novel plug-and-play defensive module that works alongside a trained LiDAR-based object detector.
arXiv Detail & Related papers (2023-03-17T02:20:47Z)
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object
Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous
Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular semantic segmentation (SS) models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion
based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically realizable, adversarial 3D-printed object that misleads an AD system into failing to detect it and thus crashing into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- Security Analysis of Camera-LiDAR Semantic-Level Fusion Against
Black-Box Attacks on Autonomous Vehicles [6.477833151094911]
Recently, it was shown that LiDAR-based perception built on deep neural networks is vulnerable to spoofing attacks.
We perform the first analysis of camera-LiDAR fusion under spoofing attacks and the first security analysis of semantic fusion in any AV context.
We find that semantic camera-LiDAR fusion exhibits widespread vulnerability to frustum attacks with between 70% and 90% success against target models.
arXiv Detail & Related papers (2021-06-13T21:59:19Z)
- Fooling LiDAR Perception via Adversarial Trajectory Perturbation [13.337443990751495]
LiDAR point clouds collected from a moving vehicle are functions of its trajectories, because the sensor motion needs to be compensated to avoid distortions.
Could the motion compensation consequently become a wide-open backdoor in those networks, due to both the adversarial vulnerability of deep learning and GPS-based vehicle trajectory estimation?
We demonstrate such possibilities for the first time: instead of directly attacking point cloud coordinates, which requires tampering with the raw LiDAR readings, adversarially spoofing a self-driving car's trajectory with small perturbations is enough.
arXiv Detail & Related papers (2021-03-29T04:34:31Z)
- Targeted Physical-World Attention Attack on Deep Learning Models in Road
Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real world road sign attack.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)