LiDAR Spoofing Meets the New-Gen: Capability Improvements, Broken
Assumptions, and New Attack Strategies
- URL: http://arxiv.org/abs/2303.10555v2
- Date: Wed, 7 Feb 2024 21:28:45 GMT
- Title: LiDAR Spoofing Meets the New-Gen: Capability Improvements, Broken
Assumptions, and New Attack Strategies
- Authors: Takami Sato, Yuki Hayakawa, Ryo Suzuki, Yohsuke Shiiki, Kentaro
Yoshioka, Qi Alfred Chen
- Abstract summary: A recent line of research finds that one can manipulate the LiDAR point cloud and fool object detectors by firing malicious lasers against LiDAR.
We conduct the first large-scale measurement study on LiDAR spoofing attack capabilities on object detectors with 9 popular LiDARs.
We uncover a total of 15 novel findings, including not only completely new ones due to the measurement angle novelty, but also many that can directly challenge the latest understandings in this problem space.
- Score: 26.9731228822657
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: LiDAR (Light Detection And Ranging) is an indispensable sensor for precise
long- and wide-range 3D sensing, which directly benefited the recent rapid
deployment of autonomous driving (AD). Meanwhile, such a safety-critical
application strongly motivates its security research. A recent line of research
finds that one can manipulate the LiDAR point cloud and fool object detectors
by firing malicious lasers against LiDAR. However, these efforts face 3
critical research gaps: (1) considering only one specific LiDAR (VLP-16); (2)
assuming unvalidated attack capabilities; and (3) evaluating object detectors
with limited spoofing capability modeling and setup diversity.
To fill these critical research gaps, we conduct the first large-scale
measurement study on LiDAR spoofing attack capabilities on object detectors
with 9 popular LiDARs, covering both first- and new-generation LiDARs, and 3
major types of object detectors trained on 5 different datasets. To facilitate
the measurements, we (1) identify spoofer improvements that significantly
improve the latest spoofing capability, (2) identify a new object removal
attack that overcomes the applicability limitation of the latest method to
new-generation LiDARs, and (3) perform novel mathematical modeling for both
object injection and removal attacks based on our measurement results. Through
this study, we are able to uncover a total of 15 novel findings, including not
only completely new ones due to the measurement angle novelty, but also many
that can directly challenge the latest understandings in this problem space. We
also discuss defenses.
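The core attack concept above is manipulating the victim's point cloud by firing lasers that the LiDAR registers as real returns. A minimal toy illustration of the object injection idea, in point-cloud form, might look like the following; this is an invented sketch, not the paper's spoofer model, and it ignores the physical constraints (point budget, placement accuracy) that the paper measures.

```python
import numpy as np

def inject_spoofed_cluster(point_cloud, center, num_points=60, spread=0.4, seed=0):
    """Append a fake cluster of points (a phantom obstacle) to a point cloud.

    point_cloud: (N, 4) array of [x, y, z, intensity] -- a common LiDAR format.
    center: (3,) target location of the phantom object in the sensor frame.
    NOTE: real spoofers are limited in how many points they can place and
    where; this toy function ignores those physical constraints.
    """
    rng = np.random.default_rng(seed)
    xyz = rng.normal(loc=center, scale=spread, size=(num_points, 3))
    intensity = rng.uniform(0.5, 1.0, size=(num_points, 1))
    fake = np.hstack([xyz, intensity]).astype(point_cloud.dtype)
    return np.vstack([point_cloud, fake])

# Example: a benign scan plus a phantom cluster 10 m ahead of the sensor.
scan = np.zeros((1000, 4), dtype=np.float32)
attacked = inject_spoofed_cluster(scan, center=np.array([10.0, 0.0, 0.0]))
```

An object detector consuming `attacked` may then report a non-existent obstacle, which is the injection failure mode the measurement study quantifies.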
Related papers
- Effective and Efficient Adversarial Detection for Vision-Language Models via A Single Vector [97.92369017531038]
We build a new laRge-scale Adversarial images dataset with Diverse hArmful Responses (RADAR)
We then develop a novel iN-time Embedding-based AdveRSarial Image DEtection (NEARSIDE) method, which exploits a single vector distilled from the hidden states of Visual Language Models (VLMs) to detect adversarial images against benign ones in the input.
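The single-vector idea can be illustrated with a generic linear probe: one direction computed from hidden-state embeddings, against which new inputs are projected. This is only a sketch of the concept (a difference-of-means probe); NEARSIDE's actual distillation procedure may differ, and all names and thresholds below are invented.

```python
import numpy as np

def fit_detection_vector(benign_embs, adv_embs):
    """Distill a single direction separating adversarial from benign embeddings.

    Simplest instantiation: the normalized difference of class means.
    benign_embs / adv_embs: (N, D) hidden-state embeddings.
    """
    v = adv_embs.mean(axis=0) - benign_embs.mean(axis=0)
    return v / np.linalg.norm(v)

def is_adversarial(emb, v, threshold=0.0):
    """Flag an input whose embedding projects past the threshold along v."""
    return float(emb @ v) > threshold
```

Detection then costs a single dot product per input, which is what makes a one-vector detector attractive at inference time.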
arXiv Detail & Related papers (2024-10-30T10:33:10Z)
- Better Monocular 3D Detectors with LiDAR from the Past [64.6759926054061]
Camera-based 3D detectors often suffer inferior performance compared to LiDAR-based counterparts due to inherent depth ambiguities in images.
In this work, we seek to improve monocular 3D detectors by leveraging unlabeled historical LiDAR data.
We show consistent and significant performance gain across multiple state-of-the-art models and datasets with a negligible additional latency of 9.66 ms and a small storage cost.
arXiv Detail & Related papers (2024-04-08T01:38:43Z)
- ADoPT: LiDAR Spoofing Attack Detection Based on Point-Level Temporal Consistency [11.160041268858773]
Deep neural networks (DNNs) are increasingly integrated into LiDAR-based perception systems for autonomous vehicles (AVs).
We aim to address the challenge of LiDAR spoofing attacks, where attackers inject fake objects into LiDAR data, fooling AVs into misinterpreting their environment and making erroneous decisions.
We propose ADoPT (Anomaly Detection based on Point-level Temporal consistency), which quantitatively measures temporal consistency across consecutive frames and identifies abnormal objects based on the coherency of point clusters.
In our evaluation using the nuScenes dataset, our algorithm effectively counters various LiDAR spoofing attacks, achieving a low (
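The point-level temporal-consistency idea can be sketched as follows: a cluster that appears suddenly, with little point support in the previous frame, is suspicious. This is an illustrative simplification, not ADoPT's implementation; the `radius` and `min_consistency` parameters are invented here.

```python
import numpy as np

def cluster_consistency(cluster_points, prev_frame_points, radius=0.5):
    """Fraction of a cluster's points that have a neighbor in the previous frame.

    A spoofed (suddenly injected) object tends to have low support in the
    preceding frame. cluster_points: (N, 3); prev_frame_points: (M, 3).
    """
    # Pairwise distances between the cluster and the last frame's points.
    d = np.linalg.norm(
        cluster_points[:, None, :] - prev_frame_points[None, :, :], axis=-1
    )
    supported = d.min(axis=1) <= radius
    return supported.mean()

def is_anomalous(cluster_points, prev_frame_points, min_consistency=0.3):
    """Flag a cluster whose temporal support falls below the threshold."""
    return bool(cluster_consistency(cluster_points, prev_frame_points) < min_consistency)
```

A production detector would replace the brute-force pairwise distance with a spatial index (e.g., a k-d tree) and track clusters over more than two frames.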
arXiv Detail & Related papers (2023-10-23T02:31:31Z)
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- Boosting 3D Object Detection by Simulating Multimodality on Point Clouds [51.87740119160152]
This paper presents a new approach to boost a single-modality (LiDAR) 3D object detector by teaching it to simulate features and responses that follow a multi-modality (LiDAR-image) detector.
The approach needs LiDAR-image data only when training the single-modality detector, and once well-trained, it only needs LiDAR data at inference.
Experimental results on the nuScenes dataset show that our approach outperforms all SOTA LiDAR-only 3D detectors.
arXiv Detail & Related papers (2022-06-30T01:44:30Z)
- LiDAR Distillation: Bridging the Beam-Induced Domain Gap for 3D Object Detection [96.63947479020631]
In many real-world applications, the LiDAR point clouds captured by mass-produced robots and vehicles usually have fewer beams than those in large-scale public datasets.
We propose LiDAR Distillation to bridge the domain gap induced by different LiDAR beams for 3D object detection.
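A common way to emulate the beam-induced domain gap is to subsample scan rings of a high-beam point cloud to mimic a low-beam sensor. A minimal sketch of that pseudo-data generation step (the paper's exact procedure may differ; names here are invented):

```python
import numpy as np

def downsample_beams(points, ring, keep_every=2):
    """Simulate a lower-beam LiDAR by keeping every k-th scan ring.

    points: (N, 4) point cloud; ring: (N,) integer beam index per point
    (e.g., 0..63 for a 64-beam sensor). Keeping every 2nd ring roughly
    approximates a 32-beam sensor.
    """
    mask = (ring % keep_every) == 0
    return points[mask]
```

Training on such downsampled pseudo data lets a detector built on a 64-beam dataset better match deployment hardware with fewer beams.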
arXiv Detail & Related papers (2022-03-28T17:59:02Z)
- Object Removal Attacks on LiDAR-based 3D Object Detectors [6.263478017242508]
Object Removal Attacks (ORAs) aim to force 3D object detectors to fail.
We leverage the default setting of LiDARs that record a single return signal per direction to perturb point clouds in the region of interest.
Our results show that the attack is effective in degrading the performance of commonly used 3D object detection models.
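The single-return mechanism can be modeled simply: in single-return mode the sensor keeps one echo per direction, so an attacker-controlled echo that arrives from a closer range displaces the genuine one in the region of interest. A toy range-image model of that relocation effect (not the paper's code; names and parameters are assumptions):

```python
import numpy as np

def apply_object_removal(ranges, roi_mask, spoof_range=1.0):
    """Overwrite first returns inside a region of interest with a closer spoofed echo.

    ranges: (N,) per-direction range measurements; single-return mode keeps
    only one echo per direction, so a closer spoofed echo displaces the real one.
    roi_mask: (N,) boolean mask of directions covering the target object.
    spoof_range: distance of the attacker-controlled return, in meters.
    """
    out = ranges.copy()
    # A spoofed echo only wins if it is closer than the genuine return.
    out[roi_mask] = np.minimum(out[roi_mask], spoof_range)
    return out
```

The real object's points are thereby relocated out of their original position, which is what degrades the downstream detector.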
arXiv Detail & Related papers (2021-02-07T05:34:14Z)
- Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures [24.708895480220733]
LiDAR-based perception is vulnerable to spoofing attacks, in which adversaries spoof a fake vehicle in front of a victim self-driving car.
We perform the first study to explore the general vulnerability of current LiDAR-based perception architectures.
We construct the first black-box spoofing attack based on our identified vulnerability, which universally achieves around 80% mean success rates.
arXiv Detail & Related papers (2020-06-30T17:07:45Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed here and is not responsible for any consequences arising from its use.