Physically Realizable Adversarial Examples for LiDAR Object Detection
- URL: http://arxiv.org/abs/2004.00543v2
- Date: Thu, 2 Apr 2020 16:02:41 GMT
- Title: Physically Realizable Adversarial Examples for LiDAR Object Detection
- Authors: James Tu, Mengye Ren, Siva Manivasagam, Ming Liang, Bin Yang, Richard
Du, Frank Cheng, Raquel Urtasun
- Abstract summary: We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle can hide the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
- Score: 72.0017682322147
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern autonomous driving systems rely heavily on deep learning models to
process point cloud sensory data; meanwhile, deep models have been shown to be
susceptible to adversarial attacks with visually imperceptible perturbations.
Despite the fact that this poses a security concern for the self-driving
industry, there has been very little exploration in terms of 3D perception, as
most adversarial attacks have only been applied to 2D flat images. In this
paper, we address this issue and present a method to generate universal 3D
adversarial objects to fool LiDAR detectors. In particular, we demonstrate that
placing an adversarial object on the rooftop of any target vehicle can hide the
vehicle entirely from LiDAR detectors with a success rate of 80%. We report
attack results on a suite of detectors using various input representations of
point clouds. We also conduct a pilot study on adversarial defense using data
augmentation. This is one step closer towards safer self-driving under unseen
conditions from limited training data.
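At a high level, the attack optimizes the geometry of a rooftop object with gradient descent so that the detector's confidence for the host vehicle collapses across many scenes, which is what makes the object universal. The following is a minimal sketch of that loop, not the authors' released implementation: `render_lidar` stands in for an assumed differentiable LiDAR simulator that places the mesh on the target vehicle's roof, `detector` is assumed to return confidence scores for the target vehicle, and the deformation bound, smoothness surrogate, and hyperparameters are illustrative guesses.

```python
import torch

def optimize_adversarial_object(detector, render_lidar, scenes, init_vertices, faces,
                                steps=1000, lr=1e-2, max_offset=0.1, smooth_weight=0.1):
    """Sketch of learning a universal adversarial rooftop object against a LiDAR detector.

    Assumed (hypothetical) interfaces:
      - render_lidar(vertices, faces, scene) -> point cloud for `scene` with the object
        placed on the target vehicle's roof, differentiable w.r.t. `vertices`.
      - detector(points) -> confidence scores associated with the target vehicle.
    """
    offsets = torch.zeros_like(init_vertices, requires_grad=True)
    optimizer = torch.optim.Adam([offsets], lr=lr)

    for step in range(steps):
        scene = scenes[step % len(scenes)]                     # cycle scenes so one object fools many vehicles
        vertices = init_vertices + offsets.clamp(-max_offset, max_offset)  # bound deformation for physical plausibility
        points = render_lidar(vertices, faces, scene)

        confidences = detector(points)
        attack_loss = confidences.max()                        # push down the strongest remaining detection
        smooth_loss = smooth_weight * offsets.pow(2).mean()    # crude stand-in for a mesh smoothness prior

        loss = attack_loss + smooth_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    return (init_vertices + offsets.clamp(-max_offset, max_offset)).detach()
```

Under the same assumptions, the defense pilot study amounts to data augmentation: re-training the detector on scenes rendered with randomly perturbed rooftop meshes so it learns to keep detecting the vehicle underneath.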
Related papers
- Exploring Adversarial Robustness of LiDAR-Camera Fusion Model in
Autonomous Driving [17.618527727914163]
This study assesses the adversarial robustness of LiDAR-camera fusion models in 3D object detection.
We introduce an attack technique that, by simply adding a limited number of physically constrained adversarial points above a car, can make the car undetectable by the fusion model.
arXiv Detail & Related papers (2023-12-03T17:48:40Z) - Exorcising "Wraith": Protecting LiDAR-based Object Detector in
Automated Driving System from Appearing Attacks [20.38692153553779]
Automated driving systems rely on 3D object detectors to recognize possible obstacles from LiDAR point clouds.
Recent works show the adversary can forge non-existent cars in the prediction results with a few fake points.
We propose a novel plug-and-play defensive module that works alongside a trained LiDAR-based object detector.
arXiv Detail & Related papers (2023-03-17T02:20:47Z) - A Comprehensive Study of the Robustness for LiDAR-based 3D Object
Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z) - Hindsight is 20/20: Leveraging Past Traversals to Aid 3D Perception [59.2014692323323]
Small, far-away, or highly occluded objects are particularly challenging because there is limited information in the LiDAR point clouds for detecting them.
We propose a novel, end-to-end trainable Hindsight framework to extract contextual information from past data.
We show that this framework is compatible with most modern 3D detection architectures and can substantially improve their average precision on multiple autonomous driving datasets.
arXiv Detail & Related papers (2022-03-22T00:58:27Z) - 3D-VField: Learning to Adversarially Deform Point Clouds for Robust 3D
Object Detection [111.32054128362427]
In safety-critical settings, robustness to out-of-distribution and long-tail samples is fundamental to avoiding dangerous failures.
We substantially improve the generalization of 3D object detectors to out-of-domain data by taking into account deformed point clouds during training.
We propose and share open source CrashD: a synthetic dataset of realistic damaged and rare cars.
arXiv Detail & Related papers (2021-12-09T08:50:54Z) - Temporal Consistency Checks to Detect LiDAR Spoofing Attacks on
Autonomous Vehicle Perception [4.092959254671909]
Recent work has demonstrated serious LiDAR spoofing attacks with alarming consequences.
In this work, we explore the use of motion as a physical invariant of genuine objects for detecting such attacks.
A preliminary 3D-TC2 prototype demonstrates very promising performance.
arXiv Detail & Related papers (2021-06-15T01:36:40Z) - Fooling LiDAR Perception via Adversarial Trajectory Perturbation [13.337443990751495]
LiDAR point clouds collected from a moving vehicle are functions of its trajectories, because the sensor motion needs to be compensated to avoid distortions.
Could the motion compensation consequently become a wide-open backdoor in those networks, due to both the adversarial vulnerability of deep learning and GPS-based vehicle trajectory estimation?
We demonstrate such possibilities for the first time: instead of directly attacking point cloud coordinates, which would require tampering with the raw LiDAR readings, adversarially spoofing a self-driving car's trajectory with small perturbations is enough (a minimal sketch of this motion-compensation mechanism appears after this list).
arXiv Detail & Related papers (2021-03-29T04:34:31Z) - Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object
Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z) - Exploring Adversarial Robustness of Multi-Sensor Perception Systems in
Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
arXiv Detail & Related papers (2021-01-17T21:15:34Z) - Towards Robust LiDAR-based Perception in Autonomous Driving: General
Black-box Adversarial Sensor Attack and Countermeasures [24.708895480220733]
LiDAR-based perception is vulnerable to spoofing attacks, in which adversaries spoof a fake vehicle in front of a victim self-driving car.
We perform the first study to explore the general vulnerability of current LiDAR-based perception architectures.
We construct the first black-box spoofing attack based on our identified vulnerability, which universally achieves around 80% mean success rates.
arXiv Detail & Related papers (2020-06-30T17:07:45Z)
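The sketch below, referenced from the trajectory-perturbation entry above, illustrates the mechanism that attack exploits: LiDAR points captured at different times are transformed into a common reference frame using the estimated ego poses, so a small adversarial error in those pose estimates displaces every compensated point without touching the raw LiDAR returns. The 2D pose representation and helper names are simplifying assumptions for illustration.

```python
import numpy as np

def pose_to_matrix(x, y, yaw):
    """Ego pose (x, y, yaw) as a 2D homogeneous transform; the z axis is dropped for brevity."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def motion_compensate(points_xy, capture_poses, reference_pose):
    """Map each point from the ego frame at its capture time into the reference frame."""
    ref_inv = np.linalg.inv(pose_to_matrix(*reference_pose))
    compensated = []
    for (px, py), pose in zip(points_xy, capture_poses):
        world = pose_to_matrix(*pose) @ np.array([px, py, 1.0])
        compensated.append((ref_inv @ world)[:2])
    return np.array(compensated)

# Two points captured at consecutive poses of a moving ego vehicle.
points = [(10.0, 0.0), (10.0, 0.5)]
poses = [(0.00, 0.0, 0.000), (0.05, 0.0, 0.001)]

# An adversary perturbs the estimated trajectory by a few centimetres and ~0.3 degrees;
# the compensated points shift accordingly even though the raw LiDAR returns are untouched.
spoofed_poses = [(x + 0.03, y + 0.02, yaw + 0.005) for x, y, yaw in poses]
clean = motion_compensate(points, poses, reference_pose=poses[-1])
attacked = motion_compensate(points, spoofed_poses, reference_pose=poses[-1])
print("max displacement (m):", np.abs(clean - attacked).max())
```

Holding the reference pose fixed here isolates the distortion introduced by perturbing the pose estimates; in the actual attack the perturbation would be crafted, not arbitrary, to maximize the detector's error.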