Fooling LiDAR Perception via Adversarial Trajectory Perturbation
- URL: http://arxiv.org/abs/2103.15326v1
- Date: Mon, 29 Mar 2021 04:34:31 GMT
- Title: Fooling LiDAR Perception via Adversarial Trajectory Perturbation
- Authors: Yiming Li and Congcong Wen and Felix Juefei-Xu and Chen Feng
- Abstract summary: LiDAR point clouds collected from a moving vehicle are functions of its trajectories, because the sensor motion needs to be compensated to avoid distortions.
Could the motion compensation consequently become a wide-open backdoor in those networks, due to both the adversarial vulnerability of deep learning and GPS-based vehicle trajectory estimation that is susceptible to spoofing?
We demonstrate such possibilities for the first time: instead of directly attacking point cloud coordinates, which requires tampering with the raw LiDAR readings, adversarial spoofing of a self-driving car's trajectory with small perturbations is enough to make safety-critical objects undetectable or mislocalized.
- Score: 13.337443990751495
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: LiDAR point clouds collected from a moving vehicle are functions of its
trajectories, because the sensor motion needs to be compensated to avoid
distortions. When autonomous vehicles are sending LiDAR point clouds to deep
networks for perception and planning, could the motion compensation
consequently become a wide-open backdoor in those networks, due to both the
adversarial vulnerability of deep learning and GPS-based vehicle trajectory
estimation that is susceptible to wireless spoofing? We demonstrate such
possibilities for the first time: instead of directly attacking point cloud
coordinates which requires tampering with the raw LiDAR readings, only
adversarial spoofing of a self-driving car's trajectory with small
perturbations is enough to make safety-critical objects undetectable or
detected with incorrect positions. Moreover, polynomial trajectory perturbation
is developed to achieve a temporally-smooth and highly-imperceptible attack.
Extensive experiments on 3D object detection have shown that such attacks not
only lower the performance of the state-of-the-art detectors effectively, but
also transfer to other detectors, raising a red flag for the community. The
code is available at https://ai4ce.github.io/FLAT/.
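To make the attack surface concrete, here is a minimal sketch of how motion compensation ties compensated point coordinates to the estimated trajectory, and how a low-degree polynomial perturbation of that trajectory shifts the compensated points without touching the raw LiDAR returns. This is an illustration only, not the authors' FLAT implementation: the planar (x, y, yaw) pose parameterization, the function names (pose_to_matrix, motion_compensate, polynomial_perturbation), and the perturbation coefficients are assumptions chosen for brevity.

```python
import numpy as np

def pose_to_matrix(x, y, yaw):
    """Build a 2D homogeneous transform (world <- sensor) from a planar pose."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x],
                     [s,  c, y],
                     [0.0, 0.0, 1.0]])

def motion_compensate(sweeps, poses):
    """Map per-sweep LiDAR points into a common world frame using the estimated
    trajectory; the compensated coordinates are a function of the poses."""
    merged = []
    for pts, (x, y, yaw) in zip(sweeps, poses):
        T = pose_to_matrix(x, y, yaw)
        homo = np.hstack([pts, np.ones((len(pts), 1))])  # (N, 3) homogeneous 2D points
        merged.append((homo @ T.T)[:, :2])
    return np.vstack(merged)

def polynomial_perturbation(poses, coeffs):
    """Offset each pose dimension by a low-degree polynomial of normalized time,
    keeping the spoofed trajectory temporally smooth."""
    t = np.linspace(0.0, 1.0, len(poses))[:, None]            # normalized timestamps
    powers = np.hstack([t ** k for k in range(len(coeffs))])  # (T, degree+1)
    return poses + powers @ coeffs                            # coeffs: (degree+1, 3)

# Toy example: ten sweeps of random planar points along a straight trajectory.
rng = np.random.default_rng(0)
sweeps = [rng.uniform(-10.0, 10.0, size=(100, 2)) for _ in range(10)]
poses = np.stack([np.linspace(0.0, 9.0, 10),                  # x advances 1 m per sweep
                  np.zeros(10),
                  np.zeros(10)], axis=1)                      # columns: x, y, yaw

clean = motion_compensate(sweeps, poses)
coeffs = np.array([[0.0, 0.0, 0.0],                           # constant term
                   [0.3, 0.1, 0.01]])                         # small linear drift in x, y, yaw
attacked = motion_compensate(sweeps, polynomial_perturbation(poses, coeffs))
print(np.abs(attacked - clean).max())                         # raw LiDAR untouched, points still move
```

Running the sketch prints the maximum displacement of the compensated points, illustrating how a small, smooth trajectory offset propagates directly into the coordinates a downstream detector consumes.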
Related papers
- Unsupervised Domain Adaptation for Self-Driving from Past Traversal Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z)
- Unsupervised Adaptation from Repeated Traversals for Autonomous Driving [54.59577283226982]
Self-driving cars must generalize to the end-user's environment to operate reliably.
One potential solution is to leverage unlabeled data collected from the end-users' environments.
However, there is no reliable signal in the target domain to supervise the adaptation process.
Observing that the ego-vehicle repeatedly traverses the same routes, we show that this simple additional assumption is sufficient to obtain a potent signal that allows us to perform iterative self-training of 3D object detectors on the target domain.
arXiv Detail & Related papers (2023-03-27T15:07:55Z)
- Exorcising "Wraith": Protecting LiDAR-based Object Detector in Automated Driving System from Appearing Attacks [20.38692153553779]
Automated driving systems rely on 3D object detectors to recognize possible obstacles from LiDAR point clouds.
Recent works show that an adversary can forge non-existent cars in the prediction results with a few fake points.
We propose a novel plug-and-play defensive module that works alongside a trained LiDAR-based object detector.
arXiv Detail & Related papers (2023-03-17T02:20:47Z)
- NVRadarNet: Real-Time Radar Obstacle and Free Space Detection for Autonomous Driving [57.03126447713602]
We present a deep neural network (DNN) that detects dynamic obstacles and drivable free space using automotive RADAR sensors.
The network runs faster than real time on an embedded GPU and shows good generalization across geographic regions.
arXiv Detail & Related papers (2022-09-29T01:30:34Z)
- Embracing Single Stride 3D Object Detector with Sparse Transformer [63.179720817019096]
In LiDAR-based 3D object detection for autonomous driving, the ratio of the object size to input scene size is significantly smaller compared to 2D detection cases.
Many 3D detectors directly follow the common practice of 2D detectors, which downsample the feature maps even after quantizing the point clouds.
We propose Single-stride Sparse Transformer (SST) to maintain the original resolution from the beginning to the end of the network.
arXiv Detail & Related papers (2021-12-13T02:12:02Z)
- Temporal Consistency Checks to Detect LiDAR Spoofing Attacks on Autonomous Vehicle Perception [4.092959254671909]
Recent work has demonstrated serious LiDAR spoofing attacks with alarming consequences.
In this work, we explore the use of motion as a physical invariant of genuine objects for detecting such attacks.
Preliminary design and implementation of a 3D-TC2 prototype demonstrates very promising performance.
arXiv Detail & Related papers (2021-06-15T01:36:40Z)
- Exploiting Playbacks in Unsupervised Domain Adaptation for 3D Object Detection [55.12894776039135]
State-of-the-art 3D object detectors, based on deep learning, have shown promising accuracy but are prone to over-fit to domain idiosyncrasies.
We propose a novel learning approach that drastically reduces this gap by fine-tuning the detector on pseudo-labels in the target domain.
We show, on five autonomous driving datasets, that fine-tuning the detector on these pseudo-labels substantially reduces the domain gap to new driving environments.
arXiv Detail & Related papers (2021-03-26T01:18:11Z)
- Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures [24.708895480220733]
LiDAR-based perception is vulnerable to spoofing attacks, in which adversaries spoof a fake vehicle in front of a victim self-driving car.
We perform the first study to explore the general vulnerability of current LiDAR-based perception architectures.
We construct the first black-box spoofing attack based on our identified vulnerability, which universally achieves around 80% mean success rates.
arXiv Detail & Related papers (2020-06-30T17:07:45Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle can hide the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer towards safer self-driving under unseen conditions from limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)