Seeing is Deceiving: Mirror-Based LiDAR Spoofing for Autonomous Vehicle Deception
- URL: http://arxiv.org/abs/2509.17253v2
- Date: Tue, 23 Sep 2025 17:34:14 GMT
- Title: Seeing is Deceiving: Mirror-Based LiDAR Spoofing for Autonomous Vehicle Deception
- Authors: Selma Yahia, Ildi Alla, Girija Bangalore Mohan, Daniel Rau, Mridula Singh, Valeria Loscri
- Abstract summary: We show a class of low-cost, passive LiDAR spoofing attacks that exploit mirror-like surfaces. These attacks require no electronics or custom fabrication and can be deployed in real settings. Experiments show mirror attacks corrupt occupancy grids, induce false detections, and trigger unsafe planning and control behaviors.
- Score: 0.6865481844899556
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Autonomous vehicles (AVs) rely heavily on LiDAR sensors for accurate 3D perception. We show a novel class of low-cost, passive LiDAR spoofing attacks that exploit mirror-like surfaces to inject or remove objects from an AV's perception. Using planar mirrors to redirect LiDAR beams, these attacks require no electronics or custom fabrication and can be deployed in real settings. We define two adversarial goals: Object Addition Attacks (OAA), which create phantom obstacles, and Object Removal Attacks (ORA), which conceal real hazards. We develop geometric optics models, validate them with controlled outdoor experiments using a commercial LiDAR and an Autoware-equipped vehicle, and implement a CARLA-based simulation for scalable testing. Experiments show mirror attacks corrupt occupancy grids, induce false detections, and trigger unsafe planning and control behaviors. We discuss potential defenses (thermal sensing, multi-sensor fusion, light-fingerprinting) and their limitations.
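To make the attack geometry concrete, below is a minimal sketch of the planar-mirror optics, not the paper's actual model: the sensor assumes each beam travels in a straight line, so a return redirected by a mirror is reported along the original beam direction at the total optical path length, producing a phantom point. All function and variable names are hypothetical.

```python
import numpy as np

def reflect(d, n):
    """Reflect unit direction d about a plane with unit normal n."""
    return d - 2.0 * np.dot(d, n) * n

def phantom_point(origin, d, mirror_pt, mirror_n, reflected_range):
    """Return (real_hit, perceived_point) for a beam redirected by a planar mirror.

    The redirected beam travels `reflected_range` metres past the mirror before
    hitting a real surface. The sensor assumes straight-line travel, so it
    reports the return along the ORIGINAL direction d at the total path length,
    which is what creates the phantom-obstacle (OAA) geometry.
    """
    denom = np.dot(d, mirror_n)
    if abs(denom) < 1e-9:
        return None  # beam parallel to the mirror plane
    t = np.dot(mirror_pt - origin, mirror_n) / denom
    if t <= 0:
        return None  # mirror is behind the sensor
    hit = origin + t * d                                     # beam strikes the mirror here
    real_hit = hit + reflected_range * reflect(d, mirror_n)  # true surface location
    perceived = origin + (t + reflected_range) * d           # phantom point on the original ray
    return real_hit, perceived

# A 45-degree mirror 5 m ahead redirects a forward beam toward a wall 4 m to the side:
sensor = np.array([0.0, 0.0, 1.5])
beam = np.array([1.0, 0.0, 0.0])
mirror_pt = np.array([5.0, 0.0, 1.5])
mirror_n = np.array([-1.0, 1.0, 0.0]) / np.sqrt(2.0)
real_hit, perceived = phantom_point(sensor, beam, mirror_pt, mirror_n, 4.0)
print(real_hit)   # [5. 4. 1.5] -- the wall actually hit
print(perceived)  # [9. 0. 1.5] -- a ghost return 9 m straight ahead
```

In this toy example, a wall 4 m to the side appears as an obstacle 9 m directly ahead of the vehicle, which is exactly the Object Addition Attack geometry; aiming the redirected beam at the open sky instead would suppress returns, the Object Removal direction.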
Related papers
- LiDAR Spoofing Meets the New-Gen: Capability Improvements, Broken Assumptions, and New Attack Strategies [26.9731228822657]
A recent line of research finds that one can manipulate the LiDAR point cloud and fool object detectors by firing malicious lasers at the LiDAR.
We conduct the first large-scale measurement study on LiDAR spoofing attack capabilities on object detectors with 9 popular LiDARs.
We uncover a total of 15 novel findings, including not only completely new ones due to the measurement angle novelty, but also many that can directly challenge the latest understandings in this problem space.
arXiv Detail & Related papers (2023-03-19T03:46:27Z)
- Exorcising "Wraith": Protecting LiDAR-based Object Detector in Automated Driving System from Appearing Attacks [20.38692153553779]
Automated driving systems rely on 3D object detectors to recognize possible obstacles from LiDAR point clouds.
Recent works show the adversary can forge non-existent cars in the prediction results with a few fake points.
We propose a novel plug-and-play defensive module that works alongside a trained LiDAR-based object detector.
arXiv Detail & Related papers (2023-03-17T02:20:47Z)
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- Fooling LiDAR Perception via Adversarial Trajectory Perturbation [13.337443990751495]
LiDAR point clouds collected from a moving vehicle are functions of its trajectory, because the sensor motion must be compensated to avoid distortions.
Could motion compensation consequently become a wide-open backdoor in those networks, given both the adversarial vulnerability of deep learning and GPS-based vehicle trajectory estimation?
We demonstrate such possibilities for the first time: instead of directly attacking point cloud coordinates, which requires tampering with the raw LiDAR readings, adversarially spoofing a self-driving car's trajectory with small perturbations is enough.
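As a rough illustration of that mechanism, the sketch below (2-D for brevity, hypothetical names, not the paper's implementation) maps each LiDAR firing through the estimated vehicle pose at its timestamp; perturbing the trajectory estimate coherently displaces the compensated cloud without touching any raw range.

```python
import numpy as np

def rot2d(yaw):
    """2-D rotation matrix for a yaw angle in radians."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s], [s, c]])

def motion_compensate(points, timestamps, poses):
    """Map raw per-firing LiDAR points into one common frame.

    points: (N, 2) points in the sensor frame at each firing time
    poses:  dict t -> (xy translation, yaw) of the *estimated* vehicle pose at t
    A perturbed trajectory estimate shifts every compensated point, which is
    the backdoor being exploited: no raw LiDAR reading needs to be tampered.
    """
    out = []
    for p, t in zip(points, timestamps):
        xy, yaw = poses[t]
        out.append(rot2d(yaw) @ p + xy)  # sensor frame -> world frame at time t
    return np.array(out)

# One point per firing, vehicle moving 1 m per step along x:
pts = np.array([[10.0, 0.0], [10.0, 0.0], [10.0, 0.0]])
ts = [0, 1, 2]
true_poses = {t: (np.array([t * 1.0, 0.0]), 0.0) for t in ts}
# Adversarially perturbed trajectory: a few centimetres of drift per step.
bad_poses = {t: (np.array([t * 1.0, t * 0.05]), 0.002 * t) for t in ts}

clean = motion_compensate(pts, ts, true_poses)
spoofed = motion_compensate(pts, ts, bad_poses)
print(spoofed - clean)  # small, structured displacement of the compensated cloud
```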
arXiv Detail & Related papers (2021-03-29T04:34:31Z)
- Object Removal Attacks on LiDAR-based 3D Object Detectors [6.263478017242508]
Object Removal Attacks (ORAs) aim to force 3D object detectors to fail.
We leverage the default setting of LiDARs that record a single return signal per direction to perturb point clouds in the region of interest.
Our results show that the attack is effective in degrading the performance of commonly used 3D object detection models.
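A minimal sketch of the single-return setting being leveraged (toy range image, hypothetical names): when only the closest return per beam direction is kept, injecting an earlier return over the region of interest erases the genuine points there.

```python
import numpy as np

def apply_first_return(real_ranges, spoof_ranges):
    """Model the single-return-per-direction setting the attack leverages.

    real_ranges:  (H, W) range image of genuine returns, np.inf where no return
    spoof_ranges: (H, W) attacker-injected ranges, np.inf where no spoofing
    With one return kept per beam direction, any spoofed return that arrives
    earlier (closer) than the genuine one replaces it, erasing the real point.
    """
    return np.minimum(real_ranges, spoof_ranges)

# Toy 1x5 scan line: a real obstacle at 10 m occupies the middle three beams.
real = np.array([[np.inf, 10.0, 10.0, 10.0, np.inf]])
spoof = np.full_like(real, np.inf)
spoof[0, 1:4] = 3.0  # inject closer returns over the region of interest
print(apply_first_return(real, spoof))  # the 10 m object vanishes from the scan
```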
arXiv Detail & Related papers (2021-02-07T05:34:14Z)
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can boost robustness to such attacks significantly.
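For intuition only, here is a generic adversarial-training loop on a tiny logistic model; the paper's multi-sensor architecture and feature-denoising component are not reproduced, and every name here is a simplified assumption. Each step crafts a worst-case FGSM perturbation of the input and then updates the model on the perturbed example.

```python
import numpy as np

def fgsm_perturb(x, grad_x, eps):
    """One FGSM step: move the input along the sign of its loss gradient."""
    return x + eps * np.sign(grad_x)

def loss_and_input_grad(w, x, y):
    """Logistic loss and its closed-form gradient with respect to the input."""
    p = 1.0 / (1.0 + np.exp(-(w @ x)))
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return loss, (p - y) * w  # dL/dx for logistic loss

rng = np.random.default_rng(0)
w = rng.normal(size=4)
x, y = rng.normal(size=4), 1.0
lr, eps = 0.1, 0.05
for _ in range(100):
    _, gx = loss_and_input_grad(w, x, y)
    x_adv = fgsm_perturb(x, gx, eps)   # craft a worst-case input
    p = 1.0 / (1.0 + np.exp(-(w @ x_adv)))
    w -= lr * (p - y) * x_adv          # train on the adversarial input
print(loss_and_input_grad(w, x_adv, y)[0])  # loss on the perturbed input after training
```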
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
- Recovering and Simulating Pedestrians in the Wild [81.38135735146015]
We propose to recover the shape and motion of pedestrians from sensor readings captured in the wild by a self-driving car as it drives around.
We incorporate the reconstructed pedestrian assets bank in a realistic 3D simulation system.
We show that the simulated LiDAR data can be used to significantly reduce the amount of real-world data required for visual perception tasks.
arXiv Detail & Related papers (2020-11-16T17:16:32Z)
- Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures [24.708895480220733]
LiDAR-based perception is vulnerable to spoofing attacks, in which adversaries spoof a fake vehicle in front of a victim self-driving car.
We perform the first study to explore the general vulnerability of current LiDAR-based perception architectures.
We construct the first black-box spoofing attack based on our identified vulnerability, which universally achieves mean success rates of around 80%.
arXiv Detail & Related papers (2020-06-30T17:07:45Z)
- LiDARsim: Realistic LiDAR Simulation by Leveraging the Real World [84.57894492587053]
We develop a novel simulator that captures both the power of physics-based and learning-based simulation.
We first utilize ray casting over the 3D scene and then use a deep neural network to produce deviations from the physics-based simulation.
We showcase LiDARsim's usefulness for testing perception algorithms on long-tail events and for end-to-end closed-loop evaluation in safety-critical scenarios.
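That two-stage structure can be sketched as follows; the learned stage is a stand-in function here, since the paper's actual network and features are not described in the abstract, and all names are hypothetical. A physics step raycasts ideal ranges, then a learned step predicts per-ray deviations such as range noise and ray drop.

```python
import numpy as np

def raycast(scene_depth_fn, directions):
    """Physics stage: ideal ranges from casting rays into the scene."""
    return np.array([scene_depth_fn(d) for d in directions])

def learned_deviation(ranges):
    """Stand-in for the trained network that predicts per-ray deviations
    (range noise and ray-drop probability) from the physics output.
    A real model would be learned from paired real/simulated scans."""
    noise = 0.02 * np.sin(ranges)                 # placeholder range perturbation
    drop_prob = np.clip(ranges / 100.0, 0, 0.3)   # farther rays drop more often
    return noise, drop_prob

def simulate_scan(scene_depth_fn, directions, rng):
    ideal = raycast(scene_depth_fn, directions)
    noise, drop = learned_deviation(ideal)
    kept = rng.random(len(ideal)) > drop          # learned ray-dropping
    return (ideal + noise)[kept]

rng = np.random.default_rng(0)
dirs = np.linspace(0, np.pi, 8)                   # toy 1-D scene: range varies with angle
print(simulate_scan(lambda a: 20.0 + 5.0 * np.cos(a), dirs, rng))
```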
arXiv Detail & Related papers (2020-06-16T17:44:35Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is one step closer to safer self-driving under unseen conditions with limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)
- LIBRE: The Multiple 3D LiDAR Dataset [54.25307983677663]
We present LIBRE: LiDAR Benchmarking and Reference, a first-of-its-kind dataset featuring 10 different LiDAR sensors.
LIBRE will provide the research community with a means for fair comparison of currently available LiDARs.
It will also facilitate the improvement of existing self-driving vehicles and robotics-related software.
arXiv Detail & Related papers (2020-03-13T06:17:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.