Asymmetry Vulnerability and Physical Attacks on Online Map Construction for Autonomous Driving
- URL: http://arxiv.org/abs/2509.06071v1
- Date: Sun, 07 Sep 2025 14:26:21 GMT
- Title: Asymmetry Vulnerability and Physical Attacks on Online Map Construction for Autonomous Driving
- Authors: Yang Lou, Haibo Hu, Qun Song, Qian Xu, Yi Zhu, Rui Tan, Wei-Bin Lee, Jianping Wang
- Abstract summary: We present a systematic vulnerability analysis of online map construction models, which reveals an inherent bias toward predicting symmetric road structures. In asymmetric scenes such as forks or merges, this bias often causes the model to mistakenly predict a straight boundary that mirrors the opposite side. We propose a novel two-stage attack framework capable of manipulating online constructed maps.
- Score: 15.060553970759038
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: High-definition (HD) maps provide precise environmental information essential for prediction and planning in autonomous driving systems. Due to the high cost of labeling and maintenance, recent research has turned to online HD map construction using onboard sensor data, offering wider coverage and more timely updates for autonomous vehicles. However, the robustness of online map construction under adversarial conditions remains underexplored. In this paper, we present a systematic vulnerability analysis of online map construction models, which reveals that these models exhibit an inherent bias toward predicting symmetric road structures. In asymmetric scenes such as forks or merges, this bias often causes the model to mistakenly predict a straight boundary that mirrors the opposite side. We demonstrate that this vulnerability persists in the real world and can be reliably triggered by obstruction or targeted interference. Leveraging this vulnerability, we propose a novel two-stage attack framework capable of manipulating online constructed maps. First, our method identifies vulnerable asymmetric scenes along the victim autonomous vehicle's (AV's) potential route. Then, we optimize the location and pattern of camera-blinding attacks and adversarial patch attacks. Evaluations on a public autonomous driving dataset demonstrate that our attacks can degrade mapping accuracy by up to 9.9%, render up to 44% of targeted routes unreachable, and increase the rate of unsafe planned trajectories, i.e., trajectories that collide with real-world road boundaries, by up to 27%. These attacks are also validated on a real-world testbed vehicle. We further analyze the root causes of the symmetry bias, attributing it to training data imbalance, model architecture, and map element representation. To the best of our knowledge, this study presents the first vulnerability assessment of online map construction models and introduces the first digital and physical attacks against them.
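To make the two-stage structure concrete, here is a minimal Python/NumPy sketch. The mirror-distance heuristic, the function names, and the grid-search placement are our illustrative assumptions, not the paper's implementation:

```python
# Hedged sketch of the paper's two-stage attack pipeline. All names
# (find_asymmetric_scenes, score_attack) and the divergence heuristic are
# illustrative assumptions, not the authors' actual implementation.
import numpy as np

def asymmetry_score(left: np.ndarray, right: np.ndarray) -> float:
    """Stage 1 heuristic: how much the left boundary deviates from a
    mirror image of the right boundary (N x 2 polylines, same length)."""
    mirrored = right.copy()
    mirrored[:, 1] *= -1.0           # reflect across the lane axis (y = 0)
    return float(np.mean(np.linalg.norm(left - mirrored, axis=1)))

def find_asymmetric_scenes(scenes, threshold=1.5):
    """Keep scenes (dicts with 'left'/'right' boundaries) likely to trigger
    the symmetry bias, e.g. forks and merges."""
    return [s for s in scenes if asymmetry_score(s["left"], s["right"]) > threshold]

def optimize_patch_location(scene, candidates, score_attack):
    """Stage 2: grid-search candidate patch/blinding placements and keep the
    one that maximizes an attack objective supplied by the caller."""
    return max(candidates, key=lambda loc: score_attack(scene, loc))

# Toy usage with a synthetic fork: left boundary bends away, right stays straight.
xs = np.linspace(0.0, 30.0, 30)
fork = {"left": np.stack([xs, 3.0 + 0.2 * xs], axis=1),
        "right": np.stack([xs, -3.0 * np.ones_like(xs)], axis=1)}
straight = {"left": np.stack([xs, 3.0 * np.ones_like(xs)], axis=1),
            "right": np.stack([xs, -3.0 * np.ones_like(xs)], axis=1)}
vulnerable = find_asymmetric_scenes([fork, straight])
print(len(vulnerable))  # -> 1: only the fork is flagged
```

In the paper, stage two optimizes camera-blinding and adversarial-patch parameters against the mapping model itself; the `score_attack` callback above is only a placeholder for such an objective.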
Related papers
- Delving into Mapping Uncertainty for Mapless Trajectory Prediction [41.70949328930293]
Recent advances in autonomous driving are moving towards mapless approaches, where High-Definition (HD) maps are generated online directly from sensor data, reducing the need for expensive labeling and maintenance. In this work, we analyze the driving scenarios in which mapping uncertainty has the greatest positive impact on trajectory prediction. We propose a novel Proprioceptive Scenario Gating that adaptively integrates map uncertainty into trajectory prediction.
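As a rough illustration of the gating idea, the hedged PyTorch sketch below computes a gate from the ego vehicle's own (proprioceptive) state and uses it to decide how strongly per-element map uncertainty down-weights map features; the module structure and dimensions are assumptions, not the paper's code:

```python
# Minimal PyTorch sketch of a "scenario gating" idea: a gate computed from the
# ego vehicle's own state decides how strongly per-element map uncertainty
# should modulate the map features fed to the predictor.
import torch
import torch.nn as nn

class ProprioceptiveScenarioGate(nn.Module):
    def __init__(self, ego_dim=6, map_dim=64):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(ego_dim, 32), nn.ReLU(),
                                  nn.Linear(32, 1), nn.Sigmoid())
        self.unc_proj = nn.Linear(1, map_dim)

    def forward(self, ego_state, map_feats, map_unc):
        # ego_state: (B, ego_dim); map_feats: (B, N, map_dim); map_unc: (B, N, 1)
        g = self.gate(ego_state).unsqueeze(1)          # (B, 1, 1) in [0, 1]
        # Down-weight uncertain map elements only when the gate says the
        # scenario benefits from uncertainty-awareness.
        weight = 1.0 - g * torch.sigmoid(self.unc_proj(map_unc))
        return map_feats * weight

gate = ProprioceptiveScenarioGate()
out = gate(torch.randn(2, 6), torch.randn(2, 20, 64), torch.rand(2, 20, 1))
print(out.shape)  # torch.Size([2, 20, 64])
```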
arXiv Detail & Related papers (2025-07-24T15:13:11Z)
- Cluster-Aware Attacks on Graph Watermarks [50.19105800063768]
We introduce a cluster-aware threat model in which adversaries apply community-guided modifications to evade detection. Our results show that cluster-aware attacks can reduce attribution accuracy by up to 80% more than random baselines. We propose a lightweight embedding enhancement that distributes watermark nodes across graph communities.
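The defensive enhancement can be pictured with a short sketch: detect communities, then draw watermark carrier nodes round-robin across them. The selection rule below (highest-degree node per community) is an assumption for illustration, not the paper's algorithm:

```python
# Hedged sketch: spread watermark carrier nodes across graph communities
# instead of letting them cluster, so community-guided edits hit fewer of them.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def spread_watermark_nodes(G: nx.Graph, budget: int):
    communities = [sorted(c, key=G.degree, reverse=True)
                   for c in greedy_modularity_communities(G)]
    chosen, i = [], 0
    while len(chosen) < budget and any(communities):
        comm = communities[i % len(communities)]
        if comm:
            chosen.append(comm.pop(0))   # take the next-highest-degree node
        i += 1
    return chosen

G = nx.karate_club_graph()
nodes = spread_watermark_nodes(G, budget=6)
print(nodes)  # six carrier nodes drawn round-robin from different communities
```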
arXiv Detail & Related papers (2025-04-24T22:49:28Z)
- TopoSD: Topology-Enhanced Lane Segment Perception with SDMap Prior [70.84644266024571]
We propose to train a perception model to "see" standard definition maps (SDMaps).
We encode SDMap elements into neural spatial map representations and instance tokens, and then incorporate such complementary features as prior information.
Based on the lane segment representation framework, the model simultaneously predicts lanes, centrelines and their topology.
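A hedged sketch of what "instance tokens" for SDMap elements might look like: sample each polyline to a fixed number of points, embed them pointwise, and pool one token per element for the perception model to attend to. This is an illustrative assumption, not the TopoSD code:

```python
# Illustrative sketch of turning SDMap polylines into instance tokens a BEV
# perception model could attend to as prior information.
import torch
import torch.nn as nn

class SDMapTokenizer(nn.Module):
    def __init__(self, n_points=20, d_model=128):
        super().__init__()
        self.n_points = n_points
        self.point_mlp = nn.Sequential(nn.Linear(2, d_model), nn.ReLU(),
                                       nn.Linear(d_model, d_model))

    def forward(self, polylines):
        # polylines: (B, M, n_points, 2) xy coordinates in the ego frame
        feats = self.point_mlp(polylines)        # (B, M, n_points, d_model)
        return feats.max(dim=2).values           # one token per SDMap element

tok = SDMapTokenizer()
tokens = tok(torch.randn(2, 8, 20, 2))
print(tokens.shape)  # torch.Size([2, 8, 128]) -> cross-attend as map priors
```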
arXiv Detail & Related papers (2024-11-22T06:13:42Z)
- Your Car Tells Me Where You Drove: A Novel Path Inference Attack via CAN Bus and OBD-II Data [57.22545280370174]
On Path Diagnostic - Intrusion & Inference (OPD-II) is a novel path inference attack leveraging a physical car model and a map-matching algorithm.
We implement our attack on a set of four different cars and a total of 41 tracks in different road and traffic scenarios.
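The core signal such an attack exploits can be sketched in a few lines: integrating speed and yaw-rate readings available over CAN/OBD-II yields a relative path whose shape can then be matched against a road map. The simple dead-reckoning below is our assumption; the paper's physical car model and map-matching step are omitted:

```python
# Hedged sketch of the path-inference signal: integrate speed and yaw-rate
# samples into a relative trajectory (map matching is left out).
import math

def dead_reckon(samples, dt=0.1):
    """samples: list of (speed_mps, yaw_rate_radps); returns the (x, y) path."""
    x = y = heading = 0.0
    path = [(x, y)]
    for v, w in samples:
        heading += w * dt
        x += v * math.cos(heading) * dt
        y += v * math.sin(heading) * dt
        path.append((x, y))
    return path

# 5 s straight at 10 m/s, then a 90-degree left turn over 5 s.
samples = [(10.0, 0.0)] * 50 + [(10.0, math.pi / 2 / 5.0)] * 50
path = dead_reckon(samples)
print(path[-1])  # end point of the reconstructed relative trajectory
```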
arXiv Detail & Related papers (2024-06-30T04:21:46Z)
- A First Physical-World Trajectory Prediction Attack via LiDAR-induced Deceptions in Autonomous Driving [23.08193005790747]
Existing attacks compromise the prediction model of a victim AV.
We propose a novel two-stage attack framework to realize the single-point attack.
Our attack causes a collision rate of up to 63% and various hazardous responses of the victim AV.
arXiv Detail & Related papers (2024-06-17T16:26:00Z)
- Hacking Predictors Means Hacking Cars: Using Sensitivity Analysis to Identify Trajectory Prediction Vulnerabilities for Autonomous Driving Security [1.949927790632678]
In this paper, we conduct a sensitivity analysis on two trajectory prediction models, Trajectron++ and AgentFormer.
The analysis reveals that, among all inputs, almost all of the perturbation sensitivity for both models lies within the most recent position and velocity states.
We additionally demonstrate that, despite dominant sensitivity on state history perturbations, an undetectable image map perturbation can induce large prediction error increases in both models.
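The measurement itself is straightforward to sketch: backpropagate the prediction error to the input history and compare gradient magnitudes per timestep. The toy predictor below is a stand-in for Trajectron++/AgentFormer, which we do not reproduce here:

```python
# Hedged sketch of gradient-based sensitivity analysis on a toy predictor.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 4, 64), nn.ReLU(),
                      nn.Linear(64, 12 * 2))          # 8 past states -> future xy

history = torch.randn(1, 8, 4, requires_grad=True)    # (x, y, vx, vy) per step
target = torch.randn(1, 12 * 2)
loss = nn.functional.mse_loss(model(history), target)
loss.backward()

sensitivity = history.grad.abs().sum(dim=-1).squeeze(0)  # per-timestep score
print(sensitivity)   # in the paper, mass concentrates on the most recent steps
```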
arXiv Detail & Related papers (2024-01-18T18:47:29Z)
- Adversarial Backdoor Attack by Naturalistic Data Poisoning on Trajectory Prediction in Autonomous Driving [18.72382517467458]
We propose a novel adversarial backdoor attack against trajectory prediction models.
Our attack affects the victim at training time via naturalistic, hence stealthy, poisoned samples crafted using a novel two-step approach.
We show that the proposed attack is highly effective, as it can significantly hinder the performance of prediction models.
arXiv Detail & Related papers (2023-06-27T19:15:06Z)
- Avoid Adversarial Adaption in Federated Learning by Multi-Metric Investigations [55.2480439325792]
Federated Learning (FL) facilitates decentralized machine learning model training, preserving data privacy, lowering communication costs, and boosting model performance through diversified data sources.
FL faces vulnerabilities such as poisoning attacks, undermining model integrity with both untargeted performance degradation and targeted backdoor attacks.
We define a new notion of strong adaptive adversaries, capable of adapting to multiple objectives simultaneously.
We introduce MESAS, the first defense robust against strong adaptive adversaries, effective in real-world data scenarios, with an average overhead of just 24.37 seconds.
arXiv Detail & Related papers (2023-06-06T11:44:42Z)
- AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off the road or collide with other vehicles in simulation.
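In the same spirit, here is a minimal sketch of an optimization-based trajectory attack: optimize a perturbation of the attacker's observed history to maximize the victim predictor's error, clamping each step for plausibility. The toy predictor and the 0.2 m per-step bound are our assumptions, not AdvDO's dynamics-based constraints:

```python
# Hedged sketch of an optimization-based trajectory attack.
import torch
import torch.nn as nn

torch.manual_seed(0)
predictor = nn.Sequential(nn.Flatten(), nn.Linear(8 * 2, 32), nn.ReLU(),
                          nn.Linear(32, 12 * 2))

clean_hist = torch.randn(1, 8, 2)              # attacker's past (x, y) positions
ground_truth = predictor(clean_hist).detach()  # stand-in for the benign output

delta = torch.zeros_like(clean_hist, requires_grad=True)
opt = torch.optim.Adam([delta], lr=0.05)
for _ in range(100):
    opt.zero_grad()
    # Maximize prediction error by minimizing its negative.
    loss = -nn.functional.mse_loss(predictor(clean_hist + delta), ground_truth)
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-0.2, 0.2)                # realism: bounded displacement

print(float(delta.abs().max()))  # <= 0.2, yet the prediction is steered away
```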
arXiv Detail & Related papers (2022-09-19T03:34:59Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular semantic segmentation (SS) models by testing the effects of both digital and real-world adversarial patches.
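The evaluation loop such a study runs can be sketched as: paste a patch into the image, rerun the model, and count how many pixel labels flip relative to the clean prediction. The tiny convolutional head and the random (rather than optimized) patch below are stand-ins, not the paper's models or patches:

```python
# Hedged sketch of a patch-robustness evaluation for semantic segmentation.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Conv2d(3, 16, 9, padding=4), nn.ReLU(),
                      nn.Conv2d(16, 19, 1))   # toy 19-class (Cityscapes) head

img = torch.rand(1, 3, 128, 256)
patched = img.clone()
patched[:, :, 40:80, 100:160] = torch.rand(1, 3, 40, 60)   # apply the patch

clean_pred = model(img).argmax(1)
adv_pred = model(patched).argmax(1)
flip_rate = (clean_pred != adv_pred).float().mean()
print(f"pixel labels flipped by the patch: {flip_rate:.1%}")
```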
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Backdoor Attacks on the DNN Interpretation System [16.587968446342995]
Interpretability is crucial to understanding the inner workings of deep neural networks (DNNs).
We design a backdoor attack that alters the saliency map produced by the network for an input image only when a trigger is injected.
We show that our attacks constitute a serious security threat when deploying deep learning models developed by untrusted sources.
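A hedged sketch of the measurement side of this threat: stamp a small trigger onto the input and recompute a gradient saliency map. In the real attack the model is backdoor-trained so the triggered saliency is attacker-controlled; the untrained toy model below only illustrates the pipeline:

```python
# Hedged sketch: trigger injection plus gradient saliency comparison.
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

def saliency(x):
    x = x.clone().requires_grad_(True)
    model(x).max(dim=1).values.sum().backward()
    return x.grad.abs().max(dim=1).values        # (B, 32, 32) saliency map

img = torch.rand(1, 3, 32, 32)
triggered = img.clone()
triggered[:, :, 28:, 28:] = 1.0                  # 4x4 white-square trigger

diff = (saliency(img) - saliency(triggered)).abs().mean()
print(float(diff))  # backdoored models show large, structured shifts here
```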
arXiv Detail & Related papers (2020-11-21T01:54:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.