What Would Trojans Do? Exploiting Partial-Information Vulnerabilities in Autonomous Vehicle Sensing
- URL: http://arxiv.org/abs/2303.03470v4
- Date: Thu, 13 Mar 2025 20:57:41 GMT
- Title: What Would Trojans Do? Exploiting Partial-Information Vulnerabilities in Autonomous Vehicle Sensing
- Authors: R. Spencer Hallyburton, Qingzhao Zhang, Z. Morley Mao, Michael Reiter, Miroslav Pajic
- Abstract summary: Tier 1 manufacturers have already exposed vulnerabilities to attacks introducing Trojans that can stealthily alter sensor outputs. We analyze the feasible capability and safety-critical outcomes of an attack on sensing at a cyber level. We introduce security-aware sensor fusion incorporating a probabilistic data-asymmetry monitor and scalable track-to-track fusion of 3D LiDAR and monocular detections.
- Score: 14.776762612634906
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Safety-critical sensors in autonomous vehicles (AVs) form an essential part of the vehicle's trusted computing base (TCB), yet they are highly susceptible to attacks. Alarmingly, Tier 1 manufacturers have already exposed vulnerabilities to attacks introducing Trojans that can stealthily alter sensor outputs. We analyze the feasible capability and safety-critical outcomes of an attack on sensing at a cyber level. To further address these threats, we design realistic attacks in AV simulators and real-world datasets under two practical constraints: attackers (1) possess only partial information and (2) are constrained by data structures that maintain sensor integrity. Examining the role of camera and LiDAR in multi-sensor AVs, we find that attacks targeting only the camera have minimal safety impact due to the sensor fusion system's strong reliance on 3D data from LiDAR. This reliance makes LiDAR-based attacks especially detrimental to safety. To mitigate the vulnerabilities, we introduce security-aware sensor fusion incorporating (1) a probabilistic data-asymmetry monitor and (2) a scalable track-to-track fusion of 3D LiDAR and monocular detections (T2T-3DLM). We demonstrate that these methods significantly reduce the attack success rate.
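The abstract names the two defenses but this listing carries no implementation detail. Below is a minimal, hypothetical Python sketch of what a probabilistic data-asymmetry monitor could look like: it counts LiDAR detections that have no camera-matched counterpart in a frame and raises an alarm when that count is improbable under a benign Poisson model. The function names, the Poisson assumption, and the thresholds are illustrative assumptions, not the paper's method; the T2T-3DLM fusion component is not reproduced here.

import math

def poisson_sf(k: int, lam: float) -> float:
    # P[X >= k] for X ~ Poisson(lam), computed as 1 - CDF(k - 1).
    cdf = sum(math.exp(-lam) * lam**i / math.factorial(i) for i in range(k))
    return max(0.0, 1.0 - cdf)

def asymmetry_alarm(n_lidar_only: int, lam_benign: float = 0.5,
                    p_alarm: float = 0.01) -> bool:
    # Hypothetical monitor: flag the frame if the number of LiDAR-only
    # detections (objects with no camera match) is improbably large under
    # a benign mismatch rate of lam_benign per frame.
    return poisson_sf(n_lidar_only, lam_benign) < p_alarm

# Four unmatched LiDAR tracks in one frame is suspicious if benign
# camera/LiDAR mismatches average about 0.5 per frame; one is not.
print(asymmetry_alarm(4))  # True  -> raise a data-asymmetry alarm
print(asymmetry_alarm(1))  # False -> consistent with occlusion/noise

A real monitor would estimate the benign mismatch rate from data and track it over time rather than per frame, but the core idea, testing whether one modality persistently reports objects the other cannot corroborate, is the same.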
Related papers
- IDU-Detector: A Synergistic Framework for Robust Masquerader Attack Detection [3.3821216642235608]
In the digital age, users store personal data in corporate databases, making data security central to enterprise management.
Given the extensive attack surface, assets face challenges like weak authentication, vulnerabilities, and malware.
We introduce the IDU-Detector, integrating Intrusion Detection Systems (IDS) with User and Entity Behavior Analytics (UEBA).
This integration monitors unauthorized access, bridges system gaps, ensures continuous monitoring, and enhances threat identification.
arXiv Detail & Related papers (2024-11-09T13:03:29Z)
- Sensor Deprivation Attacks for Stealthy UAV Manipulation [51.9034385791934]
Unmanned Aerial Vehicles autonomously perform tasks using state-of-the-art control algorithms.
In this work, we propose multi-part Sensor Deprivation Attacks (SDAs), aiming to stealthily impact process control via sensor reconfiguration.
arXiv Detail & Related papers (2024-10-14T23:03:58Z)
- A Comprehensive Study of the Robustness for LiDAR-based 3D Object Detectors against Adversarial Attacks [84.10546708708554]
3D object detectors are increasingly crucial for security-critical tasks.
It is imperative to understand their robustness against adversarial attacks.
This paper presents the first comprehensive evaluation and analysis of the robustness of LiDAR-based 3D detectors under adversarial attacks.
arXiv Detail & Related papers (2022-12-20T13:09:58Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses; a hedged formalization of the detectability idea follows this entry.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
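One way to make "information-theoretic detectability" concrete is as a divergence budget on the observation sequences the defender sees. The formulation below is an illustrative reading under that assumption, not necessarily the paper's exact definition: the attacker maximizes its own objective J_adv while keeping the distribution of attacked observations within epsilon of the benign one.

\max_{\pi_{\mathrm{adv}}} \; \mathbb{E}_{\pi_{\mathrm{adv}}}\!\left[ J_{\mathrm{adv}} \right]
\quad \text{s.t.} \quad
D_{\mathrm{KL}}\!\left( P_{\mathrm{attacked}}(o_{1:T}) \,\Vert\, P_{\mathrm{benign}}(o_{1:T}) \right) \le \epsilon

As epsilon shrinks, any detector's advantage over random guessing vanishes, which is consistent with the entry's claim that such attacks resist automated detection and motivate hardware- and system-level defenses.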
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically realizable, adversarial 3D-printed object that an AD system fails to detect, causing it to crash into the object.
Our results show that the attack achieves over a 90% success rate across different object types and MSF algorithms.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- Security Analysis of Camera-LiDAR Semantic-Level Fusion Against Black-Box Attacks on Autonomous Vehicles [6.477833151094911]
Recently, it was shown that LiDAR-based perception built on deep neural networks is vulnerable to spoofing attacks.
We perform the first analysis of camera-LiDAR fusion under spoofing attacks and the first security analysis of semantic fusion in any AV context.
We find that semantic camera-LiDAR fusion exhibits widespread vulnerability to frustum attacks, with success rates between 70% and 90% against target models; a geometric sketch of the frustum-spoofing idea follows this entry.
arXiv Detail & Related papers (2021-06-13T21:59:19Z)
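A frustum attack places spoofed LiDAR points inside the viewing frustum of a genuine object, so the false returns stay consistent with what the camera reports along that line of sight. The Python sketch below illustrates only the geometry; every name and parameter here is a hypothetical illustration, not code from the cited paper.

import numpy as np

def spoof_in_frustum(target_center, sensor_origin=np.zeros(3),
                     depth_offset=5.0, n_points=60, spread=0.4,
                     rng=np.random.default_rng(0)):
    # Place spoofed LiDAR points along the ray from the sensor through a
    # genuine target, shifted behind (or in front of) it. Because the
    # cluster sits on the same line of sight, semantic checks that only
    # compare camera and LiDAR labels per viewing direction are satisfied.
    # Illustrative geometry only, not a reproduction of the cited attack.
    ray = target_center - sensor_origin
    ray = ray / np.linalg.norm(ray)               # unit line-of-sight
    center = target_center + depth_offset * ray   # shifted along the ray
    return center + spread * rng.standard_normal((n_points, 3))

# Example: a real car at (20 m, 2 m, 0 m); the spoofed cluster appears
# about 5 m farther along the same line of sight.
pts = spoof_in_frustum(np.array([20.0, 2.0, 0.0]))
print(pts.shape)         # (60, 3)
print(pts.mean(axis=0))  # roughly the shifted center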
- Exploring Adversarial Robustness of Multi-Sensor Perception Systems in Self Driving [87.3492357041748]
In this paper, we showcase practical susceptibilities of multi-sensor detection by placing an adversarial object on top of a host vehicle.
Our experiments demonstrate that successful attacks are primarily caused by easily corrupted image features.
Towards more robust multi-modal perception systems, we show that adversarial training with feature denoising can significantly boost robustness to such attacks; a generic training sketch follows this entry.
arXiv Detail & Related papers (2021-01-17T21:15:34Z)
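Adversarial training with feature denoising, as mentioned in the entry above, can be sketched generically: craft a perturbed input with projected gradient descent (PGD), pass it through a network whose intermediate features flow through denoising blocks, and optimize the loss on the perturbed input. The PyTorch sketch below is a minimal stand-in under those assumptions; the module design and all names are illustrative, not the cited paper's architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DenoisingBlock(nn.Module):
    # Simplified residual stand-in for a feature-denoising block: a conv
    # whose output is added back to the input features. Assumed design.
    def __init__(self, channels: int):
        super().__init__()
        self.smooth = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        return x + self.smooth(x)

def pgd_perturb(model, x, y, eps=0.03, step=0.01, iters=10):
    # Standard L-infinity PGD: ascend the loss, project into the eps-ball.
    x0 = x.clone().detach()
    x_adv = x0.clone()
    for _ in range(iters):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + step * grad.sign()
        x_adv = x0 + (x_adv - x0).clamp(-eps, eps)
    return x_adv.detach()

def adv_train_step(model, optimizer, x, y):
    # One adversarial-training step; the model is assumed to interleave
    # DenoisingBlocks between its feature layers.
    x_adv = pgd_perturb(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()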
- Towards Robust LiDAR-based Perception in Autonomous Driving: General Black-box Adversarial Sensor Attack and Countermeasures [24.708895480220733]
LiDAR-based perception is vulnerable to spoofing attacks, in which adversaries spoof a fake vehicle in front of a victim self-driving car.
We perform the first study to explore the general vulnerability of current LiDAR-based perception architectures.
We construct the first black-box spoofing attack based on our identified vulnerability, which universally achieves around 80% mean success rates.
arXiv Detail & Related papers (2020-06-30T17:07:45Z)
- Physically Realizable Adversarial Examples for LiDAR Object Detection [72.0017682322147]
We present a method to generate universal 3D adversarial objects to fool LiDAR detectors.
In particular, we demonstrate that placing an adversarial object on the rooftop of any target vehicle hides the vehicle entirely from LiDAR detectors with a success rate of 80%.
This is a step toward safer self-driving under unseen conditions with limited training data.
arXiv Detail & Related papers (2020-04-01T16:11:04Z)