A Certifiable Security Patch for Object Tracking in Self-Driving Systems
via Historical Deviation Modeling
- URL: http://arxiv.org/abs/2207.08556v1
- Date: Mon, 18 Jul 2022 12:30:24 GMT
- Title: A Certifiable Security Patch for Object Tracking in Self-Driving Systems
via Historical Deviation Modeling
- Authors: Xudong Pan, Qifan Xiao, Mi Zhang, Min Yang
- Abstract summary: We present the first systematic research on the security of object tracking in self-driving cars.
We prove that the mainstream multi-object tracker (MOT) based on the Kalman Filter (KF) is unsafe even with the multi-sensor fusion mechanism enabled.
We propose a simple yet effective security patch for KF-based MOT, the core of which is an adaptive strategy that balances the KF's focus between observations and predictions.
- Score: 22.753164675538457
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Self-driving cars (SDCs) commonly implement a perception pipeline that
detects surrounding obstacles and tracks their moving trajectories, laying the
groundwork for the subsequent driving decision-making process. Although the
security of obstacle detection in SDCs has been intensively studied, only very
recently have attackers begun to exploit vulnerabilities in the tracking
module. Compared with attacking the object detectors alone, this new attack
strategy influences driving decisions more effectively with a smaller attack
budget. However, little is known about whether the revealed vulnerability
remains effective in end-to-end self-driving systems and, if so, how to
mitigate the threat.
In this paper, we present the first systematic study of the security of
object tracking in SDCs. Through a comprehensive case study on the full
perception pipeline of a popular open-source self-driving system, Baidu's
Apollo, we prove that the mainstream multi-object tracker (MOT) based on the
Kalman Filter (KF) is unsafe even with the multi-sensor fusion mechanism
enabled. Our root-cause analysis reveals that the vulnerability is innate to
the design of KF-based MOT: the tracker is expected to error-handle the
detection results from the object detectors, yet the adopted KF algorithm is
prone to trust an observation more when its deviation from the prediction is
larger. To address this design flaw, we propose a simple yet effective
security patch for KF-based MOT, the core of which is an adaptive strategy
that balances the KF's focus between observations and predictions according
to an anomaly index of the observation-prediction deviation; the patch has
certified effectiveness against a generalized hijacking attack model.
Extensive evaluation on $4$ existing KF-based MOT implementations (including
2D and 3D, academic and Apollo ones) validates the effectiveness of the
defense and the negligible performance overhead of our approach.
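To make the adaptive strategy concrete, here is a minimal sketch in Python/NumPy of how such a patch could work, assuming a 1D constant-velocity motion model, a Mahalanobis-distance anomaly index, and a simple measurement-covariance inflation rule; these choices are illustrative assumptions, not the paper's exact algorithm or its certified bound.

    import numpy as np

    class AdaptiveTrustKF:
        """Kalman filter that shifts trust toward its own prediction when the
        observation-prediction deviation looks anomalous (possible hijacking)."""

        def __init__(self, dt=0.1, q=0.1, r=1.0, tau=3.0):
            self.F = np.array([[1.0, dt], [0.0, 1.0]])  # constant-velocity transition
            self.H = np.array([[1.0, 0.0]])             # observe position only
            self.Q = q * np.eye(2)                      # process noise covariance
            self.R = np.array([[r]])                    # measurement noise covariance
            self.x = np.zeros((2, 1))                   # state: [position, velocity]
            self.P = np.eye(2)                          # state covariance
            self.tau = tau                              # hypothetical anomaly threshold

        def step(self, z):
            # Predict.
            self.x = self.F @ self.x
            self.P = self.F @ self.P @ self.F.T + self.Q
            # Innovation: the observation-prediction deviation.
            y = np.array([[z]]) - self.H @ self.x
            S = self.H @ self.P @ self.H.T + self.R
            # Anomaly index of the deviation (Mahalanobis distance).
            d = float(np.sqrt(y.T @ np.linalg.solve(S, y)))
            # Adaptive trust: a vanilla KF applies the gain below with scale = 1
            # regardless of d, so a large spoofed deviation drags the track along;
            # here an anomalous deviation inflates R, shifting the filter's focus
            # from the suspicious observation back to its own prediction.
            scale = (d / self.tau) ** 2 if d > self.tau else 1.0
            S = self.H @ self.P @ self.H.T + scale * self.R
            K = self.P @ self.H.T @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.P = (np.eye(2) - K @ self.H) @ self.P
            return float(self.x[0, 0])

Under this sketch, a spoofed detection that suddenly jumps the tracked position yields a large anomaly index, its influence on the fused state is dampened quadratically, and the track stays close to its predicted trajectory instead of being hijacked.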
Related papers
- ControlLoc: Physical-World Hijacking Attack on Visual Perception in Autonomous Driving [30.286501966393388]
A digital hijacking attack has been proposed to cause dangerous driving scenarios.
We introduce ControlLoc, a novel physical-world adversarial patch attack designed to exploit hijacking vulnerabilities across the entire Autonomous Driving (AD) visual perception pipeline.
arXiv Detail & Related papers (2024-06-09T14:53:50Z) - A Safety-Adapted Loss for Pedestrian Detection in Automated Driving [13.676179470606844]
In safety-critical domains, errors by the object detector may endanger pedestrians and other vulnerable road users.
We propose a safety-aware loss variation that leverages the estimated per-pedestrian criticality scores during training.
arXiv Detail & Related papers (2024-02-05T13:16:38Z) - DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the urgency of safety in driving systems, no solution to adapting MOT to domain shift under test-time conditions had previously been proposed.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z) - Exorcising ''Wraith'': Protecting LiDAR-based Object Detector in
Automated Driving System from Appearing Attacks [20.38692153553779]
Automated driving systems rely on 3D object detectors to recognize possible obstacles from LiDAR point clouds.
Recent works show that an adversary can forge non-existent cars in the prediction results with only a few fake points.
We propose a novel plug-and-play defensive module that works alongside a trained LiDAR-based object detector.
arXiv Detail & Related papers (2023-03-17T02:20:47Z) - AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off road or collide into other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z) - Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion
based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system into failing to detect it and thus crashing into it.
Our results show that the attack achieves over a 90% success rate across different object types and MSF algorithms.
arXiv Detail & Related papers (2021-06-17T05:11:07Z) - End-to-end Uncertainty-based Mitigation of Adversarial Attacks to
Automated Lane Centering [12.11406399284803]
We propose an end-to-end approach that addresses the impact of adversarial attacks throughout perception, planning, and control modules.
Our approach can effectively mitigate the impact of adversarial attacks and can achieve 55% to 90% improvement over the original OpenPilot.
arXiv Detail & Related papers (2021-02-27T22:36:32Z) - Sequential Attacks on Kalman Filter-based Forward Collision Warning
Systems [23.117910305213016]
We study adversarial attacks on Kalman Filter (KF) as part of the machine-human hybrid system of Forward Collision Warning.
Our attack goal is to negatively affect human braking decisions by causing KF to output incorrect state estimations.
We accomplish this by sequentially manipulating the measurements fed into the KF, and propose a novel Model Predictive Control (MPC) approach to compute the optimal manipulation.
arXiv Detail & Related papers (2020-12-16T02:26:27Z) - Automotive Radar Interference Mitigation with Unfolded Robust PCA based
on Residual Overcomplete Auto-Encoder Blocks [88.46770122522697]
In autonomous driving, radar systems play an important role in detecting targets such as other vehicles on the road.
Deep learning methods for automotive radar interference mitigation can successfully estimate the amplitude of targets, but fail to recover the phase of the respective targets.
We propose an efficient and effective technique that is able to estimate both amplitude and phase in the presence of interference.
arXiv Detail & Related papers (2020-10-14T09:41:06Z) - Towards robust sensing for Autonomous Vehicles: An adversarial
perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z) - Adversarial vs behavioural-based defensive AI with joint, continual and
active learning: automated evaluation of robustness to deception, poisoning
and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)