End-to-end Uncertainty-based Mitigation of Adversarial Attacks to
Automated Lane Centering
- URL: http://arxiv.org/abs/2103.00345v1
- Date: Sat, 27 Feb 2021 22:36:32 GMT
- Title: End-to-end Uncertainty-based Mitigation of Adversarial Attacks to
Automated Lane Centering
- Authors: Ruochen Jiao, Hengyi Liang, Takami Sato, Junjie Shen, Qi Alfred Chen
and Qi Zhu
- Abstract summary: We propose an end-to-end approach that addresses the impact of adversarial attacks throughout perception, planning, and control modules.
Our approach can effectively mitigate the impact of adversarial attacks and can achieve 55% to 90% improvement over the original OpenPilot.
- Score: 12.11406399284803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the development of advanced driver-assistance systems (ADAS) and
autonomous vehicles, machine learning techniques that are based on deep neural
networks (DNNs) have been widely used for vehicle perception. These techniques
offer significant improvements in average perception accuracy over traditional
methods; however, they have been shown to be susceptible to adversarial attacks,
where small perturbations in the input may cause significant errors in the
perception results and lead to system failure. Most prior works addressing such
adversarial attacks focus only on the sensing and perception modules. In this
work, we propose an end-to-end approach that addresses the impact of
adversarial attacks throughout perception, planning, and control modules. In
particular, we choose a target ADAS application, the automated lane centering
system in OpenPilot, quantify the perception uncertainty under adversarial
attacks, and design a robust planning and control module accordingly based on
the uncertainty analysis. We evaluate our proposed approach using both a
public dataset and a production-grade autonomous driving simulator. The
experiment results demonstrate that our approach can effectively mitigate the
impact of adversarial attacks and can achieve 55% to 90% improvement over the
original OpenPilot.
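To make the approach concrete, the following minimal sketch illustrates the general recipe described above: estimate how uncertain the lane-detection output is and let the controller back off when that uncertainty is high. It is an illustration under assumed interfaces (a hypothetical `lane_model` returning a lateral lane-center offset, Monte Carlo dropout as the uncertainty estimator, and a made-up confidence weighting), not the authors' OpenPilot implementation.

```python
import torch

def mc_dropout_lane_offset(lane_model, image, n_samples=20):
    """Estimate the lateral lane-center offset and its uncertainty with
    Monte Carlo dropout (an assumed uncertainty estimator, not necessarily
    the one used in the paper)."""
    lane_model.train()  # keep dropout layers stochastic at inference time
    with torch.no_grad():
        samples = torch.stack([lane_model(image) for _ in range(n_samples)])
    lane_model.eval()
    return samples.mean().item(), samples.std().item()

def uncertainty_aware_steering(mean_offset, std_offset,
                               steer_gain=0.5, std_ref=0.3):
    """Attenuate the steering correction as perception uncertainty grows,
    so a high-variance (possibly attacked) frame cannot command an
    aggressive maneuver. The weighting rule is an illustrative assumption."""
    confidence = 1.0 / (1.0 + std_offset / std_ref)
    return -steer_gain * confidence * mean_offset
```

As the dropout variance grows, the confidence weight shrinks toward zero, so a heavily perturbed frame yields only a small steering correction instead of an abrupt lane departure.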
Related papers
- Attack End-to-End Autonomous Driving through Module-Wise Noise [4.281151553151594]
In this paper, we conduct comprehensive adversarial security research on the modular end-to-end autonomous driving model.
We thoroughly consider the potential vulnerabilities in the model inference process and design a universal attack scheme through module-wise noise injection.
We conduct large-scale experiments on the full-stack autonomous driving model and demonstrate that our attack method outperforms previous attack methods.
arXiv Detail & Related papers (2024-09-12T02:19:16Z)
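As a rough illustration of the module-wise noise idea in the entry above, one can register forward hooks that perturb the intermediate tensors exchanged between modules of a modular driving model; the targeted module names and the uniform noise budget below are assumptions, not the paper's actual scheme.

```python
import torch

def attach_noise_hooks(model, module_names, epsilon=0.05):
    """Register forward hooks that add bounded noise to the outputs of the
    named submodules (e.g. the perception -> planning interfaces). The
    targeted module names and the noise budget are illustrative assumptions."""
    handles = []
    def make_hook(eps):
        def hook(module, inputs, output):
            noise = torch.empty_like(output).uniform_(-eps, eps)
            return output + noise  # replace the module output with a noisy copy
        return hook
    for name, module in model.named_modules():
        if name in module_names:
            handles.append(module.register_forward_hook(make_hook(epsilon)))
    return handles  # call handle.remove() on each to restore clean behavior
```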
- CANEDERLI: On The Impact of Adversarial Training and Transferability on CAN Intrusion Detection Systems [17.351539765989433]
A growing integration of vehicles with external networks has led to a surge in attacks targeting their Controller Area Network (CAN) internal bus.
As a countermeasure, various Intrusion Detection Systems (IDSs) have been suggested in the literature to prevent and mitigate these threats.
Most of these systems rely on data-driven approaches such as Machine Learning (ML) and Deep Learning (DL) models.
In this paper, we present CANEDERLI, a novel framework for securing CAN-based IDSs.
arXiv Detail & Related papers (2024-04-06T14:54:11Z)
- Defensive Perception: Estimation and Monitoring of Neural Network Performance under Deployment [0.6982738885923204]
We propose a method for addressing the issue of unnoticed catastrophic deployment and domain shift in neural networks for semantic segmentation in autonomous driving.
Our approach is based on the idea that deep learning-based perception for autonomous driving is uncertain and best represented as a probability distribution.
arXiv Detail & Related papers (2023-08-11T07:45:36Z)
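A simple stand-in for the monitoring idea in the Defensive Perception entry above is to treat each segmentation output as a per-pixel probability distribution and flag frames whose mean predictive entropy drifts above a threshold calibrated on in-domain data; the interfaces and threshold below are assumptions rather than the paper's method.

```python
import torch
import torch.nn.functional as F

def mean_pixel_entropy(logits):
    """Mean per-pixel predictive entropy of a segmentation output.
    `logits` is assumed to have shape (batch, classes, H, W)."""
    probs = F.softmax(logits, dim=1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=1)  # (B, H, W)
    return entropy.mean().item()

def is_out_of_domain(logits, entropy_threshold=1.0):
    """Flag a frame whose uncertainty exceeds a threshold calibrated on
    in-domain data (the threshold value here is a placeholder)."""
    return mean_pixel_entropy(logits) > entropy_threshold
```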
- AdvDO: Realistic Adversarial Attacks for Trajectory Prediction [87.96767885419423]
Trajectory prediction is essential for autonomous vehicles to plan correct and safe driving behaviors.
We devise an optimization-based adversarial attack framework to generate realistic adversarial trajectories.
Our attack can lead an AV to drive off road or collide into other vehicles in simulation.
arXiv Detail & Related papers (2022-09-19T03:34:59Z)
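The optimization-based attack in the AdvDO entry above can be caricatured as projected gradient ascent on the observed trajectory history, maximizing the predictor's error while keeping the perturbation bounded; the realism and dynamics constraints emphasized by the paper are omitted here, and the predictor interface is assumed.

```python
import torch
import torch.nn.functional as F

def pgd_trajectory_attack(predictor, history, future_gt,
                          epsilon=0.5, alpha=0.05, steps=40):
    """Perturb an observed trajectory `history` (tensor of (T_obs, 2) positions)
    so the predictor's forecast deviates from `future_gt`. Plain L-inf PGD;
    the paper's dynamics/realism constraints are not modeled."""
    delta = torch.zeros_like(history, requires_grad=True)
    for _ in range(steps):
        pred = predictor(history + delta)
        loss = F.mse_loss(pred, future_gt)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # ascend the prediction error
            delta.clamp_(-epsilon, epsilon)      # keep the perturbation bounded
        delta.grad.zero_()
    return (history + delta).detach()
```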
- Robust Trajectory Prediction against Adversarial Attacks [84.10405251683713]
Trajectory prediction using deep neural networks (DNNs) is an essential component of autonomous driving systems.
These methods are vulnerable to adversarial attacks, leading to serious consequences such as collisions.
In this work, we identify two key ingredients to defend trajectory prediction models against adversarial attacks.
arXiv Detail & Related papers (2022-07-29T22:35:05Z)
- A Certifiable Security Patch for Object Tracking in Self-Driving Systems via Historical Deviation Modeling [22.753164675538457]
We present the first systematic research on the security of object tracking in self-driving cars.
We prove the mainstream multi-object tracker (MOT) based on Kalman Filter (KF) is unsafe even with an enabled multi-sensor fusion mechanism.
We propose a simple yet effective security patch for KF-based MOT, the core of which is an adaptive strategy to balance the focus of KF on observations and predictions.
arXiv Detail & Related papers (2022-07-18T12:30:24Z)
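The security patch in the entry above hinges on adaptively shifting the Kalman filter's trust between its own prediction and an incoming, possibly attacked, detection. The scalar toy update below captures that balance; the deviation-based damping rule and constants are assumptions, not the paper's actual patch.

```python
import numpy as np

def adaptive_kf_update(x_pred, p_pred, z, r=1.0, deviation_scale=3.0):
    """One scalar Kalman update whose gain is damped when the measurement z
    deviates strongly from the prediction x_pred, so a suspicious (possibly
    adversarial) detection moves the track less. Illustrative rule only."""
    innovation = z - x_pred
    s = p_pred + r                                # innovation variance
    deviation = abs(innovation) / np.sqrt(s)      # normalized deviation
    trust = 1.0 / (1.0 + (deviation / deviation_scale) ** 2)  # 1 = normal, -> 0 = outlier
    k = trust * p_pred / s                        # damped Kalman gain
    x_new = x_pred + k * innovation
    p_new = (1.0 - k) * p_pred
    return x_new, p_new
```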
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system to fail in detecting it and thus crash into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of their sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (User and Entity Behaviour Analytics, UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.