Security of Deep Learning based Lane Keeping System under Physical-World
Adversarial Attack
- URL: http://arxiv.org/abs/2003.01782v1
- Date: Tue, 3 Mar 2020 20:35:25 GMT
- Title: Security of Deep Learning based Lane Keeping System under Physical-World
Adversarial Attack
- Authors: Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jack Jia, Xue Lin and
Qi Alfred Chen
- Abstract summary: Lane-Keeping Assistance System (LKAS) is convenient and widely available today, but also extremely security- and safety-critical.
In this work, we design and implement the first systematic approach to attack real-world DNN-based LKASes.
We identify dirty road patches as a novel and domain-specific threat model for practicality and stealthiness.
- Score: 38.3805893581568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lane-Keeping Assistance System (LKAS) is convenient and widely
available today, but also extremely security- and safety-critical. In this
work, we design and implement the first systematic approach to attack
real-world DNN-based LKASes. We identify dirty road patches as a novel and
domain-specific threat model for practicality and stealthiness. We formulate
the attack as an optimization problem, and address the challenge arising from
the inter-dependencies among attacks on consecutive camera frames. We evaluate
our approach on a state-of-the-art LKAS, and our preliminary results show that
our attack can successfully cause it to drive off lane boundaries in as little
as 1.3 seconds.
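The attack formulation above admits a simple gradient-based sketch. The code below is a minimal open-loop illustration, not the authors' implementation: `lane_model` (a DNN returning a predicted lateral lane-center offset per frame) and `apply_patch` (a differentiable renderer placing the patch at its road position in each frame) are assumed placeholders, and the closed-loop inter-frame dependency the paper tackles (earlier steering errors changing later frames) is deliberately simplified away here.

```python
# Minimal sketch of a gradient-based dirty-road-patch attack (illustrative only).
# Assumptions: `lane_model` maps frames to predicted lateral lane-center offsets;
# `apply_patch` differentiably renders the patch into each frame.
import torch

def optimize_patch(lane_model, frames, apply_patch, steps=200, lr=0.01):
    """frames: (T, 3, H, W) consecutive camera frames from one driving trace."""
    # Start from a uniform gray patch so the result can resemble road dirt.
    patch = torch.full((3, 64, 256), 0.5, requires_grad=True)
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        # One shared patch appears in every frame, so the per-frame objectives
        # are coupled: each update must steer the prediction on all frames.
        patched = apply_patch(frames, patch)
        offsets = lane_model(patched)      # predicted lateral offsets, shape (T,)
        loss = -offsets.mean()             # push the predicted lane center aside
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)         # keep pixel values printable
    return patch.detach()
```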
Related papers
- A Novel Approach to Guard from Adversarial Attacks using Stable Diffusion [0.0]
We propose a different approach to the AI Guardian framework.
Instead of including adversarial examples in the training process, we propose training the AI system without them.
This aims to create a system that is inherently resilient to a wider range of attacks.
arXiv Detail & Related papers (2024-05-03T04:08:15Z)
- Mutual-modality Adversarial Attack with Semantic Perturbation [81.66172089175346]
We propose a novel approach that generates adversarial attacks in a mutual-modality optimization scheme.
Our approach outperforms state-of-the-art attack methods and can be readily deployed as a plug-and-play solution.
arXiv Detail & Related papers (2023-12-20T05:06:01Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Towards Practical Deployment-Stage Backdoor Attack on Deep Neural Networks [5.231607386266116]
We study the realistic threat of deployment-stage backdoor attacks on deep learning models.
We propose the first gray-box and physically realizable weights attack algorithm for backdoor injection.
Our results suggest the effectiveness and practicality of the proposed attack algorithm.
arXiv Detail & Related papers (2021-11-25T08:25:27Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that misleads an AD system into failing to detect it and thus crashing into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF algorithms.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering [12.11406399284803]
We propose an end-to-end approach that addresses the impact of adversarial attacks throughout perception, planning, and control modules.
Our approach can effectively mitigate the impact of adversarial attacks and can achieve 55% to 90% improvement over the original OpenPilot.
arXiv Detail & Related papers (2021-02-27T22:36:32Z)
- Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real world road sign attack.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z)
- Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack [38.3805893581568]
We study the security of state-of-the-art deep learning based ALC systems under physical-world adversarial attacks.
We formulate the problem with a safety-critical attack goal, and a novel and domain-specific attack vector: dirty road patches.
We evaluate our attack on a production ALC using 80 scenarios from real-world driving traces.
arXiv Detail & Related papers (2020-09-14T19:22:39Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks (see the sketch after this list).
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
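As referenced in the last entry above, the following is a minimal sketch of the general input-space purification idea behind such self-supervised defenses; the purifier architecture, loss, and training loop are illustrative assumptions rather than that paper's implementation, and `feat_extractor` stands in for any pretrained feature network.

```python
# Minimal sketch of self-supervised input-space purification (illustrative only).
# A small network learns to remove perturbations by matching deep features of
# clean images; no class labels are used, so the defense is attack-agnostic.
import torch
import torch.nn as nn

purifier = nn.Sequential(                       # tiny image-to-image network
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
opt = torch.optim.Adam(purifier.parameters(), lr=1e-3)

def train_step(feat_extractor, clean, adversarial):
    """One update: make purified adversarial inputs look clean in feature space."""
    opt.zero_grad()
    purified = purifier(adversarial)
    # Match deep features of the purified image to those of the clean image,
    # independent of which attack produced the perturbation.
    loss = nn.functional.mse_loss(feat_extractor(purified),
                                  feat_extractor(clean).detach())
    loss.backward()
    opt.step()
    return loss.item()
```

At inference, inputs would pass through the purifier before the downstream vision model, so the protected model itself needs no modification.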
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.