Security of Deep Learning based Lane Keeping System under Physical-World
Adversarial Attack
- URL: http://arxiv.org/abs/2003.01782v1
- Date: Tue, 3 Mar 2020 20:35:25 GMT
- Title: Security of Deep Learning based Lane Keeping System under Physical-World
Adversarial Attack
- Authors: Takami Sato, Junjie Shen, Ningfei Wang, Yunhan Jack Jia, Xue Lin and
Qi Alfred Chen
- Abstract summary: Lane-Keeping Assistance System (LKAS) is convenient and widely available today, but also extremely security- and safety-critical.
In this work, we design and implement the first systematic approach to attack real-world DNN-based LKASes.
We identify dirty road patches as a novel and domain-specific threat model for practicality and stealthiness.
- Score: 38.3805893581568
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Lane-Keeping Assistance System (LKAS) is convenient and widely available
today, but also extremely security- and safety-critical. In this work, we design
and implement the first systematic approach to attack real-world DNN-based
LKASes. We identify dirty road patches as a novel and domain-specific threat
model for practicality and stealthiness. We formulate the attack as an
optimization problem, and address the challenge from the inter-dependencies
among attacks on consecutive camera frames. We evaluate our approach on a
state-of-the-art LKAS and our preliminary results show that our attack can
successfully cause it to drive off lane boundaries within as little as 1.3
seconds.
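To make the optimization formulation concrete, here is a minimal sketch of the patch-optimization loop, assuming a differentiable DNN lane model and a differentiable placement function; the names (lane_model, render_patch) are hypothetical and this is not the authors' released implementation. The inter-frame dependency noted in the abstract shows up as a single shared patch texture that must stay effective across every consecutive frame's viewpoint:

import torch

def optimize_dirty_patch(lane_model, render_patch, frames, steps=200, lr=0.01):
    """Gradient ascent on a road-patch texture. The loss is summed over
    consecutive camera frames because one shared texture must mislead the
    lane model from every viewpoint along the trajectory."""
    patch = torch.zeros(3, 64, 256, requires_grad=True)  # RGB patch texture
    opt = torch.optim.Adam([patch], lr=lr)
    for _ in range(steps):
        loss = torch.zeros(())
        for frame in frames:
            attacked = render_patch(frame, patch)  # hypothetical differentiable placement
            steering = lane_model(attacked)        # predicted lateral correction
            loss = loss - steering.abs().mean()    # maximize deviation from the lane
        opt.zero_grad()
        loss.backward()
        opt.step()
        with torch.no_grad():
            patch.clamp_(0.0, 1.0)  # keep the texture a valid image
    return patch.detach()

The actual attack additionally constrains the patch to resemble ordinary road dirt for stealthiness and models vehicle motion between frames; both are omitted from this simplified objective.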
Related papers
- Black-Box Adversarial Attack on Vision Language Models for Autonomous Driving [65.61999354218628]
We take the first step toward designing black-box adversarial attacks specifically targeting vision-language models (VLMs) in autonomous driving systems.
We propose Cascading Adversarial Disruption (CAD), which targets low-level reasoning breakdown by generating and injecting semantics.
We present Risky Scene Induction, which addresses dynamic adaptation by leveraging a surrogate VLM to understand and construct high-level risky scenarios.
arXiv Detail & Related papers (2025-01-23T11:10:02Z)
- Less is More: A Stealthy and Efficient Adversarial Attack Method for DRL-based Autonomous Driving Policies [2.9965913883475137]
We present a stealthy and efficient adversarial attack method for DRL-based autonomous driving policies.
We train the adversary to learn the optimal policy for attacking at critical moments without domain knowledge.
Our method achieves a collision rate of more than 90% within three attacks in most cases.
arXiv Detail & Related papers (2024-12-04T06:11:09Z)
- From Sands to Mansions: Simulating Full Attack Chain with LLM-Organized Knowledge [10.065241604400223]
Multi-stage attack simulations offer a promising approach to enhance system evaluation efficiency.
However, simulating a full attack chain is complex and requires significant time and expertise from security professionals.
We introduce Aurora, a system that autonomously simulates full attack chains based on external attack tools and threat intelligence reports.
arXiv Detail & Related papers (2024-07-24T01:33:57Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically-realizable, adversarial 3D-printed object that causes an AD system to fail to detect it and thus crash into it.
Our results show that the attack achieves over 90% success rate across different object types and MSF algorithms.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering [12.11406399284803]
We propose an end-to-end approach that addresses the impact of adversarial attacks throughout perception, planning, and control modules.
Our approach can effectively mitigate the impact of adversarial attacks and can achieve a 55% to 90% improvement over the original OpenPilot (a minimal sketch of this uncertainty-gating idea appears after this list).
arXiv Detail & Related papers (2021-02-27T22:36:32Z)
- Targeted Physical-World Attention Attack on Deep Learning Models in Road Sign Recognition [79.50450766097686]
This paper proposes the targeted attention attack (TAA) method for real-world road sign attacks.
Experimental results validate that the TAA method improves the attack success rate (by nearly 10%) and reduces the perturbation loss (by about a quarter) compared with the popular RP2 method.
arXiv Detail & Related papers (2020-10-09T02:31:34Z)
- Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack [38.3805893581568]
We study the security of state-of-the-art deep learning based ALC systems under physical-world adversarial attacks.
We formulate the problem with a safety-critical attack goal, and a novel and domain-specific attack vector: dirty road patches.
We evaluate our attack on a production ALC using 80 scenarios from real-world driving traces.
arXiv Detail & Related papers (2020-09-14T19:22:39Z)
- A Self-supervised Approach for Adversarial Robustness [105.88250594033053]
Adversarial examples can cause catastrophic mistakes in Deep Neural Network (DNN) based vision systems.
This paper proposes a self-supervised adversarial training mechanism in the input space.
It provides significant robustness against unseen adversarial attacks.
arXiv Detail & Related papers (2020-06-08T20:42:39Z)
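The uncertainty-based mitigation entry above hints at a reusable defensive pattern. Below is a minimal sketch of that idea under loose assumptions, using Monte Carlo dropout as the uncertainty estimator; the function names, the threshold, and the fallback controller are hypothetical, and this illustrates uncertainty gating in general rather than that paper's actual pipeline:

import torch

@torch.no_grad()
def steer_with_uncertainty(lane_model, frame, fallback_steer, n_samples=16,
                           threshold=0.05):
    """Run the lane model several times with dropout active (MC dropout),
    treat the spread of the predictions as uncertainty, and hand control to
    a conservative fallback when the spread is large, since adversarial
    inputs tend to inflate it."""
    lane_model.train()  # keep dropout layers stochastic at inference time
    samples = torch.stack([lane_model(frame) for _ in range(n_samples)])
    lane_model.eval()
    mean, std = samples.mean(dim=0), samples.std(dim=0)
    if std.max().item() > threshold:  # unstable prediction: possibly attacked
        return fallback_steer         # e.g., a simple hold-lane PID command
    return mean                       # trust the averaged DNN steering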