Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack
- URL: http://arxiv.org/abs/2308.11894v1
- Date: Wed, 23 Aug 2023 03:40:47 GMT
- Title: Does Physical Adversarial Example Really Matter to Autonomous Driving? Towards System-Level Effect of Adversarial Object Evasion Attack
- Authors: Ningfei Wang, Yunpeng Luo, Takami Sato, Kaidi Xu, Qi Alfred Chen
- Abstract summary: In autonomous driving (AD), accurate perception is indispensable to achieving safe and secure driving.
Physical adversarial object evasion attacks are especially severe in AD.
All existing literature evaluates attack effects at the targeted AI component level but not at the system level.
We propose SysAdv, a novel system-driven attack design in the AD context.
- Score: 39.08524903081768
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In autonomous driving (AD), accurate perception is indispensable to achieving safe and secure driving. Due to its safety-criticality, the security of AD perception has been widely studied. Among the different attacks on AD perception, physical adversarial object evasion attacks are especially severe. However, we find that all existing literature evaluates attack effects only at the targeted AI component level, not at the system level, i.e., with the entire system semantics and context such as the full AD pipeline. This raises a critical research question: can these existing attack designs effectively achieve system-level attack effects (e.g., traffic rule violations) in a real-world AD context? In this work, we conduct the first measurement study on whether and how effectively the existing designs can lead to system-level effects, focusing on STOP sign-evasion attacks due to their popularity and severity. Our evaluation results show that none of the representative prior works can achieve any system-level effect. We observe two design limitations in the prior works: 1) a physical-model-inconsistent object size distribution in pixel sampling and 2) a lack of vehicle plant model and AD system model consideration. We then propose SysAdv, a novel system-driven attack design in the AD context, and our evaluation results show that the system-level effects can be significantly improved, i.e., the violation rate increases by around 70%.
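The first limitation, a physical-model-inconsistent object size distribution in pixel sampling, comes down to camera geometry: the apparent size of a STOP sign in the image is fixed by the vehicle's distance to it, and an approaching vehicle spends most frames far away, seeing a small sign. Below is a minimal sketch of that geometry using the standard pinhole projection model; all parameter values (focal length, sign size, speed, frame rate) are illustrative assumptions, not figures from the paper.

```python
import numpy as np

# Illustrative parameters (assumptions, not values from the paper).
FOCAL_LENGTH_PX = 1000.0  # camera focal length expressed in pixels
SIGN_SIZE_M = 0.75        # physical STOP sign width in meters
SPEED_MPS = 12.0          # vehicle approach speed (~43 km/h)
FPS = 10.0                # perception frame rate
MAX_DIST_M = 60.0         # distance at which the sign first becomes relevant

def sign_pixel_size(distance_m):
    """Pinhole projection: apparent size shrinks inversely with distance."""
    return FOCAL_LENGTH_PX * SIGN_SIZE_M / distance_m

# Distances observed frame by frame during a constant-speed approach.
# Far distances dominate: most frames see a small sign, so sampling object
# sizes uniformly in pixel space over-represents sizes the camera rarely sees.
distances_m = np.arange(MAX_DIST_M, 1.0, -SPEED_MPS / FPS)
sizes_px = sign_pixel_size(distances_m)

print(f"median apparent size: {np.median(sizes_px):.1f} px")
print(f"apparent size range: {sizes_px.min():.1f}-{sizes_px.max():.1f} px")
```

Weighting pixel sampling by this distance-induced distribution, rather than uniformly over sizes, is the kind of physical-model consistency the measurement study points to.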
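The second limitation, the missing vehicle plant model, can be made concrete with textbook braking kinematics: a per-frame misdetection only becomes a traffic-rule violation if the sign stays undetected past the point where the vehicle can no longer stop in time. The sketch below is a hedged illustration of such a system-level check; the reaction time, deceleration, and the violates_stop_sign predicate are assumptions for illustration, not the paper's SysAdv design.

```python
def min_stopping_distance(speed_mps, reaction_s=0.5, decel_mps2=4.0):
    """Distance covered during reaction time plus braking: v*t + v^2/(2a)."""
    return speed_mps * reaction_s + speed_mps ** 2 / (2.0 * decel_mps2)

def violates_stop_sign(speed_mps, first_detection_dist_m):
    """System-level effect: the evasion attack matters only if the sign is
    first detected too late for the AD stack to stop before reaching it."""
    return first_detection_dist_m < min_stopping_distance(speed_mps)

# At 12 m/s, stopping needs 0.5*12 + 12**2/(2*4) = 6 + 18 = 24 m.
# An attack that keeps the sign hidden until 15 m away still causes a
# violation; one whose evasion fails beyond 24 m has no system-level effect.
print(violates_stop_sign(12.0, first_detection_dist_m=15.0))  # True
print(violates_stop_sign(12.0, first_detection_dist_m=30.0))  # False
```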
Related papers
- ControlLoc: Physical-World Hijacking Attack on Visual Perception in Autonomous Driving [30.286501966393388]
A digital hijacking attack has been proposed to cause dangerous driving scenarios.
We introduce ControlLoc, a novel physical-world adversarial patch attack designed to exploit hijacking vulnerabilities in the entire Autonomous Driving (AD) visual perception pipeline.
arXiv Detail & Related papers (2024-06-09T14:53:50Z)
- SlowPerception: Physical-World Latency Attack against Visual Perception in Autonomous Driving [26.669905199110755]
High latency in visual perception components can lead to safety risks such as vehicle collisions.
We introduce SlowPerception, the first physical-world latency attack against AD perception, which generates projector-based universal perturbations.
SlowPerception achieves second-level latency in physical-world settings, with an average latency of 2.5 seconds across different AD perception systems.
arXiv Detail & Related papers (2024-06-09T14:30:18Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step toward systematically evaluating the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Invisible for both Camera and LiDAR: Security of Multi-Sensor Fusion based Perception in Autonomous Driving Under Physical-World Attacks [62.923992740383966]
We present the first study of security issues of MSF-based perception in AD systems.
We generate a physically realizable, adversarial 3D-printed object that misleads an AD system into failing to detect it and thus crashing into it.
Our results show that the attack achieves an over 90% success rate across different object types and MSF algorithms.
arXiv Detail & Related papers (2021-06-17T05:11:07Z)
- End-to-end Uncertainty-based Mitigation of Adversarial Attacks to Automated Lane Centering [12.11406399284803]
We propose an end-to-end approach that addresses the impact of adversarial attacks throughout the perception, planning, and control modules.
Our approach can effectively mitigate the impact of adversarial attacks, achieving a 55% to 90% improvement over the original OpenPilot.
arXiv Detail & Related papers (2021-02-27T22:36:32Z)
- Dirty Road Can Attack: Security of Deep Learning based Automated Lane Centering under Physical-World Attack [38.3805893581568]
We study the security of state-of-the-art deep learning based ALC systems under physical-world adversarial attacks.
We formulate the problem with a safety-critical attack goal and a novel, domain-specific attack vector: dirty road patches.
We evaluate our attack on a production ALC using 80 scenarios from real-world driving traces.
arXiv Detail & Related papers (2020-09-14T19:22:39Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Towards robust sensing for Autonomous Vehicles: An adversarial perspective [82.83630604517249]
It is of primary importance that the resulting decisions are robust to perturbations.
Adversarial perturbations are purposefully crafted alterations of the environment or of the sensory measurements.
A careful evaluation of the vulnerabilities of autonomous vehicles' sensing system(s) is necessary in order to build and deploy safer systems.
arXiv Detail & Related papers (2020-07-14T05:25:15Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.