Physical Adversarial Attack meets Computer Vision: A Decade Survey
- URL: http://arxiv.org/abs/2209.15179v3
- Date: Sun, 1 Oct 2023 05:06:56 GMT
- Title: Physical Adversarial Attack meets Computer Vision: A Decade Survey
- Authors: Hui Wei, Hao Tang, Xuemei Jia, Zhixiang Wang, Hanxun Yu, Zhubo Li,
Shin'ichi Satoh, Luc Van Gool, Zheng Wang
- Abstract summary: This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
- Score: 57.46379460600939
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the impressive achievements of Deep Neural Networks (DNNs) in
computer vision, their vulnerability to adversarial attacks remains a critical
concern. Extensive research has demonstrated that incorporating sophisticated
perturbations into input images can lead to a catastrophic degradation in DNNs'
performance. This perplexing phenomenon not only exists in the digital space
but also in the physical world. Consequently, it becomes imperative to evaluate
the security of DNNs-based systems to ensure their safe deployment in
real-world scenarios, particularly in security-sensitive applications. To
facilitate a profound understanding of this topic, this paper presents a
comprehensive overview of physical adversarial attacks. Firstly, we distill
four general steps for launching physical adversarial attacks. Building upon
this foundation, we uncover the pervasive role of artifacts carrying
adversarial perturbations in the physical world. These artifacts influence each
step. To denote them, we introduce a new term: adversarial medium. Then, we
take the first step to systematically evaluate the performance of physical
adversarial attacks, taking the adversarial medium as a first attempt. Our
proposed evaluation metric, hiPAA, comprises six perspectives: Effectiveness,
Stealthiness, Robustness, Practicability, Aesthetics, and Economics. We also
provide comparative results across task categories, together with insightful
observations and suggestions for future research directions.
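The abstract names the six hiPAA perspectives but does not say how they are combined into a single score. The snippet below is a minimal sketch, assuming each perspective is normalized to [0, 1] and aggregated with hypothetical weights; both the normalization and the weights are placeholders, not the paper's actual formulation.

```python
from dataclasses import dataclass

# Hypothetical weights for the six hiPAA perspectives; the paper's actual
# weighting scheme is not given in the abstract, so these are placeholders.
HIPAA_WEIGHTS = {
    "effectiveness": 0.30,
    "stealthiness": 0.20,
    "robustness": 0.20,
    "practicability": 0.15,
    "aesthetics": 0.10,
    "economics": 0.05,
}

@dataclass
class HiPAAScores:
    """Per-perspective scores, each assumed to be normalized to [0, 1]."""
    effectiveness: float
    stealthiness: float
    robustness: float
    practicability: float
    aesthetics: float
    economics: float

def hipaa_score(scores: HiPAAScores, weights=HIPAA_WEIGHTS) -> float:
    """Weighted aggregate of the six perspectives (illustrative only)."""
    return sum(w * getattr(scores, name) for name, w in weights.items())

# Example: a sticker-style attack that is highly effective but easy to spot.
example = HiPAAScores(effectiveness=0.9, stealthiness=0.3, robustness=0.7,
                      practicability=0.8, aesthetics=0.4, economics=0.9)
print(f"hiPAA (illustrative): {hipaa_score(example):.3f}")
```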
Related papers
- Attack Anything: Blind DNNs via Universal Background Adversarial Attack [17.73886733971713]
It has been widely substantiated that deep neural networks (DNNs) are vulnerable to adversarial perturbations.
We propose a background adversarial attack framework to attack anything, whereby the attack efficacy generalizes well across diverse objects, models, and tasks.
We conduct comprehensive and rigorous experiments in both digital and physical domains across various objects, models, and tasks, demonstrating the effectiveness of the proposed attack-anything framework.
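For intuition on how a background-only attack can be framed, the sketch below confines a PGD-style perturbation to background pixels given a foreground mask. This is a generic illustration, not the authors' implementation; the model, mask, and budget parameters are assumptions.

```python
import torch
import torch.nn.functional as F

def background_pgd(model, image, fg_mask, label, eps=8 / 255, alpha=2 / 255, steps=20):
    """
    PGD-style sketch of a background-only attack: the perturbation is confined
    to pixels where fg_mask == 0, so the foreground object itself is untouched.
    image:   (1, 3, H, W) tensor in [0, 1]
    fg_mask: (1, 1, H, W) binary tensor, 1 = foreground object, 0 = background
    label:   (1,) tensor with the ground-truth class index (untargeted attack)
    """
    bg_mask = 1.0 - fg_mask
    delta = torch.zeros_like(image, requires_grad=True)
    for _ in range(steps):
        adv = torch.clamp(image + delta * bg_mask, 0.0, 1.0)
        loss = F.cross_entropy(model(adv), label)
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()   # gradient ascent on the loss
            delta.clamp_(-eps, eps)              # stay within the L_inf budget
            delta.grad.zero_()
    return torch.clamp(image + delta.detach() * bg_mask, 0.0, 1.0)
```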
arXiv Detail & Related papers (2024-08-17T12:46:53Z)
- Attention-Based Real-Time Defenses for Physical Adversarial Attacks in Vision Applications [58.06882713631082]
Deep neural networks exhibit excellent performance in computer vision tasks, but their vulnerability to real-world adversarial attacks raises serious security concerns.
This paper proposes an efficient attention-based defense mechanism that exploits adversarial channel-attention to quickly identify and track malicious objects in shallow network layers.
It also introduces an efficient multi-frame defense framework, validating its efficacy through extensive experiments aimed at evaluating both defense performance and computational cost.
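As a rough illustration of inspecting shallow-layer channel activations, the sketch below pools channel-attention scores and flags inputs whose profile deviates strongly from clean-data statistics. This is a generic heuristic, not the defense proposed in the paper; the statistics and threshold are assumptions.

```python
import torch

def channel_attention_scores(feat):
    """
    feat: (N, C, H, W) activations from a shallow conv layer.
    Returns per-channel attention scores via global average pooling,
    softmax-normalized so that unusually dominant channels stand out.
    """
    pooled = feat.mean(dim=(2, 3))                 # (N, C)
    return torch.softmax(pooled, dim=1)

def flag_suspicious(feat, clean_mean, clean_std, z_thresh=3.0):
    """
    Flags inputs whose channel-attention profile deviates from statistics
    collected on clean data (clean_mean / clean_std have shape (C,)).
    """
    attn = channel_attention_scores(feat)          # (N, C)
    z = (attn - clean_mean) / (clean_std + 1e-8)
    return (z.abs() > z_thresh).any(dim=1)         # (N,) boolean flags
```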
arXiv Detail & Related papers (2023-11-19T00:47:17Z)
- Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook [2.1771693754641013]
We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristics and distinguishing features.
Our article delves into various physical adversarial attack methods, categorized according to their target tasks in different applications.
We assess the performance of these attack methods in terms of their effectiveness, stealthiness, and robustness.
arXiv Detail & Related papers (2023-08-11T15:02:19Z)
- Contextual adversarial attack against aerial detection in the physical world [8.826711009649133]
Deep Neural Networks (DNNs) have been extensively utilized in aerial detection.
Physical attacks have gradually become a hot research topic because they are more practical in the real world.
We propose an innovative contextual attack method against aerial detection in real scenarios.
arXiv Detail & Related papers (2023-02-27T02:57:58Z)
- Visually Adversarial Attacks and Defenses in the Physical World: A Survey [27.40548512511512]
Current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their attack forms.
In this paper, we survey current physical adversarial attacks and physical adversarial defenses in computer vision.
arXiv Detail & Related papers (2022-11-03T09:28:45Z)
- A Survey on Physical Adversarial Attack in Computer Vision [7.053905447737444]
Deep neural networks (DNNs) have been demonstrated to be vulnerable to adversarial examples crafted with small malicious perturbations.
With the increasing deployment of DNN-based systems in the real world, strengthening their robustness has become an urgent need.
arXiv Detail & Related papers (2022-09-28T17:23:52Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
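Sticker attacks of this kind are commonly optimized with expectation over transformation (EOT) so the printed patch survives changes in placement and pose. The snippet below is a generic EOT-style sketch, not the exact procedure from this paper; the embedding model, sticker size, and placement sampling are assumptions.

```python
import torch
import torch.nn.functional as F

def optimize_sticker(embed_model, face, true_emb, sticker_hw=(48, 48),
                     steps=200, lr=0.05, n_placements=8):
    """
    Generic EOT-style sticker sketch: push the stickered face's embedding away
    from the subject's true embedding, averaging over random placements as a
    stand-in for the pose/printing transforms a physical sticker must survive.
    face: (1, 3, H, W) in [0, 1]; embed_model returns L2-normalized embeddings.
    """
    sh, sw = sticker_hw
    _, _, H, W = face.shape
    sticker = torch.rand(1, 3, sh, sw, requires_grad=True)
    opt = torch.optim.Adam([sticker], lr=lr)
    for _ in range(steps):
        loss = 0.0
        for _ in range(n_placements):
            top = int(torch.randint(0, H - sh + 1, (1,)))
            left = int(torch.randint(0, W - sw + 1, (1,)))
            mask = torch.zeros_like(face)
            mask[:, :, top:top + sh, left:left + sw] = 1.0
            # Pad the sticker onto a full-size canvas and composite it in.
            pad = F.pad(sticker.clamp(0, 1),
                        (left, W - left - sw, top, H - top - sh))
            patched = face * (1 - mask) + pad * mask
            loss = loss + F.cosine_similarity(embed_model(patched), true_emb).mean()
        (loss / n_placements).backward()   # lower similarity => stronger dodging
        opt.step()
        opt.zero_grad()
    return sticker.detach().clamp(0, 1)
```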
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- Evaluating the Robustness of Semantic Segmentation for Autonomous Driving against Real-World Adversarial Patch Attacks [62.87459235819762]
In a real-world scenario like autonomous driving, more attention should be devoted to real-world adversarial examples (RWAEs).
This paper presents an in-depth evaluation of the robustness of popular SS models by testing the effects of both digital and real-world adversarial patches.
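Such an evaluation usually boils down to comparing segmentation quality with and without the patch. The sketch below reports the per-image mIoU drop, assuming a segmentation model that returns raw per-class logits and a pre-computed adversarial patch; it is illustrative, not the paper's evaluation pipeline.

```python
import torch

def miou(pred, target, num_classes, ignore_index=255):
    """Mean intersection-over-union between predicted and ground-truth label maps."""
    ious, valid = [], target != ignore_index
    for c in range(num_classes):
        p, t = (pred == c) & valid, (target == c) & valid
        union = (p | t).sum().item()
        if union > 0:
            ious.append((p & t).sum().item() / union)
    return sum(ious) / len(ious) if ious else float("nan")

@torch.no_grad()
def patch_robustness(seg_model, image, target, patch, top, left, num_classes):
    """
    Reports the mIoU drop caused by pasting an adversarial patch at (top, left).
    image: (1, 3, H, W) in [0, 1]; target: (H, W) label map; patch: (3, h, w).
    seg_model is assumed to return raw logits of shape (1, num_classes, H, W).
    """
    clean_pred = seg_model(image).argmax(dim=1)[0]
    attacked = image.clone()
    h, w = patch.shape[1:]
    attacked[0, :, top:top + h, left:left + w] = patch
    adv_pred = seg_model(attacked).argmax(dim=1)[0]
    clean_miou = miou(clean_pred, target, num_classes)
    adv_miou = miou(adv_pred, target, num_classes)
    return clean_miou, adv_miou, clean_miou - adv_miou
```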
arXiv Detail & Related papers (2021-08-13T11:49:09Z)
- Measurement-driven Security Analysis of Imperceptible Impersonation Attacks [54.727945432381716]
We study the exploitability of Deep Neural Network-based Face Recognition systems.
We show that factors such as skin color, gender, and age impact the ability to carry out an attack on a specific target victim.
We also study the feasibility of constructing universal attacks that are robust to different poses or views of the attacker's face.
arXiv Detail & Related papers (2020-08-26T19:27:27Z)
- Spatiotemporal Attacks for Embodied Agents [119.43832001301041]
We take the first step to study adversarial attacks for embodied agents.
In particular, we generate adversarial examples, which exploit the interaction history in both the temporal and spatial dimensions.
Our perturbations have strong attack and generalization abilities.
arXiv Detail & Related papers (2020-05-19T01:38:47Z)