A Survey on Physical Adversarial Attack in Computer Vision
- URL: http://arxiv.org/abs/2209.14262v3
- Date: Mon, 18 Sep 2023 05:47:21 GMT
- Title: A Survey on Physical Adversarial Attack in Computer Vision
- Authors: Donghua Wang, Wen Yao, Tingsong Jiang, Guijian Tang, Xiaoqian Chen
- Abstract summary: Deep neural networks (DNNs) have been demonstrated to be vulnerable to adversarial examples crafted with tiny, malicious noise.
With the increasing deployment of DNN-based systems in the real world, strengthening the robustness of these systems has become urgent.
- Score: 7.053905447737444
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Over the past decade, deep learning has revolutionized conventional tasks
that relied on hand-crafted feature extraction, and its strong feature-learning
capability has led to substantial performance gains. However, deep neural
networks (DNNs) have been demonstrated to be vulnerable to adversarial examples
crafted with tiny, malicious noise that is imperceptible to human observers but
can make DNNs output wrong results. Existing adversarial attacks can be
categorized into digital and physical adversarial attacks. The former are
designed to pursue strong attack performance in lab environments and hardly
remain effective when applied to the physical world. In contrast, the latter
focus on developing physically deployable attacks and thus exhibit greater
robustness under complex physical environmental conditions. Recently, with the
increasing deployment of DNN-based systems in the real world, strengthening the
robustness of these systems has become urgent, and exhaustively exploring
physical adversarial attacks is a precondition for doing so. To this end, this
paper reviews the evolution of physical adversarial attacks against DNN-based
computer vision tasks, aiming to provide useful information for developing
stronger physical adversarial attacks. Specifically, we first propose a
taxonomy to categorize and group the current physical adversarial attacks.
Then, we discuss the existing physical attacks, focusing on the techniques for
improving their robustness under complex physical environmental conditions.
Finally, we discuss the open issues of current physical adversarial attacks
and suggest promising research directions.
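
To make the robustness techniques the survey covers concrete, below is a minimal, illustrative sketch of Expectation over Transformation (EOT), the widely used strategy of optimizing a perturbation under randomly sampled physical transformations (lighting changes, camera misalignment, etc.). This is a sketch under stated assumptions, not the survey's own implementation: the stand-in model, patch size and placement, transformation ranges, and optimizer settings are all illustrative choices.

```python
# Sketch of EOT-style adversarial patch optimization (assumptions noted inline).
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in classifier; assumption: any differentiable vision model would do.
model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 10),
)
model.eval()
for p in model.parameters():
    p.requires_grad_(False)  # only the patch is optimized

def random_physical_transform(x):
    """Simulate physical-world variation: lighting change and small shifts."""
    brightness = 1.0 + 0.2 * (torch.rand(1) - 0.5)
    shift = torch.randint(-2, 3, (2,))
    x = torch.roll(x * brightness, shifts=tuple(shift.tolist()), dims=(2, 3))
    return x.clamp(0, 1)

image = torch.rand(1, 3, 32, 32)                 # placeholder scene
target = torch.tensor([3])                       # attacker-chosen class
patch = torch.zeros(1, 3, 8, 8, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

for step in range(200):
    adv = image.clone()
    adv[:, :, 12:20, 12:20] = patch.sigmoid()    # paste patch, values in (0, 1)
    # EOT core idea: average the loss over sampled transformations so the
    # patch remains adversarial under varied physical conditions.
    loss = sum(F.cross_entropy(model(random_physical_transform(adv)), target)
               for _ in range(8)) / 8
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final targeted loss:", loss.item())
```

Averaging the loss over sampled transformations is what separates a physically robust attack from a purely digital one: a perturbation that only fools the model under a single fixed view typically breaks as soon as lighting or camera pose changes.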
Related papers
- Attack Anything: Blind DNNs via Universal Background Adversarial Attack [17.73886733971713]
It has been widely substantiated that deep neural networks (DNNs) are vulnerable to adversarial perturbations.
We propose a background adversarial attack framework to attack anything, by which the attack efficacy generalizes well across diverse objects, models, and tasks.
We conduct comprehensive and rigorous experiments in both digital and physical domains across various objects, models, and tasks, demonstrating that the proposed method can effectively attack anything.
arXiv Detail & Related papers (2024-08-17T12:46:53Z)
- Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook [2.1771693754641013]
We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristics and distinguishing features.
Our article delves into various physical adversarial attack methods, categorized according to their target tasks in different applications.
We assess the performance of these attack methods in terms of their effectiveness, stealthiness, and robustness.
arXiv Detail & Related papers (2023-08-11T15:02:19Z)
- State-of-the-art optical-based physical adversarial attacks for deep learning computer vision systems [3.3470481105928216]
Adversarial attacks can mislead deep learning models into making false predictions by implanting small perturbations into the original input that are imperceptible to the human eye.
Physical adversarial attacks are more realistic, as the perturbation is introduced to the input before it is captured and converted to a binary image.
This paper focuses on optical-based physical adversarial attack techniques for computer vision systems.
arXiv Detail & Related papers (2023-03-22T01:14:52Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Visually Adversarial Attacks and Defenses in the Physical World: A Survey [27.40548512511512]
The current adversarial attacks in computer vision can be divided into digital attacks and physical attacks according to their different attack forms.
In this paper, we present a survey of the current physical adversarial attacks and physical adversarial defenses in computer vision.
arXiv Detail & Related papers (2022-11-03T09:28:45Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns on the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- Real-World Adversarial Examples involving Makeup Application [58.731070632586594]
We propose a physical adversarial attack with the use of full-face makeup.
Our attack can effectively overcome manual errors in makeup application, such as color and position-related errors.
arXiv Detail & Related papers (2021-09-04T05:29:28Z)
- Spatiotemporal Attacks for Embodied Agents [119.43832001301041]
We take the first step to study adversarial attacks for embodied agents.
In particular, we generate adversarial examples, which exploit the interaction history in both the temporal and spatial dimensions.
Our perturbations have strong attack and generalization abilities.
arXiv Detail & Related papers (2020-05-19T01:38:47Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)