Physical Adversarial Attacks For Camera-based Smart Systems: Current
Trends, Categorization, Applications, Research Challenges, and Future Outlook
- URL: http://arxiv.org/abs/2308.06173v1
- Date: Fri, 11 Aug 2023 15:02:19 GMT
- Title: Physical Adversarial Attacks For Camera-based Smart Systems: Current
Trends, Categorization, Applications, Research Challenges, and Future Outlook
- Authors: Amira Guesmi, Muhammad Abdullah Hanif, Bassem Ouni, and Muhammad Shafique
- Abstract summary: We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristics and distinguishing features.
Our article delves into various physical adversarial attack methods, categorized according to their target tasks in different applications.
We assess the performance of these attack methods in terms of their effectiveness, stealthiness, and robustness.
- Score: 2.1771693754641013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we present a comprehensive survey of current
trends in physical adversarial attacks. We aim to provide a
thorough understanding of the concept of physical adversarial attacks,
analyzing their key characteristics and distinguishing features. Furthermore,
we explore the specific requirements and challenges associated with executing
attacks in the physical world. Our article delves into various physical
adversarial attack methods, categorized according to their target tasks in
different applications, including classification, detection, face recognition,
semantic segmentation and depth estimation. We assess the performance of these
attack methods in terms of their effectiveness, stealthiness, and robustness.
We examine how each technique strives to ensure the successful manipulation of
DNNs while mitigating the risk of detection and withstanding real-world
distortions. Lastly, we discuss the current challenges and outline potential
future research directions in the field of physical adversarial attacks. We
highlight the need for enhanced defense mechanisms, the exploration of novel
attack strategies, the evaluation of attacks in different application domains,
and the establishment of standardized benchmarks and evaluation criteria for
physical adversarial attacks. Through this comprehensive survey, we aim to
provide a valuable resource for researchers, practitioners, and policymakers to
gain a holistic understanding of physical adversarial attacks in computer
vision and facilitate the development of robust and secure DNN-based systems.
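To ground the robustness requirement discussed in the abstract, the sketch below shows Expectation Over Transformation (EOT) style patch optimization, the mechanism many surveyed physical attacks rely on to withstand real-world distortions: the attack loss is averaged over randomly sampled placements and viewpoints. This is a minimal, hypothetical sketch; the victim model (ResNet-18), patch size, target class, and transformation ranges are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of EOT-style adversarial patch training (illustrative only).
import torch
import torchvision.models as models
import torchvision.transforms.functional as TF

model = models.resnet18(weights="IMAGENET1K_V1").eval()   # stand-in victim model
for p in model.parameters():
    p.requires_grad_(False)

patch = torch.rand(3, 50, 50, requires_grad=True)   # patch pixels are the attack variables
optimizer = torch.optim.Adam([patch], lr=0.01)
target = torch.tensor([859])                         # hypothetical target class

def paste_randomly(image, patch):
    """Rotate the patch by a random angle and paste it at a random location,
    emulating viewpoint and placement variation in the physical world."""
    angle = float(torch.empty(1).uniform_(-20.0, 20.0))
    rotated = TF.rotate(patch, angle)                # differentiable rotation
    _, h, w = rotated.shape
    y = int(torch.randint(0, image.shape[1] - h, (1,)))
    x = int(torch.randint(0, image.shape[2] - w, (1,)))
    patched = image.clone()
    patched[:, y:y + h, x:x + w] = rotated           # gradients flow back into the patch
    return patched

scene = torch.rand(3, 224, 224)                      # stand-in background photo
for step in range(200):
    optimizer.zero_grad()
    # Average the loss over sampled transformations (the "E" in EOT) so the
    # patch stays adversarial under placement and viewpoint changes.
    loss = sum(
        torch.nn.functional.cross_entropy(
            model(paste_randomly(scene, patch).unsqueeze(0)), target)
        for _ in range(4)) / 4
    loss.backward()
    optimizer.step()
    patch.data.clamp_(0.0, 1.0)                      # keep pixel values physically printable
```

Attacks in this family commonly extend the loss with a non-printability penalty and similar color terms before the optimized patch is physically printed.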
Related papers
- A Survey on Physical Adversarial Attacks against Face Recognition Systems [12.056482296260095]
Face Recognition technology is increasingly prevalent in finance, the military, public safety, and everyday life.
Physical adversarial attacks targeting FR systems in real-world settings have attracted considerable research interest.
arXiv Detail & Related papers (2024-10-10T06:21:44Z)
- A Survey and Evaluation of Adversarial Attacks for Object Detection [11.48212060875543]
Deep learning models excel in various computer vision tasks but are susceptible to adversarial examples: subtle perturbations in input data that lead to incorrect predictions.
This vulnerability poses significant risks in safety-critical applications such as autonomous vehicles, security surveillance, and aircraft health monitoring.
arXiv Detail & Related papers (2024-08-04T05:22:08Z)
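As a concrete illustration of the "subtle perturbations" mentioned in the summary above, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), the canonical digital adversarial example. The victim model and the budget `epsilon` are assumptions for illustration, not taken from the surveyed paper.

```python
# Hedged sketch: FGSM, the classic one-step adversarial perturbation.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()   # assumed victim model
image = torch.rand(1, 3, 224, 224, requires_grad=True)    # stand-in input photo

with torch.no_grad():
    label = model(image).argmax(dim=1)                    # the clean prediction

loss = torch.nn.functional.cross_entropy(model(image), label)
loss.backward()                                           # gradient w.r.t. the input pixels

epsilon = 8 / 255                                         # small, near-imperceptible budget
adversarial = (image + epsilon * image.grad.sign()).clamp(0, 1).detach()
print(model(adversarial).argmax(dim=1))                   # often no longer equals `label`
```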
- Attention-Based Real-Time Defenses for Physical Adversarial Attacks in Vision Applications [58.06882713631082]
Deep neural networks exhibit excellent performance in computer vision tasks, but their vulnerability to real-world adversarial attacks raises serious security concerns.
This paper proposes an efficient attention-based defense mechanism that exploits adversarial channel-attention to quickly identify and track malicious objects in shallow network layers.
It also introduces an efficient multi-frame defense framework, validating its efficacy through extensive experiments aimed at evaluating both defense performance and computational cost.
arXiv Detail & Related papers (2023-11-19T00:47:17Z)
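The defense summarized above localizes malicious objects from shallow-layer attention. The sketch below is loosely in that spirit and is not the paper's actual algorithm: it scores coarse image regions by channel-averaged activation energy and flags statistical outliers, a common signature of adversarial patches in early feature maps.

```python
# Hedged sketch, loosely inspired by attention-based patch localization;
# NOT the surveyed paper's algorithm.
import torch
import torchvision.models as models

backbone = models.resnet18(weights="IMAGENET1K_V1").eval()
shallow = torch.nn.Sequential(backbone.conv1, backbone.bn1,    # only the first few
                              backbone.relu, backbone.maxpool, # layers: patches leave
                              backbone.layer1)                 # strong traces here

def suspicious_region_mask(image, z_thresh=3.0):
    """Return a coarse mask of regions whose activation energy is an outlier."""
    with torch.no_grad():
        feats = shallow(image)                   # (N, C, H', W') shallow features
        energy = feats.abs().mean(dim=1)         # channel-averaged attention map
        mu = energy.mean(dim=(1, 2), keepdim=True)
        sigma = energy.std(dim=(1, 2), keepdim=True)
        return (energy - mu) / sigma > z_thresh  # True where activation is anomalous

mask = suspicious_region_mask(torch.rand(1, 3, 224, 224))
print(mask.float().mean().item())                # fraction of the frame flagged
```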
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations [54.39726653562144]
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
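A hedged sketch of the extraction idea described above, with every specific (model, perturbation, annotation mask) a stand-in assumption: keep only the human-annotated part of a perturbation and test whether that part alone still changes the model's prediction.

```python
# Hedged sketch: does the annotated slice of a perturbation suffice to attack?
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()
image = torch.rand(1, 3, 224, 224)                # clean input (stand-in)
delta = 0.03 * torch.randn(1, 3, 224, 224)        # full perturbation (stand-in)

annotation = torch.zeros(1, 1, 224, 224)          # hypothetical pixel-level annotation
annotation[..., 80:160, 80:160] = 1.0             # region marked as a salient feature

masked_delta = delta * annotation                 # keep only the annotated feature
with torch.no_grad():
    clean_pred = model(image).argmax(1)
    masked_pred = model((image + masked_delta).clamp(0, 1)).argmax(1)
# Differing labels would mean the extracted feature alone compromises the model.
print(clean_pred.item(), masked_pred.item())
```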
- Physical Adversarial Attacks for Surveillance: A Survey [40.81031907691243]
This paper reviews recent attempts and findings in learning and designing physical adversarial attacks for surveillance applications.
In particular, we propose a framework to analyze physical adversarial attacks and provide a comprehensive survey of physical adversarial attacks on four key surveillance tasks.
The insights in this paper present an important step in building resilience within surveillance systems to physical adversarial attacks.
arXiv Detail & Related papers (2023-05-01T20:19:59Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
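Of the attack avenues listed above, decision-based attacks are the easiest to illustrate because they query only the model's top-1 label and never its gradients. Below is a minimal random-walk sketch in the spirit of the Boundary Attack; the victim model, step sizes, and query budget are illustrative assumptions.

```python
# Hedged sketch of a decision-based (label-only) attack, Boundary-Attack style.
import torch
import torchvision.models as models

model = models.resnet18(weights="IMAGENET1K_V1").eval()

def top1(x):
    with torch.no_grad():
        return model(x).argmax(1).item()          # label-only oracle: no gradients

source = torch.rand(1, 3, 224, 224)               # image whose label we want to flip
source_label = top1(source)

adv = torch.rand(1, 3, 224, 224)                  # random starting point...
while top1(adv) == source_label:                  # ...that must already be misclassified
    adv = torch.rand(1, 3, 224, 224)

for step in range(500):
    # Drift toward the source image plus a small random perturbation, keeping
    # only moves that stay on the adversarial side of the decision boundary.
    candidate = (adv + 0.01 * (source - adv)
                 + 0.005 * torch.randn_like(adv)).clamp(0, 1)
    if top1(candidate) != source_label:
        adv = candidate
print((adv - source).norm().item())               # distance to the source image
```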
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns about the security of real-world face recognition.
We study sticker-based physical attacks on face recognition to better understand its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to User and Entity Behaviour Analytics (UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)