Adversarial Examples in the Physical World: A Survey
- URL: http://arxiv.org/abs/2311.01473v2
- Date: Fri, 19 Jul 2024 04:06:19 GMT
- Title: Adversarial Examples in the Physical World: A Survey
- Authors: Jiakai Wang, Xianglong Liu, Jin Hu, Donghua Wang, Siyang Wu, Tingsong Jiang, Yuanfang Guo, Aishan Liu, Jiantao Zhou
- Abstract summary: Deep neural networks (DNNs) have demonstrated high vulnerability to adversarial examples, raising broad security concerns.
Physical adversarial examples (PAEs) present significant challenges and safety concerns.
We provide a comprehensive analysis and classification framework for PAEs based on their specific characteristics.
We aim to provide a fresh, thorough, and systematic understanding of PAEs, thereby promoting the development of robust adversarial learning.
- Score: 45.71213243349657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deep neural networks (DNNs) have demonstrated high vulnerability to adversarial examples, raising broad security concerns about their applications. Besides the attacks in the digital world, the practical implications of adversarial examples in the physical world present significant challenges and safety concerns. However, current research on physical adversarial examples (PAEs) lacks a comprehensive understanding of their unique characteristics, leading to limited significance and understanding. In this paper, we address this gap by thoroughly examining the characteristics of PAEs within a practical workflow encompassing training, manufacturing, and re-sampling processes. By analyzing the links between physical adversarial attacks, we identify manufacturing and re-sampling as the primary sources of distinct attributes and particularities in PAEs. Leveraging this knowledge, we develop a comprehensive analysis and classification framework for PAEs based on their specific characteristics, covering over 100 studies on physical-world adversarial examples. Furthermore, we investigate defense strategies against PAEs and identify open challenges and opportunities for future research. We aim to provide a fresh, thorough, and systematic understanding of PAEs, thereby promoting the development of robust adversarial learning and its application in open-world scenarios. We also provide the community with a continuously updated list of physical-world adversarial example resources, including papers, code, etc., within the proposed framework.
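The training, manufacturing, and re-sampling workflow the abstract describes is often approximated in code with Expectation over Transformation (EOT): optimize a perturbation while randomly simulating the distortions that printing and re-capture would introduce. The sketch below is a minimal illustration of that idea, not this survey's or any surveyed paper's method; the tiny model, the transformation ranges, and the fixed patch location are all illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in victim classifier; any differentiable image model would do.
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 10),
)
model.eval()

# The adversarial patch is the "trained" artifact that would later be printed.
patch = torch.rand(3, 16, 16, requires_grad=True)
opt = torch.optim.Adam([patch], lr=0.05)

def random_physical_transform(img):
    """Crude stand-ins for manufacturing and re-sampling distortions:
    random brightness/contrast (printing) plus sensor noise (re-capture)."""
    img = img * (0.7 + 0.6 * torch.rand(1)) + 0.2 * (torch.rand(1) - 0.5)
    img = img + 0.02 * torch.randn_like(img)
    return img.clamp(0, 1)

target_class = torch.tensor([3])          # attacker-chosen label (arbitrary)
for step in range(100):
    scene = torch.rand(1, 3, 64, 64)      # placeholder for a real photo batch
    x = scene.clone()
    x[:, :, 10:26, 10:26] = patch         # paste the patch into the scene
    x = random_physical_transform(x)      # EOT: sample one transform per step
    loss = F.cross_entropy(model(x), target_class)
    opt.zero_grad()
    loss.backward()
    opt.step()
    patch.data.clamp_(0, 1)               # keep the patch printable in [0, 1]
```

A real pipeline would average over many sampled transforms per step and include geometric warps and printer color calibration, which is where the manufacturing and re-sampling particularities the survey emphasizes enter the optimization.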
Related papers
- Anomaly Detection and Generation with Diffusion Models: A Survey [51.61574868316922]
Anomaly detection (AD) plays a pivotal role across diverse domains, including cybersecurity, finance, healthcare, and industrial manufacturing. Recent advancements in deep learning, specifically diffusion models (DMs), have sparked significant interest. This survey aims to guide researchers and practitioners in leveraging DMs for innovative AD solutions across diverse applications.
arXiv Detail & Related papers (2025-06-11T03:29:18Z)
- Toward Spiking Neural Network Local Learning Modules Resistant to Adversarial Attacks [2.3312335998006306]
Recent research has shown the vulnerability of Spiking Neural Networks (SNNs) to adversarial examples.
We introduce a hybrid adversarial attack paradigm that leverages the transferability of adversarial instances.
The proposed hybrid approach demonstrates superior performance, outperforming existing adversarial attack methods.
arXiv Detail & Related papers (2025-04-11T18:07:59Z)
- A Survey of Model Extraction Attacks and Defenses in Distributed Computing Environments [55.60375624503877]
Model Extraction Attacks (MEAs) threaten modern machine learning systems by enabling adversaries to steal models, exposing intellectual property and training data.
This survey is motivated by the urgent need to understand how the unique characteristics of cloud, edge, and federated deployments shape attack vectors and defense requirements.
We systematically examine the evolution of attack methodologies and defense mechanisms across these environments, demonstrating how environmental factors influence security strategies in critical sectors such as autonomous vehicles, healthcare, and financial services.
arXiv Detail & Related papers (2025-02-22T03:46:50Z) - Comprehensive Survey on Adversarial Examples in Cybersecurity: Impacts, Challenges, and Mitigation Strategies [4.606106768645647]
Adversarial examples (AEs) pose a critical challenge to the robustness and reliability of deep learning-based systems.
This paper provides a comprehensive review of the impact of AE attacks on key cybersecurity applications.
We explore recent advancements in defense mechanisms, including gradient masking, adversarial training, and detection techniques.
arXiv Detail & Related papers (2024-12-16T01:54:07Z) - Membership Inference Attacks and Defenses in Federated Learning: A Survey [25.581491871183815]
- Membership Inference Attacks and Defenses in Federated Learning: A Survey [25.581491871183815]
Federated learning is a decentralized machine learning approach where clients train models locally and share model updates to develop a global model.
One of the key threats is membership inference attacks, which target clients' privacy by determining whether a specific example is part of the training set.
These attacks can compromise sensitive information in real-world applications, such as medical diagnoses within a healthcare system.
arXiv Detail & Related papers (2024-12-09T02:39:58Z) - Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
- Attack Atlas: A Practitioner's Perspective on Challenges and Pitfalls in Red Teaming GenAI [52.138044013005]
As generative AI, particularly large language models (LLMs), becomes increasingly integrated into production applications, new attack surfaces and vulnerabilities emerge, putting a focus on adversarial threats in natural language and multi-modal systems.
Red-teaming has gained importance in proactively identifying weaknesses in these systems, while blue-teaming works to protect against such adversarial attacks.
This work aims to bridge the gap between academic insights and practical security measures for the protection of generative AI systems.
arXiv Detail & Related papers (2024-09-23T10:18:10Z) - Towards Few-Shot Learning in the Open World: A Review and Beyond [52.41344813375177]
Few-shot learning (FSL) aims to mimic human intelligence by enabling strong generalization and transferability from only a few examples.
This paper presents a review of recent advancements designed to adapt FSL for use in open-world settings.
We categorize existing methods into three distinct types of open-world few-shot learning: those involving varying instances, varying classes, and varying distributions.
arXiv Detail & Related papers (2024-08-19T06:23:21Z) - A Survey of Defenses against AI-generated Visual Media: Detection, Disruption, and Authentication [15.879482578829489]
Deep generative models have demonstrated impressive performance in various computer vision applications.
These models may be used for malicious purposes, such as misinformation, deception, and copyright violation.
This paper provides a systematic and timely review of research efforts on defenses against AI-generated visual media.
arXiv Detail & Related papers (2024-07-15T09:46:02Z) - A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks which circumvent the need for detailed knowledge of the target model.
This survey explores the landscape of adversarial example transferability.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
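The black-box recipe summarized above is short enough to sketch: craft the adversarial example on a surrogate model the attacker controls, then submit it to the unseen target. The one-step FGSM version below is a minimal illustration; both models are toy stand-ins (real attacks use stronger surrogates and iterative optimizers), and with random weights the printed transfer rate will hover near chance.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fgsm(model, x, y, eps=8 / 255):
    """One-step attack crafted entirely on the white-box surrogate."""
    x = x.clone().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return (x + eps * grad.sign()).clamp(0, 1).detach()

surrogate = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # known
target = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))     # unseen

x = torch.rand(8, 3, 32, 32)
y = torch.randint(0, 10, (8,))
x_adv = fgsm(surrogate, x, y)             # no target gradients are needed

with torch.no_grad():                     # the only access to the target
    fooled = (target(x_adv).argmax(1) != y).float().mean().item()
print(f"transfer fooling rate: {fooled:.2f}")
```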
- Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks [5.860289498416911]
Large Language Models (LLMs) are swiftly advancing in architecture and capability.
As they integrate more deeply into complex systems, the urgency to scrutinize their security properties grows.
This paper surveys research in the emerging interdisciplinary field of adversarial attacks on LLMs.
arXiv Detail & Related papers (2023-10-16T21:37:24Z)
- Physical Adversarial Attacks For Camera-based Smart Systems: Current Trends, Categorization, Applications, Research Challenges, and Future Outlook [2.1771693754641013]
We aim to provide a thorough understanding of the concept of physical adversarial attacks, analyzing their key characteristics and distinguishing features.
Our article delves into various physical adversarial attack methods, categorized according to their target tasks in different applications.
We assess the performance of these attack methods in terms of their effectiveness, stealthiness, and robustness.
arXiv Detail & Related papers (2023-08-11T15:02:19Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey [55.38113802311365]
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- A Review of Adversarial Attack and Defense for Classification Methods [78.50824774203495]
This paper focuses on the generation and guarding of adversarial examples.
It is the hope of the authors that this paper will encourage more statisticians to work on this important and exciting field of generating and defending against adversarial examples.
arXiv Detail & Related papers (2021-11-18T22:13:43Z)
- SoK: Certified Robustness for Deep Neural Networks [13.10665264010575]
Recent studies have shown that deep neural networks (DNNs) are vulnerable to adversarial attacks.
In this paper, we systematize certifiably robust approaches and related practical and theoretical implications.
We also provide the first comprehensive benchmark on existing robustness verification and training approaches on different datasets.
arXiv Detail & Related papers (2020-09-09T07:00:55Z)
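As one concrete instance of the certified approaches that SoK systematizes, randomized smoothing certifies an L2 radius by voting over Gaussian-noised copies of the input. The sketch below follows the Cohen et al. formulation but, for brevity, uses a plug-in probability estimate where a faithful implementation would use a binomial lower confidence bound; the model and sigma are illustrative assumptions.

```python
import torch
import torch.nn as nn
from scipy.stats import norm

@torch.no_grad()
def smoothed_predict(model, x, sigma=0.25, n=1000, num_classes=10):
    """Monte Carlo estimate of the smoothed classifier
    g(x) = argmax_c P(f(x + noise) = c), with a certified L2 radius."""
    noisy = x + sigma * torch.randn(n, *x.shape)
    votes = model(noisy).argmax(dim=1)
    counts = torch.bincount(votes, minlength=num_classes)
    top = counts.argmax().item()
    p_hat = min(counts[top].item() / n, 1 - 1e-6)   # plug-in estimate
    # Simplified: a rigorous version lower-bounds p_hat with a binomial CI.
    radius = sigma * norm.ppf(p_hat) if p_hat > 0.5 else 0.0
    return top, radius

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))  # toy stand-in
x = torch.rand(3, 32, 32)
cls, radius = smoothed_predict(model, x)
print(f"prediction {cls}, certified L2 radius {radius:.3f}")
```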
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.