How Deep Learning Sees the World: A Survey on Adversarial Attacks &
Defenses
- URL: http://arxiv.org/abs/2305.10862v1
- Date: Thu, 18 May 2023 10:33:28 GMT
- Title: How Deep Learning Sees the World: A Survey on Adversarial Attacks &
Defenses
- Authors: Joana C. Costa and Tiago Roxo and Hugo Proença and Pedro R. M. Inácio
- Abstract summary: This paper compiles the most recent adversarial attacks, grouped by the attacker capacity, and modern defenses clustered by protection strategies.
We also present the new advances regarding Vision Transformers, summarize the datasets and metrics used in the context of adversarial settings, and compare the state-of-the-art results under different attacks, finishing with the identification of open issues.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep Learning is currently used to perform multiple tasks, such as object
recognition, face recognition, and natural language processing. However, Deep
Neural Networks (DNNs) are vulnerable to perturbations that alter the network
prediction (adversarial examples), raising concerns regarding their usage in
critical areas, such as self-driving vehicles, malware detection, and
healthcare. This paper compiles the most recent adversarial attacks, grouped by
the attacker capacity, and modern defenses clustered by protection strategies.
We also present the new advances regarding Vision Transformers, summarize the
datasets and metrics used in the context of adversarial settings, and compare
the state-of-the-art results under different attacks, finishing with the
identification of open issues.
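To make the vulnerability concrete, below is a minimal sketch of the classic Fast Gradient Sign Method (FGSM), one of the attacks surveyed in this literature. This is an illustrative PyTorch example, not code from the paper; `model`, `image`, `label`, and `epsilon` are assumed placeholders for a trained classifier, an input normalized to [0, 1], its true class, and the perturbation budget.

```python
# Minimal FGSM sketch (hypothetical names: model, image, label, epsilon).
import torch
import torch.nn.functional as F

def fgsm_attack(model, image, label, epsilon=0.03):
    """Return a perturbed copy of `image` that increases the model's loss."""
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step in the sign of the gradient: an L-infinity-bounded perturbation
    # that is often imperceptible yet changes the prediction.
    adv = image + epsilon * image.grad.sign()
    return adv.clamp(0.0, 1.0).detach()
```

Even with small budgets (e.g., epsilon = 8/255), such single-step perturbations routinely flip the predictions of undefended image classifiers.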
Related papers
- Adversarial Attacks of Vision Tasks in the Past 10 Years: A Survey
Adversarial attacks pose significant security threats during machine learning inference.
Existing reviews often focus on attack classifications and lack comprehensive, in-depth analysis.
This article addresses these gaps by offering a thorough summary of traditional adversarial attacks and those against large vision-language models (LVLMs).
arXiv Detail & Related papers (2024-10-31T07:22:51Z)
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks that circumvent the need for detailed knowledge of the target model (a minimal transfer-attack sketch follows this list).
This survey explores the landscape of adversarial example transferability.
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- Investigating Human-Identifiable Features Hidden in Adversarial Perturbations
Our study explores up to five attack algorithms across three datasets.
We identify human-identifiable features in adversarial perturbations.
Using pixel-level annotations, we extract such features and demonstrate their ability to compromise target models.
arXiv Detail & Related papers (2023-09-28T22:31:29Z)
- A reading survey on adversarial machine learning: Adversarial attacks and their understanding
Adversarial Machine Learning exploits and studies the vulnerabilities that cause neural networks to misclassify inputs that are close to the originals.
A class of algorithms called adversarial attacks has been proposed to make neural networks misclassify across various tasks and domains.
This article provides a survey of existing adversarial attacks and their understanding based on different perspectives.
arXiv Detail & Related papers (2023-08-07T07:37:26Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Deviations in Representations Induced by Adversarial Attacks
Research has shown that deep learning models are vulnerable to adversarial attacks.
This finding brought about a new direction in research, whereby algorithms were developed to attack and defend vulnerable networks.
We present a method for measuring and analyzing the deviations in representations induced by adversarial attacks.
arXiv Detail & Related papers (2022-11-07T17:40:08Z)
- Physical Adversarial Attack meets Computer Vision: A Decade Survey
This paper presents a comprehensive overview of physical adversarial attacks.
We take the first step to systematically evaluate the performance of physical adversarial attacks.
Our proposed evaluation metric, hiPAA, comprises six perspectives.
arXiv Detail & Related papers (2022-09-30T01:59:53Z)
- Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution
Deep neural networks are vulnerable to intentionally crafted adversarial examples.
Various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
arXiv Detail & Related papers (2021-08-29T08:11:36Z)
- Adversarial Machine Learning for Cybersecurity and Computer Vision: Current Developments and Challenges
Research in adversarial machine learning addresses a significant threat to the wide application of machine learning techniques.
We first discuss three main categories of attacks against machine learning techniques -- poisoning attacks, evasion attacks, and privacy attacks.
We observe that adversarial samples in cybersecurity and computer vision are fundamentally different.
arXiv Detail & Related papers (2021-06-30T03:05:58Z)
- Explainable Adversarial Attacks in Deep Neural Networks Using Activation Profiles
This paper presents a visual framework to investigate neural network models subjected to adversarial examples.
We show how observing these elements can quickly pinpoint exploited areas in a model.
arXiv Detail & Related papers (2021-03-18T13:04:21Z)
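Below is the transfer-attack sketch referenced in the transferability entry above: a hedged illustration, assuming the attacker holds a white-box surrogate model and has only prediction access to the black-box target. The names `surrogate`, `target`, `image`, and `label` are hypothetical placeholders, and a single FGSM step stands in for stronger attacks.

```python
# Transfer-based black-box attack sketch (hypothetical names throughout).
import torch
import torch.nn.functional as F

def transfer_attack(surrogate, target, image, label, epsilon=0.03):
    """Craft a perturbation on the surrogate, then test it on the target."""
    image = image.clone().detach().requires_grad_(True)
    # White-box step on the surrogate only (single-step FGSM for brevity).
    loss = F.cross_entropy(surrogate(image), label)
    loss.backward()
    adv = (image + epsilon * image.grad.sign()).clamp(0.0, 1.0).detach()
    # The target is never differentiated: only its predictions are queried.
    with torch.no_grad():
        fooled = target(adv).argmax(dim=1) != label
    return adv, fooled
```

In practice, iterative and momentum-based variants transfer more reliably than this single-step version; the sketch only isolates the core idea that gradients from one model can suffice to attack another.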
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.