The Vulnerability of the Neural Networks Against Adversarial Examples in
Deep Learning Algorithms
- URL: http://arxiv.org/abs/2011.05976v2
- Date: Tue, 17 Nov 2020 12:57:38 GMT
- Title: The Vulnerability of the Neural Networks Against Adversarial Examples in
Deep Learning Algorithms
- Authors: Rui Zhao
- Abstract summary: This paper introduces the problem of adversarial examples in deep learning, surveys the existing black-box and white-box attack and defense methods, and classifies them.
It briefly describes recent applications of adversarial examples in different scenarios, compares several defense techniques against adversarial examples, and finally summarizes open problems in this research field and prospects for its future development.
- Score: 8.662390869320323
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: With further development in fields such as computer vision, network
security, and natural language processing, deep learning technology has
gradually exposed certain security risks. Existing deep learning algorithms
cannot effectively describe the essential characteristics of data, which leaves
them unable to give correct results in the face of malicious input. Based on
the current security threats faced by deep learning, this paper introduces the
problem of adversarial examples in deep learning, surveys the existing
black-box and white-box attack and defense methods, and classifies them. It
briefly describes recent applications of adversarial examples in different
scenarios, compares several defense techniques against adversarial examples,
and finally summarizes open problems in this research field and prospects for
its future development. This paper introduces the common white-box attack
methods in detail and further compares the similarities and differences
between black-box and white-box attacks. Correspondingly, the author also
introduces the defense methods and analyzes their performance against
black-box and white-box attacks.
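For concreteness, the snippet below is a minimal sketch of the Fast Gradient Sign Method (FGSM), one of the common white-box attacks of the kind surveyed in this paper; the PyTorch `model`, the cross-entropy loss, and the perturbation budget `epsilon` are assumptions made for the example, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft FGSM adversarial examples for a batch x with true labels y.

    White-box setting: the attacker reads the gradient of the loss w.r.t. x.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # One signed gradient step that increases the loss, clipped to valid pixels.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Black-box attacks pursue the same goal without access to `x_adv.grad`, typically by querying the model or transferring examples crafted on a substitute model, while defenses such as adversarial training retrain the model on examples like `x_adv`.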
Related papers
- Prompt Injection Attacks in Defended Systems [0.0]
Black-box attacks can embed hidden malicious features into large language models.
This paper investigates methods for black-box attacks on large language models with a three-tiered defense mechanism.
arXiv Detail & Related papers (2024-06-20T07:13:25Z)
- Topological safeguard for evasion attack interpreting the neural networks' behavior [0.0]
In this work, a novel detector of evasion attacks is developed.
It focuses on the information contained in the neuron activations produced by the model when an input sample is injected.
For this purpose, extensive data preprocessing is required to feed all of this information into the detector.
arXiv Detail & Related papers (2024-02-12T08:39:40Z)
- Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey [28.21038594191455]
Despite remarkable achievements, deep learning algorithms are vulnerable to adversarial attacks.
This paper first introduces the principles and characteristics of adversarial attacks and summarizes and analyzes adversarial example generation methods.
It also provides an overview of defense strategies, organized into data-focused and model-focused methods.
arXiv Detail & Related papers (2023-07-01T11:46:36Z)
- How Deep Learning Sees the World: A Survey on Adversarial Attacks & Defenses [0.0]
This paper compiles the most recent adversarial attacks, grouped by the attacker capacity, and modern defenses clustered by protection strategies.
We also present the new advances regarding Vision Transformers, summarize the datasets and metrics used in the context of adversarial settings, and compare the state-of-the-art results under different attacks, finishing with the identification of open issues.
arXiv Detail & Related papers (2023-05-18T10:33:28Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- A Review of Adversarial Attack and Defense for Classification Methods [78.50824774203495]
This paper focuses on the generation and guarding of adversarial examples.
It is the hope of the authors that this paper will encourage more statisticians to work on this important and exciting field of generating and defending against adversarial examples.
arXiv Detail & Related papers (2021-11-18T22:13:43Z)
- Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution [83.84968082791444]
Deep neural networks are vulnerable to intentionally crafted adversarial examples.
Various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
arXiv Detail & Related papers (2021-08-29T08:11:36Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to harden the model against different unsafe inputs (see the coverage sketch after this list).
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- MixNet for Generalized Face Presentation Attack Detection [63.35297510471997]
We have proposed a deep learning-based network termed MixNet to detect presentation attacks.
The proposed algorithm utilizes state-of-the-art convolutional neural network architectures and learns the feature mapping for each attack category.
arXiv Detail & Related papers (2020-10-25T23:01:13Z)
- Adversarial Machine Learning Attacks and Defense Methods in the Cyber Security Domain [58.30296637276011]
This paper summarizes the latest research on adversarial attacks against security solutions based on machine learning techniques.
It is the first to discuss the unique challenges of implementing end-to-end adversarial attacks in the cyber security domain.
arXiv Detail & Related papers (2020-07-05T18:22:40Z)
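As a rough illustration of the coverage paradigm mentioned in the "Increasing the Confidence of Deep Neural Networks by Coverage Analysis" entry above, the sketch below computes a simple per-neuron activation range on clean data and flags inputs whose activations fall outside that range; the hook placement, the chosen `layer`, and the scoring rule are assumptions for the example, not details taken from that paper.

```python
import torch

def activation_ranges(model, layer, clean_loader):
    """Record per-neuron min/max activations of `layer` over clean data."""
    acts = []
    handle = layer.register_forward_hook(
        lambda mod, inp, out: acts.append(out.detach().flatten(1)))
    with torch.no_grad():
        for x, _ in clean_loader:
            model(x)
    handle.remove()
    all_acts = torch.cat(acts, dim=0)
    return all_acts.min(dim=0).values, all_acts.max(dim=0).values

def out_of_range_score(model, layer, x, lo, hi):
    """Fraction of neurons whose activation leaves the clean-data range."""
    acts = []
    handle = layer.register_forward_hook(
        lambda mod, inp, out: acts.append(out.detach().flatten(1)))
    with torch.no_grad():
        model(x)
    handle.remove()
    outside = (acts[0] < lo) | (acts[0] > hi)
    return outside.float().mean().item()  # higher -> more suspicious input
```

A detector in this spirit would flag an input when `out_of_range_score` exceeds a threshold calibrated on held-out clean data; the monitoring architecture in the cited paper is more elaborate than this single criterion.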