A Survey of Robust Adversarial Training in Pattern Recognition:
Fundamental, Theory, and Methodologies
- URL: http://arxiv.org/abs/2203.14046v1
- Date: Sat, 26 Mar 2022 11:00:25 GMT
- Title: A Survey of Robust Adversarial Training in Pattern Recognition:
Fundamental, Theory, and Methodologies
- Authors: Zhuang Qian, Kaizhu Huang, Qiu-Feng Wang, Xu-Yao Zhang
- Abstract summary: Recent studies show that neural networks may be easily fooled by certain imperceptibly perturbed input samples called adversarial examples.
This security vulnerability has driven a large body of research in recent years, because the widespread deployment of neural networks exposes real-world systems to such threats.
To address robustness against adversarial examples, particularly in pattern recognition, robust adversarial training has become a mainstream approach.
- Score: 26.544748192629367
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In the last few decades, deep neural networks have achieved remarkable success in machine learning, computer vision, and pattern recognition. Recent studies, however, show that neural networks (both shallow and deep) may be easily fooled by certain imperceptibly perturbed input samples called adversarial examples. This security vulnerability has driven a large body of research in recent years, because the widespread deployment of neural networks exposes real-world systems to such threats. To address robustness against adversarial examples, particularly in pattern recognition, robust adversarial training has become a mainstream approach. Various ideas, methods, and applications have flourished in the field. Yet a deep understanding of adversarial training, including its characteristics, interpretations, theories, and the connections among different models, has remained elusive. In this paper, we present a comprehensive survey that offers a systematic and structured investigation of robust adversarial training in pattern recognition. We start with fundamentals, including the definition, notation, and properties of adversarial examples. We then introduce a unified theoretical framework for defending against adversarial samples, robust adversarial training, together with visualizations and interpretations of why adversarial training can lead to model robustness. Connections are also established between adversarial training and other traditional learning theories. After that, we summarize, review, and discuss various methodologies, covering adversarial attack and defense/training algorithms, in a structured way. Finally, we present analysis, outlook, and remarks on adversarial training.
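To make the surveyed defense concrete: adversarial training solves a min-max problem, min_theta E_(x,y) [ max_{||delta||_inf <= eps} L(f_theta(x + delta), y) ]. Below is a minimal PGD-style sketch of this objective in PyTorch. It is an illustrative rendering of the generic technique, not the survey's own code, and the hyperparameters (eps, alpha, steps) are assumed example values.

```python
# Minimal sketch of PGD-based adversarial training (in the spirit of the
# min-max formulation above). Illustrative only: the model, data, and
# hyperparameters are assumptions, not the survey's settings.
import torch
import torch.nn.functional as F

def pgd_perturb(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: search for delta with ||delta||_inf <= eps
    that maximizes the classification loss around x."""
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        # Ascend the loss, then project back onto the eps-ball and the
        # valid pixel range [0, 1].
        delta = (delta.detach() + alpha * grad.sign()).clamp(-eps, eps)
        delta = (x + delta).clamp(0, 1) - x
    return delta.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: update weights on the worst-case perturbed batch."""
    delta = pgd_perturb(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x + delta), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Training on such worst-case batches typically trades some clean accuracy for robust accuracy, a trade-off discussed at length in the adversarial training literature.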
Related papers
- Adversarial Training Can Provably Improve Robustness: Theoretical Analysis of Feature Learning Process Under Structured Data [38.44734564565478]
We provide a theoretical understanding of adversarial examples and adversarial training algorithms from the perspective of feature learning theory.
We show that adversarial training can provably strengthen robust feature learning and suppress non-robust feature learning.
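As an illustrative schematic of that claim (our own assumed notation, not necessarily the paper's structured-data model), toy analyses in this literature often decompose an input into a strong robust feature and a weak but predictive non-robust feature:

```latex
% Assumed toy decomposition: binary label y \in \{-1,+1\}, robust feature u,
% non-robust feature v, and noise \xi.
x = y\,u + y\,v + \xi, \qquad 2\|v\| \le \varepsilon \ll \|u\|
% The perturbation \delta = -2y\,v lies within the \varepsilon-ball yet flips
% the non-robust component to -y\,v; a classifier leaning on v is fooled,
% while adversarial training shifts weight toward the robust feature u.
```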
arXiv Detail & Related papers (2024-10-11T03:59:49Z)
- A Survey on Transferability of Adversarial Examples across Deep Neural Networks [53.04734042366312]
Adversarial examples can manipulate machine learning models into making erroneous predictions.
The transferability of adversarial examples enables black-box attacks that do not require detailed knowledge of the target model.
This survey explores the landscape of adversarial example transferability across deep neural networks.
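A minimal sketch of that transfer setting: an adversarial example is crafted in white-box fashion on a surrogate model (here with a single FGSM step, an assumed example choice) and then evaluated against an untouched black-box target. The models and eps value are illustrative placeholders.

```python
# Minimal sketch of a transfer-based black-box attack: craft on a white-box
# surrogate, then test against a target model never queried during crafting.
import torch
import torch.nn.functional as F

def fgsm_on_surrogate(surrogate, x, y, eps=8/255):
    """One-step sign-gradient attack using the surrogate's gradients."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def transfer_success_rate(surrogate, target, x, y):
    """Fraction of surrogate-crafted examples that also fool the target."""
    x_adv = fgsm_on_surrogate(surrogate, x, y)
    with torch.no_grad():
        fooled = target(x_adv).argmax(dim=1) != y
    return fooled.float().mean().item()
```

Higher transfer rates between architecturally similar surrogate and target models are a commonly reported pattern in this literature.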
arXiv Detail & Related papers (2023-10-26T17:45:26Z)
- A reading survey on adversarial machine learning: Adversarial attacks and their understanding [6.1678491628787455]
Adversarial machine learning exploits and studies the vulnerabilities that cause neural networks to misclassify near-original inputs.
A class of algorithms called adversarial attacks has been proposed to make neural networks misclassify inputs across various tasks and domains.
This article surveys existing adversarial attacks and their understanding from different perspectives.
arXiv Detail & Related papers (2023-08-07T07:37:26Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of recent advancements in adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- A Comprehensive Study on Robustness of Image Classification Models: Benchmarking and Rethinking [54.89987482509155]
The robustness of deep neural networks is usually lacking under adversarial examples, common corruptions, and distribution shifts.
We establish a comprehensive robustness benchmark called ARES-Bench on the image classification task.
By designing the training settings accordingly, we achieve new state-of-the-art adversarial robustness.
arXiv Detail & Related papers (2023-02-28T04:26:20Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Searching for an Effective Defender: Benchmarking Defense against Adversarial Word Substitution [83.84968082791444]
Deep neural networks are vulnerable to intentionally crafted adversarial examples.
Various methods have been proposed to defend against adversarial word-substitution attacks for neural NLP models.
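To illustrate the attack family being defended against, below is a toy greedy word-substitution attack. The synonyms table and score_fn (the model's confidence in the true label) are hypothetical placeholders, not a real NLP API or any method from the paper.

```python
# Toy greedy word-substitution attack (illustrative only). `score_fn` is a
# hypothetical callable returning the model's confidence in the true label;
# `synonyms` is a hypothetical word -> candidate-synonyms mapping.
def greedy_word_substitution(tokens, true_label, score_fn, synonyms):
    tokens = list(tokens)
    for i, word in enumerate(tokens):
        best_word = word
        best_score = score_fn(tokens, true_label)
        for cand in synonyms.get(word, []):
            trial = tokens[:i] + [cand] + tokens[i + 1:]
            score = score_fn(trial, true_label)
            if score < best_score:  # lower true-label confidence is "better"
                best_word, best_score = cand, score
        tokens[i] = best_word
    return tokens
```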
arXiv Detail & Related papers (2021-08-29T08:11:36Z)
- When and How to Fool Explainable Models (and Humans) with Adversarial Examples [1.439518478021091]
We explore the possibilities and limits of adversarial attacks for explainable machine learning models.
First, we extend the notion of adversarial examples to fit in explainable machine learning scenarios.
Next, we propose a comprehensive framework to study whether adversarial examples can be generated for explainable models.
arXiv Detail & Related papers (2021-07-05T11:20:55Z)
- Optimism in the Face of Adversity: Understanding and Improving Deep Learning through Adversarial Robustness [63.627760598441796]
We provide an in-depth review of the field of adversarial robustness in deep learning.
We highlight the intuitive connection between adversarial examples and the geometry of deep neural networks.
We provide an overview of the main emerging applications of adversarial robustness beyond security.
arXiv Detail & Related papers (2020-10-19T16:03:46Z)
- Detection Defense Against Adversarial Attacks with Saliency Map [7.736844355705379]
It is well established that neural networks are vulnerable to adversarial examples, which are almost imperceptible to human vision.
Existing defenses tend to harden models against adversarial attacks.
We propose a novel method that combines additional noise with an inconsistency strategy to detect adversarial examples.
arXiv Detail & Related papers (2020-09-06T13:57:17Z)
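As a rough illustration of that detection idea (not the authors' exact method), a detector can flag inputs whose predictions disagree across randomly noised copies, since adversarial examples often sit close to a decision boundary. The noise scale and threshold below are assumed values.

```python
# Rough sketch of inconsistency-based detection (illustrative, not the
# paper's exact method): flag inputs whose predictions flip under noise.
import torch

@torch.no_grad()
def looks_adversarial(model, x, sigma=0.05, trials=8, threshold=0.5):
    base_pred = model(x).argmax(dim=1)
    flip_rate = torch.zeros(x.size(0), device=x.device)
    for _ in range(trials):
        noisy = (x + sigma * torch.randn_like(x)).clamp(0, 1)
        flip_rate += (model(noisy).argmax(dim=1) != base_pred).float()
    # Frequent prediction flips under small noise suggest the input lies
    # near a decision boundary, a common signature of adversarial examples.
    return flip_rate / trials > threshold
```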