Recent Advances in Adversarial Training for Adversarial Robustness
- URL: http://arxiv.org/abs/2102.01356v1
- Date: Tue, 2 Feb 2021 07:10:22 GMT
- Title: Recent Advances in Adversarial Training for Adversarial Robustness
- Authors: Tao Bai, Jinqi Luo, Jun Zhao, Bihan Wen
- Abstract summary: Adversarial examples for fooling deep learning models have been studied for several years and are still a hot topic.
Adversarial training also receives enormous attention because of its effectiveness in defending against adversarial examples.
Many new theories and understandings of adversarial training have been proposed.
- Score: 22.436303311891276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial examples for fooling deep learning models have been studied for
several years and are still a hot topic. Adversarial training also receives
enormous attention because of its effectiveness in defending against
adversarial examples. However, adversarial training is not perfect, and many
open questions remain. Over the last few years, researchers in this community
have studied and discussed adversarial training from various aspects. Many new
theories and understandings of adversarial training have been proposed. In this
survey, we systematically review the recent progress on adversarial training
for the first time, categorized by different improvements. Then we discuss the
generalization problems in adversarial training from three perspectives.
Finally, we highlight the challenges which are not fully solved and present
potential future directions.
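The adversarial training the abstract refers to is a min-max procedure: an inner step crafts worst-case perturbations of the inputs, and an outer step updates the model on those perturbed inputs. The following is a minimal, self-contained sketch of that loop for a logistic-regression model with a one-step (FGSM-style) inner maximization; the model, data, and hyperparameters are illustrative assumptions, not taken from the survey.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def adversarial_train(X, y, eps=0.1, lr=0.5, epochs=200, seed=0):
    """Min-max training sketch: perturb each input by eps * sign(grad_x loss),
    then take a gradient step on the perturbed batch (all settings illustrative)."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.01, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        # Inner maximization: one gradient-sign (FGSM-style) step on the input.
        p = sigmoid(X @ w + b)
        grad_x = (p - y)[:, None] * w[None, :]   # d(BCE loss) / d(input)
        X_adv = X + eps * np.sign(grad_x)
        # Outer minimization: full-batch gradient step on the adversarial examples.
        p_adv = sigmoid(X_adv @ w + b)
        w -= lr * X_adv.T @ (p_adv - y) / len(y)
        b -= lr * np.mean(p_adv - y)
    return w, b

# Toy linearly separable data, used only to exercise the loop.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
w, b = adversarial_train(X, y)
acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy: {acc:.2f}")
```

In practice the inner step is usually a multi-step attack (e.g. PGD) on a deep network rather than a single signed-gradient step on a linear model, but the two-level structure is the same.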
Related papers
- Adversarial Training: A Survey [130.89534734092388]
Adversarial training (AT) refers to integrating adversarial examples into the training process.
Recent studies have demonstrated the effectiveness of AT in improving the robustness of deep neural networks against diverse adversarial attacks.
arXiv Detail & Related papers (2024-10-19T08:57:35Z)
- Adversarial Attacks and Defenses on 3D Point Cloud Classification: A Survey [28.21038594191455]
Despite remarkable achievements, deep learning algorithms are vulnerable to adversarial attacks.
This paper first introduces the principles and characteristics of adversarial attacks and summarizes and analyzes adversarial example generation methods.
It also provides an overview of defense strategies, organized into data-focused and model-focused methods.
arXiv Detail & Related papers (2023-07-01T11:46:36Z)
- Adversarial Pretraining of Self-Supervised Deep Networks: Past, Present and Future [132.34745793391303]
We review adversarial pretraining of self-supervised deep networks including both convolutional neural networks and vision transformers.
We find that existing approaches largely fall into two groups, which incorporate adversaries into pretraining at either the input level or the feature level.
arXiv Detail & Related papers (2022-10-23T13:14:06Z)
- Collaborative Adversarial Training [82.25340762659991]
We show that some collaborative examples, nearly perceptually indistinguishable from both adversarial and benign examples, can be utilized to enhance adversarial training.
A novel method called collaborative adversarial training (CoAT) is thus proposed to achieve a new state of the art.
arXiv Detail & Related papers (2022-05-23T09:41:41Z)
- A Survey of Robust Adversarial Training in Pattern Recognition: Fundamental, Theory, and Methodologies [26.544748192629367]
Recent studies show that neural networks may be easily fooled by certain imperceptibly perturbed input samples called adversarial examples.
This security vulnerability has motivated a large body of research in recent years, since the wide deployment of neural networks exposes them to real-world threats.
To address robustness against adversarial examples, particularly in pattern recognition, robust adversarial training has become a mainstream approach.
arXiv Detail & Related papers (2022-03-26T11:00:25Z)
- On the Impact of Hard Adversarial Instances on Overfitting in Adversarial Training [72.95029777394186]
Adversarial training is a popular method to robustify models against adversarial attacks.
We investigate this phenomenon from the perspective of training instances.
We show that the decay in generalization performance of adversarial training is a result of the model's attempt to fit hard adversarial instances.
arXiv Detail & Related papers (2021-12-14T12:19:24Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Towards Understanding Fast Adversarial Training [91.8060431517248]
We conduct experiments to understand the behavior of fast adversarial training.
We show the key to its success is the ability to recover from overfitting to weak attacks.
arXiv Detail & Related papers (2020-06-04T18:19:43Z)
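The "weak attacks" discussed above are typically one-step perturbations; robustness is usually judged against stronger multi-step attacks such as projected gradient descent (PGD), which repeatedly steps along the input gradient and projects back into an L-infinity ball. The sketch below illustrates that iterate-and-project pattern on a fixed linear scorer; the model, point, and attack budget are hypothetical, chosen only to make the example runnable.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, step=0.1, iters=10):
    """Multi-step attack sketch: ascend the loss gradient w.r.t. the input,
    projecting back into the L-infinity ball of radius eps each iteration."""
    x_adv = x.copy()
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad = (p - y) * w                        # d(BCE loss) / d(input)
        x_adv = x_adv + step * np.sign(grad)      # signed gradient ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the ball
    return x_adv

# Hypothetical fixed linear classifier and a correctly classified point.
w, b = np.array([1.0, 1.0]), 0.0
x, y = np.array([0.5, 0.5]), 1.0
x_adv = pgd_attack(x, y, w, b)
print(x @ w + b, x_adv @ w + b)  # the logit drops after the attack
```

A single signed-gradient step (iters=1) recovers the weak one-step attack; raising `iters` yields the stronger attack against which fast adversarial training can catastrophically overfit.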
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the accuracy of the information above and is not responsible for any consequences of its use.