Adversarial Pretraining of Self-Supervised Deep Networks: Past, Present and Future
- URL: http://arxiv.org/abs/2210.13463v1
- Date: Sun, 23 Oct 2022 13:14:06 GMT
- Title: Adversarial Pretraining of Self-Supervised Deep Networks: Past, Present and Future
- Authors: Guo-Jun Qi and Mubarak Shah
- Abstract summary: We review adversarial pretraining of self-supervised deep networks including both convolutional neural networks and vision transformers.
To incorporate adversaries into pretraining models at either the input or feature level, existing approaches largely fall into two groups.
- Score: 132.34745793391303
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: In this paper, we review adversarial pretraining of self-supervised deep
networks including both convolutional neural networks and vision transformers.
Unlike adversarial training, which has access to labeled examples, adversarial
pretraining is complicated because it only has access to unlabeled examples. To
incorporate adversaries into pretraining models on either input or feature
level, we find that existing approaches are largely categorized into two
groups: memory-free instance-wise attacks imposing worst-case perturbations on
individual examples, and memory-based adversaries shared across examples over
iterations. In particular, we review several representative adversarial
pretraining models based on Contrastive Learning (CL) and Masked Image Modeling
(MIM), respectively, two popular self-supervised pretraining methods in the
literature. We also review miscellaneous issues about computing overheads,
input-/feature-level adversaries, as well as other adversarial pretraining
approaches beyond the above two groups. Finally, we discuss emerging trends and
future directions about the relations between adversarial and cooperative
pretraining, unifying adversarial CL and MIM pretraining, and the trade-off
between accuracy and robustness in adversarial pretraining.
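The "memory-free instance-wise attack" named above can be illustrated as a one-step sign-gradient (FGSM-style) perturbation of a single unlabeled example. The linear "encoder" `w` and the negative-inner-product loss below are toy assumptions for illustration, not the formulation of any reviewed paper; the sketch only shows the shape of the inner maximization:

```python
import numpy as np

def instance_wise_attack(x, w, eps):
    """One-step sign-gradient (FGSM-style) perturbation of one example.

    The "encoder" is a fixed linear map `w`, and the self-supervised loss is
    the negative inner product between the perturbed embedding and the clean
    embedding -- both are illustrative assumptions. The perturbation ascends
    that loss, pushing the perturbed view's embedding away from the clean
    view's, within an L-infinity ball of radius `eps`.
    """
    z_clean = w @ x
    # Gradient w.r.t. delta of  L(delta) = -(w @ (x + delta)) . z_clean
    grad = -(w.T @ z_clean)
    return eps * np.sign(grad)

# Toy usage on random data.
rng = np.random.default_rng(0)
w = rng.normal(size=(4, 8))   # fixed "encoder" weights
x = rng.normal(size=8)        # one unlabeled example
delta = instance_wise_attack(x, w, eps=0.1)
sim_clean = (w @ x) @ (w @ x)          # agreement of the clean view with itself
sim_adv = (w @ (x + delta)) @ (w @ x)  # agreement after the attack
assert sim_adv < sim_clean             # the attack reduced the agreement
```

Note the attack needs no labels: the "target" it perturbs away from is the example's own clean embedding, which is what makes this style of adversary usable during pretraining.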
Related papers
- Protecting Feed-Forward Networks from Adversarial Attacks Using Predictive Coding [0.20718016474717196]
An adversarial example is a modified input image designed to cause a Machine Learning (ML) model to make a mistake.
This study presents a practical and effective solution -- using predictive coding networks (PCnets) as an auxiliary step for adversarial defence.
arXiv Detail & Related papers (2024-10-31T21:38:05Z)
- Fast Propagation is Better: Accelerating Single-Step Adversarial Training via Sampling Subnetworks [69.54774045493227]
A drawback of adversarial training is the computational overhead introduced by the generation of adversarial examples.
We propose to exploit the interior building blocks of the model to improve efficiency.
Compared with previous methods, our method not only reduces the training cost but also achieves better model robustness.
arXiv Detail & Related papers (2023-10-24T01:36:20Z)
- Latent Boundary-guided Adversarial Training [61.43040235982727]
Adversarial training, which injects adversarial examples into model training, is proven to be the most effective defense strategy.
We propose a novel adversarial training framework called LAtent bounDary-guided aDvErsarial tRaining.
arXiv Detail & Related papers (2022-06-08T07:40:55Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Adversarial Momentum-Contrastive Pre-Training [20.336258934272102]
Adversarial self-supervised pre-training helps extract representations that are invariant under both data augmentations and adversarial perturbations.
This paper proposes a novel adversarial momentum-contrastive (AMOC) pre-training approach.
Compared with the existing self-supervised pre-training approaches, AMOC can use a smaller batch size and fewer training epochs but learn more robust features.
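The momentum ("memory"-style) key encoder that momentum-contrastive methods such as AMOC build on is, in essence, an exponential moving average of the query encoder's parameters. A minimal sketch, assuming the standard MoCo-style EMA update rather than AMOC's exact recipe:

```python
def momentum_update(key_params, query_params, m=0.999):
    """EMA update: the key (memory) encoder slowly tracks the query encoder.

    With `m` close to 1 the key encoder changes only slightly per iteration,
    giving the stable, shared representations across examples and iterations
    that memory-based adversaries rely on.
    """
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]

# Toy usage: one scalar "parameter" per encoder; a small m makes the
# drift visible in a few steps.
key, query = [0.0], [1.0]
for _ in range(3):
    key = momentum_update(key, query, m=0.9)
# After three steps the key parameter has drifted toward the query's value.
```

The slow drift is the design point: a rapidly changing key encoder would make the contrastive targets (and any adversaries shared across iterations) inconsistent from step to step.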
arXiv Detail & Related papers (2020-12-24T07:49:10Z)
- Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z)
- Semantics-Preserving Adversarial Training [12.242659601882147]
Adversarial training is a technique that improves adversarial robustness of a deep neural network (DNN) by including adversarial examples in the training data.
We propose semantics-preserving adversarial training (SPAT) which encourages perturbation on the pixels that are shared among all classes.
Experiment results show that SPAT improves adversarial robustness and achieves state-of-the-art results in CIFAR-10 and CIFAR-100.
arXiv Detail & Related papers (2020-09-23T07:42:14Z)
- REGroup: Rank-aggregating Ensemble of Generative Classifiers for Robust Predictions [6.0162772063289784]
Defense strategies that adopt adversarial training or random input transformations typically require retraining or fine-tuning the model to achieve reasonable performance.
We find that we can learn a generative classifier by statistically characterizing the neural response of an intermediate layer to clean training samples.
Our proposed approach uses a subset of the clean training data and a pre-trained model, and yet is agnostic to network architectures or the adversarial attack generation method.
arXiv Detail & Related papers (2020-06-18T17:07:19Z)
- Class-Aware Domain Adaptation for Improving Adversarial Robustness [27.24720754239852]
Adversarial training has been proposed to train networks by injecting adversarial examples into the training data.
We propose a novel Class-Aware Domain Adaptation (CADA) method for adversarial defense without directly applying adversarial training.
arXiv Detail & Related papers (2020-05-10T03:45:19Z)
- Dynamic Divide-and-Conquer Adversarial Training for Robust Semantic Segmentation [79.42338812621874]
Adversarial training is promising for improving the robustness of deep neural networks against adversarial perturbations.
We formulate a general adversarial training procedure that can perform decently on both adversarial and clean samples.
We propose a dynamic divide-and-conquer adversarial training (DDC-AT) strategy to enhance the defense effect.
arXiv Detail & Related papers (2020-03-14T05:06:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.