Robust Pre-Training by Adversarial Contrastive Learning
- URL: http://arxiv.org/abs/2010.13337v1
- Date: Mon, 26 Oct 2020 04:44:43 GMT
- Title: Robust Pre-Training by Adversarial Contrastive Learning
- Authors: Ziyu Jiang, Tianlong Chen, Ting Chen, Zhangyang Wang
- Abstract summary: Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
- Score: 120.33706897927391
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent work has shown that, when integrated with adversarial training,
self-supervised pre-training can lead to state-of-the-art robustness. In this
work, we improve robustness-aware self-supervised pre-training by learning
representations that are consistent under both data augmentations and
adversarial perturbations. Our approach leverages a recent contrastive learning
framework, which learns representations by maximizing feature consistency under
differently augmented views. This fits particularly well with the goal of
adversarial robustness, as one cause of adversarial fragility is the lack of
feature invariance, i.e., small input perturbations can result in undesirable
large changes in features or even predicted labels. We explore various options
to formulate the contrastive task, and demonstrate that by injecting
adversarial perturbations, contrastive pre-training can lead to models that are
both label-efficient and robust. We empirically evaluate the proposed
Adversarial Contrastive Learning (ACL) and show it can consistently outperform
existing methods. For example on the CIFAR-10 dataset, ACL outperforms the
previous state-of-the-art unsupervised robust pre-training approach by 2.99% on
robust accuracy and 2.14% on standard accuracy. We further demonstrate that ACL
pre-training can improve semi-supervised adversarial training, even when only a
few labeled examples are available. Our codes and pre-trained models have been
released at: https://github.com/VITA-Group/Adversarial-Contrastive-Learning.
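The contrastive objective the abstract describes (maximizing feature consistency under differently augmented views) is typically an NT-Xent loss. Below is a minimal NumPy sketch of that loss, not the released implementation; in ACL, one or both views would additionally carry an adversarial perturbation crafted (e.g., by PGD) to maximize this same loss, which is only indicated in comments here:

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two batches of embeddings.

    z1[i] and z2[i] are embeddings of two differently augmented views
    of the same input. In adversarial contrastive pre-training, one
    view would be adversarially perturbed to maximize this loss before
    the encoder is updated to minimize it (not shown).
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2n, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize rows
    sim = z @ z.T / temperature                       # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    # The positive for index i is its other view at index (i + n) mod 2n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    logits = sim - sim.max(axis=1, keepdims=True)     # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

As expected of a contrastive loss, correctly matched views of the same input yield a lower loss than mismatched ones.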
Related papers
- Class Incremental Learning for Adversarial Robustness [17.06592851567578]
Adversarial training integrates adversarial examples during model training to enhance robustness.
We observe that combining incremental learning with naive adversarial training easily leads to a loss of robustness.
We propose the Flatness Preserving Distillation (FPD) loss that leverages the output difference between adversarial and clean examples.
arXiv Detail & Related papers (2023-12-06T04:38:02Z)
- TWINS: A Fine-Tuning Framework for Improved Transferability of Adversarial Robustness and Generalization [89.54947228958494]
This paper focuses on the fine-tuning of an adversarially pre-trained model in various classification tasks.
We propose a novel statistics-based approach, Two-WIng NormliSation (TWINS) fine-tuning framework.
TWINS is shown to be effective on a wide range of image classification datasets in terms of both generalization and robustness.
arXiv Detail & Related papers (2023-03-20T14:12:55Z)
- PointACL: Adversarial Contrastive Learning for Robust Point Clouds Representation under Adversarial Attack [73.3371797787823]
Adversarial contrastive learning (ACL) is considered an effective way to improve the robustness of pre-trained models.
We present a robustness-aware loss function to adversarially train a self-supervised contrastive learning framework.
We validate our method, PointACL on downstream tasks, including 3D classification and 3D segmentation with multiple datasets.
arXiv Detail & Related papers (2022-09-14T22:58:31Z)
- Enhancing Adversarial Training with Feature Separability [52.39305978984573]
We introduce a new concept, the adversarial training graph (ATG), with which the proposed adversarial training with feature separability (ATFS) boosts intra-class feature similarity and increases inter-class feature variance.
Through comprehensive experiments, we demonstrate that the proposed ATFS framework significantly improves both clean and robust performance.
arXiv Detail & Related papers (2022-05-02T04:04:23Z)
- Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training [2.538209532048867]
We introduce a novel neural network training framework that increases a model's robustness to adversarial attacks.
We propose to improve model robustness to adversarial attacks by learning feature representations consistent under both data augmentations and adversarial perturbations.
We validate our method on the CIFAR-10 dataset, where it outperforms alternative supervised and self-supervised adversarial learning methods in both robust accuracy and clean accuracy.
arXiv Detail & Related papers (2022-03-16T21:41:27Z)
- When Does Contrastive Learning Preserve Adversarial Robustness from Pretraining to Finetuning? [99.4914671654374]
We propose AdvCL, a novel adversarial contrastive pretraining framework.
We show that AdvCL is able to enhance cross-task robustness transferability without loss of model accuracy and finetuning efficiency.
arXiv Detail & Related papers (2021-11-01T17:59:43Z)
- Adversarial Momentum-Contrastive Pre-Training [20.336258934272102]
Adversarial self-supervised pre-training is helpful to extract the invariant representations under both data augmentations and adversarial perturbations.
This paper proposes a novel adversarial momentum-contrastive (AMOC) pre-training approach.
Compared with the existing self-supervised pre-training approaches, AMOC can use a smaller batch size and fewer training epochs but learn more robust features.
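The momentum-contrast machinery that AMOC builds on maintains a slowly evolving key encoder, updated as an exponential moving average of the query encoder. A minimal NumPy sketch of that update follows; the momentum value is a common MoCo-style default and not necessarily AMOC's setting:

```python
import numpy as np

def momentum_update(key_params, query_params, m=0.999):
    """Exponential-moving-average update of the key encoder.

    The query encoder is trained by backpropagation; the key encoder
    trails it smoothly, which stabilizes the contrastive targets.
    m=0.999 is an illustrative default, not taken from the AMOC paper.
    """
    return [m * k + (1.0 - m) * q for k, q in zip(key_params, query_params)]
```

Because the key encoder moves slowly, features stored in a memory queue stay consistent across iterations, which is what permits the smaller batch sizes the summary above mentions.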
arXiv Detail & Related papers (2020-12-24T07:49:10Z)
- Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.