Adversarial Self-Supervised Contrastive Learning
- URL: http://arxiv.org/abs/2006.07589v2
- Date: Mon, 26 Oct 2020 14:04:48 GMT
- Title: Adversarial Self-Supervised Contrastive Learning
- Authors: Minseon Kim, Jihoon Tack, Sung Ju Hwang
- Abstract summary: Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
- Score: 62.17538130778111
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing adversarial learning approaches mostly use class labels to generate
adversarial samples that lead to incorrect predictions, which are then used to
augment the training of the model for improved robustness. While some recent
works propose semi-supervised adversarial learning methods that utilize
unlabeled data, they still require class labels. However, do we really need
class labels at all, for adversarially robust training of deep neural networks?
In this paper, we propose a novel adversarial attack for unlabeled data, which
makes the model confuse the instance-level identities of the perturbed data
samples. Further, we present a self-supervised contrastive learning framework
to adversarially train a robust neural network without labeled data, which aims
to maximize the similarity between a random augmentation of a data sample and
its instance-wise adversarial perturbation. We validate our method, Robust
Contrastive Learning (RoCL), on multiple benchmark datasets, on which it
obtains robust accuracy comparable to state-of-the-art supervised adversarial
learning methods, and significantly improved robustness against black-box
and unseen types of attacks. Moreover, with further joint fine-tuning with a
supervised adversarial loss, RoCL obtains even higher robust accuracy than
self-supervised learning alone. Notably, RoCL also demonstrates impressive
results in robust transfer learning.
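The abstract describes two components: a label-free attack that perturbs a sample to confuse its instance-level identity, and contrastive training that maximizes similarity between an augmentation of a sample and its instance-wise adversarial perturbation. The idea can be sketched as below; the toy linear encoder, finite-difference gradients, and all hyperparameters are illustrative assumptions for a dependency-light sketch, not the authors' implementation:

```python
import numpy as np

def nt_xent(z1, z2, tau=0.5):
    """NT-Xent contrastive loss over two batches of embeddings (rows i of
    z1 and z2 are the two views of the same instance)."""
    z = np.concatenate([z1, z2], axis=0)              # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # cosine similarity
    sim = z @ z.T / tau
    np.fill_diagonal(sim, -np.inf)                    # exclude self-pairs
    n = z1.shape[0]
    # the positive of row i is the same instance's other view
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(logsumexp - sim[np.arange(2 * n), pos]))

def instance_wise_attack(encode, x, x_aug, eps=0.1, alpha=0.03, steps=5):
    """PGD-style attack without labels: perturb x within an eps-ball to
    *maximize* the contrastive loss against its own augmented view, i.e.
    confuse the instance-level identity. Gradients are finite differences
    here purely to keep the sketch dependency-free."""
    x_adv = x.copy()
    for _ in range(steps):
        base = nt_xent(encode(x_adv), encode(x_aug))
        grad = np.zeros_like(x_adv)
        h = 1e-4
        for i in range(x_adv.size):
            xp = x_adv.copy()
            xp.flat[i] += h
            grad.flat[i] = (nt_xent(encode(xp), encode(x_aug)) - base) / h
        # ascend on the contrastive loss, then project back into the eps-ball
        x_adv = np.clip(x_adv + alpha * np.sign(grad), x - eps, x + eps)
    return x_adv
```

Robust training would then minimize the contrastive loss between augmented views and their adversarial counterparts, e.g. `nt_xent(encode(x_aug), encode(instance_wise_attack(encode, x, x_aug)))`, so the encoder learns representations stable under both augmentation and perturbation.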
Related papers
- MOREL: Enhancing Adversarial Robustness through Multi-Objective Representation Learning [1.534667887016089]
deep neural networks (DNNs) are vulnerable to slight adversarial perturbations.
We show that strong feature representation learning during training can significantly enhance the original model's robustness.
We propose MOREL, a multi-objective feature representation learning approach, encouraging classification models to produce similar features for inputs within the same class, despite perturbations.
arXiv Detail & Related papers (2024-10-02T16:05:03Z) - Effective and Robust Adversarial Training against Data and Label Corruptions [35.53386268796071]
Corruptions due to data perturbations and label noise are prevalent in the datasets from unreliable sources.
We develop an Effective and Robust Adversarial Training framework to simultaneously handle two types of corruption.
arXiv Detail & Related papers (2024-05-07T10:53:20Z) - Noisy Correspondence Learning with Self-Reinforcing Errors Mitigation [63.180725016463974]
Cross-modal retrieval relies on well-matched large-scale datasets that are laborious to collect in practice.
We introduce a novel noisy correspondence learning framework, namely Self-Reinforcing Errors Mitigation (SREM).
arXiv Detail & Related papers (2023-12-27T09:03:43Z) - Advancing Adversarial Robustness Through Adversarial Logit Update [10.041289551532804]
Adversarial training and adversarial purification are among the most widely recognized defense strategies.
We propose a new principle, namely Adversarial Logit Update (ALU), to infer the labels of adversarial samples.
Our solution achieves superior performance compared to state-of-the-art methods against a wide range of adversarial attacks.
arXiv Detail & Related papers (2023-08-29T07:13:31Z) - PointACL: Adversarial Contrastive Learning for Robust Point Clouds Representation under Adversarial Attack [73.3371797787823]
Adversarial contrastive learning (ACL) is considered an effective way to improve the robustness of pre-trained models.
We present a robustness-aware loss function to adversarially train a self-supervised contrastive learning framework.
We validate our method, PointACL on downstream tasks, including 3D classification and 3D segmentation with multiple datasets.
arXiv Detail & Related papers (2022-09-14T22:58:31Z) - Robustness through Cognitive Dissociation Mitigation in Contrastive Adversarial Training [2.538209532048867]
We introduce a novel neural network training framework that increases a model's robustness to adversarial attacks.
We propose to improve model robustness to adversarial attacks by learning feature representations consistent under both data augmentations and adversarial perturbations.
We validate our method on the CIFAR-10 dataset, where it outperforms alternative supervised and self-supervised adversarial learning methods in both robust accuracy and clean accuracy.
arXiv Detail & Related papers (2022-03-16T21:41:27Z) - Modelling Adversarial Noise for Adversarial Defense [96.56200586800219]
Adversarial defenses typically focus on exploiting adversarial examples to remove adversarial noise or to train an adversarially robust target model.
We are motivated by the observation that the relationship between adversarial data and natural data can help infer clean data from adversarial data, yielding the final correct prediction.
We study modeling adversarial noise to learn the transition relationship in the label space, using adversarial labels to improve adversarial accuracy.
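The label-space transition idea above can be sketched as follows: if a learned matrix `T` gives the probability that natural class `j` appears as adversarial class `i`, a classifier's prediction on an adversarial input can be mapped back toward the natural label. The 3-class matrix and probabilities below are hypothetical illustrations, not values or an implementation from the paper:

```python
import numpy as np

# Hypothetical transition matrix: T[i, j] = P(adversarial label i | natural label j),
# e.g. as produced by a learned transition model (illustrative values only).
T = np.array([
    [0.1, 0.7, 0.2],   # natural class 1 mostly flips to adversarial class 0
    [0.8, 0.2, 0.1],
    [0.1, 0.1, 0.7],
])  # each column sums to 1

def infer_natural_label(p_adv, T):
    """Map the classifier's label distribution on an adversarial input back to
    natural-label space: P(natural=j | x_adv) is proportional to
    sum_i T[i, j] * p_adv[i]."""
    p_nat = T.T @ p_adv
    return p_nat / p_nat.sum()

p_adv = np.array([0.75, 0.15, 0.10])   # the model is fooled into class 0
p_nat = infer_natural_label(p_adv, T)  # mass shifts back to the likely natural class
```

Here the adversarial prediction of class 0 is reassigned mostly to natural class 1, because `T` says class 1 is the natural label most likely to be flipped into adversarial class 0.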
arXiv Detail & Related papers (2021-09-21T01:13:26Z) - Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z) - Stylized Adversarial Defense [105.88250594033053]
adversarial training creates perturbation patterns and includes them in the training set to robustify the model.
We propose to exploit additional information from the feature space to craft stronger adversaries.
Our adversarial training approach demonstrates strong robustness compared to state-of-the-art defenses.
arXiv Detail & Related papers (2020-07-29T08:38:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.