Robustness of Unsupervised Representation Learning without Labels
- URL: http://arxiv.org/abs/2210.04076v1
- Date: Sat, 8 Oct 2022 18:03:28 GMT
- Title: Robustness of Unsupervised Representation Learning without Labels
- Authors: Aleksandar Petrov and Marta Kwiatkowska
- Abstract summary: We propose a family of unsupervised robustness measures, which are model- and task-agnostic and label-free.
We validate our results against a linear probe and show that, for MOCOv2, adversarial training results in 3 times higher certified accuracy.
- Score: 92.90480374344777
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised representation learning leverages large unlabeled datasets and
is competitive with supervised learning. But non-robust encoders may affect
downstream task robustness. Recently, robust representation encoders have
become of interest. Still, all prior work evaluates robustness using a
downstream classification task. Instead, we propose a family of unsupervised
robustness measures, which are model- and task-agnostic and label-free. We
benchmark state-of-the-art representation encoders and show that none dominates
the rest. We offer unsupervised extensions to the FGSM and PGD attacks. When
used in adversarial training, they improve most unsupervised robustness
measures, including certified robustness. We validate our results against a
linear probe and show that, for MOCOv2, adversarial training results in 3 times
higher certified accuracy, a 2-fold decrease in impersonation attack success
rate and considerable improvements in certified robustness.
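The unsupervised FGSM and PGD extensions mentioned above replace the usual label-dependent classification loss with a label-free objective on the representations themselves. As a rough illustration of that idea (not the authors' exact formulation), the sketch below perturbs an input within an L-infinity ball to maximize the L2 distance between the encoder's embeddings of the clean and perturbed inputs; the stand-in encoder, the distance objective, and the hyperparameters `eps`, `alpha`, and `steps` are all illustrative assumptions. With `steps=1` and `alpha=eps` this collapses to an FGSM-style single step.

```python
import torch
import torch.nn as nn

def unsupervised_pgd(encoder: nn.Module, x: torch.Tensor,
                     eps: float = 8 / 255, alpha: float = 2 / 255,
                     steps: int = 10) -> torch.Tensor:
    """Perturb x within an L-inf eps-ball to maximize the shift of its embedding."""
    encoder.eval()
    with torch.no_grad():
        z_clean = encoder(x)  # reference embedding of the clean input
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        z_adv = encoder((x + delta).clamp(0, 1))
        # Label-free objective: push the perturbed embedding away from the clean one.
        loss = (z_adv - z_clean).flatten(1).norm(dim=1).mean()
        loss.backward()
        with torch.no_grad():
            delta += alpha * delta.grad.sign()  # ascent step on the objective
            delta.clamp_(-eps, eps)             # project back into the eps-ball
            delta.grad.zero_()
    return (x + delta).detach().clamp(0, 1)

# Toy usage with a stand-in encoder:
toy_encoder = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 128))
x_adv = unsupervised_pgd(toy_encoder, torch.rand(4, 3, 32, 32))
```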
Related papers
- Towards Robust Domain Generation Algorithm Classification [1.4542411354617986]
We implement 32 white-box attacks, 19 of which are very effective and induce a false-negative rate (FNR) of $\approx 100\%$ on unhardened classifiers.
We propose a novel training scheme that leverages adversarial latent space vectors and discretized adversarial domains to significantly improve robustness.
arXiv Detail & Related papers (2024-04-09T11:56:29Z) - Doubly Robust Instance-Reweighted Adversarial Training [107.40683655362285]
We propose a novel doubly-robust instance reweighted adversarial framework.
Our importance weights are obtained by optimizing the KL-divergence regularized loss function.
Our proposed approach outperforms related state-of-the-art baseline methods in terms of average robust performance.
arXiv Detail & Related papers (2023-08-01T06:16:18Z) - How robust accuracy suffers from certified training with convex
relaxations [12.292092677396347]
Adversarial attacks pose significant threats to deploying state-of-the-art classifiers in safety-critical applications.
Two classes of methods have emerged to address this issue: empirical defences and certified defences.
We systematically compare the standard and robust error of these two robust training paradigms across multiple computer vision tasks.
arXiv Detail & Related papers (2023-06-12T09:45:21Z) - Exploring Robustness of Unsupervised Domain Adaptation in Semantic
Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (or ASSUDA) that maximizes the agreement between clean images and their adversarial examples by a contrastive loss in the output space.
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision tasks (e.g., rotation and jigsaw) benefit image tasks such as classification and recognition, they fail to provide the critical supervision signals needed to learn discriminative representations for segmentation.
arXiv Detail & Related papers (2021-05-23T01:50:44Z) - Adversarial Robustness under Long-Tailed Distribution [93.50792075460336]
Adversarial robustness has attracted extensive studies recently by revealing the vulnerability and intrinsic characteristics of deep networks.
In this work we investigate the adversarial vulnerability as well as defense under long-tailed distributions.
We propose a clean yet effective framework, RoBal, which consists of two dedicated modules: a scale-invariant classifier and data re-balancing.
arXiv Detail & Related papers (2021-04-06T17:53:08Z) - Robust Pre-Training by Adversarial Contrastive Learning [120.33706897927391]
Recent work has shown that, when integrated with adversarial training, self-supervised pre-training can lead to state-of-the-art robustness.
We improve robustness-aware self-supervised pre-training by learning representations consistent under both data augmentations and adversarial perturbations.
arXiv Detail & Related papers (2020-10-26T04:44:43Z) - Decoder-free Robustness Disentanglement without (Additional) Supervision [42.066771710455754]
Our proposed Adversarial Asymmetric Training (AAT) algorithm can reliably disentangle robust and non-robust representations without additional supervision on robustness.
Empirical results show that our method not only successfully preserves accuracy by combining the two representations, but also achieves much better disentanglement than previous work.
arXiv Detail & Related papers (2020-07-02T19:51:40Z) - Adversarial Self-Supervised Contrastive Learning [62.17538130778111]
Existing adversarial learning approaches mostly use class labels to generate adversarial samples that lead to incorrect predictions.
We propose a novel adversarial attack for unlabeled data, which makes the model confuse the instance-level identities of the perturbed data samples.
We present a self-supervised contrastive learning framework to adversarially train a robust neural network without labeled data.
arXiv Detail & Related papers (2020-06-13T08:24:33Z)
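Several entries above (ASSUDA, Robust Pre-Training by Adversarial Contrastive Learning, and Adversarial Self-Supervised Contrastive Learning) share one pattern: craft adversarial views without labels, then train the encoder so that clean and adversarial views of the same instance agree. Below is a minimal sketch of that pattern, assuming an InfoNCE loss and a single-step (FGSM-style) instance-level attack; the names and hyperparameters (`tau`, `eps`, the stand-in encoder) are illustrative choices, not taken from any of the papers.

```python
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.5) -> torch.Tensor:
    """InfoNCE loss: matching views of the same instance are the positives."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau                          # pairwise similarities
    targets = torch.arange(z1.size(0), device=z1.device)  # i-th row matches i-th column
    return F.cross_entropy(logits, targets)

def adversarial_contrastive_step(encoder, opt, v1, v2, eps: float = 8 / 255) -> float:
    # Instance-level attack: perturb the second view to maximize the contrastive loss.
    delta = torch.zeros_like(v2, requires_grad=True)
    attack_loss = info_nce(encoder(v1), encoder(v2 + delta))
    attack_loss.backward()
    v2_adv = (v2 + eps * delta.grad.sign()).clamp(0, 1).detach()
    # Training step: pull the clean and adversarial views back into agreement.
    opt.zero_grad()
    loss = info_nce(encoder(v1), encoder(v2_adv))
    loss.backward()
    opt.step()
    return loss.item()

# Toy usage with a stand-in encoder:
enc = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))
opt = torch.optim.SGD(enc.parameters(), lr=0.1)
v1, v2 = torch.rand(8, 3, 32, 32), torch.rand(8, 3, 32, 32)
adversarial_contrastive_step(enc, opt, v1, v2)
```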
This list is automatically generated from the titles and abstracts of the papers in this site.