Improving Adversarial Robustness via Unlabeled Out-of-Domain Data
- URL: http://arxiv.org/abs/2006.08476v2
- Date: Sun, 21 Feb 2021 17:15:51 GMT
- Title: Improving Adversarial Robustness via Unlabeled Out-of-Domain Data
- Authors: Zhun Deng, Linjun Zhang, Amirata Ghorbani, James Zou
- Abstract summary: We investigate how adversarial robustness can be enhanced by leveraging out-of-domain unlabeled data.
We show settings where we achieve better adversarial robustness when the unlabeled data come from a shifted domain rather than the same domain as the labeled data.
- Score: 30.58040078862511
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation by incorporating cheap unlabeled data from multiple domains
is a powerful way to improve prediction especially when there is limited
labeled data. In this work, we investigate how adversarial robustness can be
enhanced by leveraging out-of-domain unlabeled data. We demonstrate that for
broad classes of distributions and classifiers, there exists a sample
complexity gap between standard and robust classification. We quantify to what
degree this gap can be bridged via leveraging unlabeled samples from a shifted
domain by providing both upper and lower bounds. Moreover, we show settings
where we achieve better adversarial robustness when the unlabeled data come
from a shifted domain rather than the same domain as the labeled data. We also
investigate how to leverage out-of-domain data when some structural
information, such as sparsity, is shared between labeled and unlabeled domains.
Experimentally, we augment two object recognition datasets (CIFAR-10 and SVHN)
with easy-to-obtain, unlabeled out-of-domain data and demonstrate
substantial improvement in the model's robustness against $\ell_\infty$
adversarial attacks on the original domain.
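The abstract gives no code, but the recipe it describes follows the robust self-training pattern: pseudo-label cheap out-of-domain images with a separately trained classifier, then run PGD-based $\ell_\infty$ adversarial training on the union of labeled and pseudo-labeled data. Below is a minimal PyTorch sketch of that pattern under stated assumptions; `pseudo_labeler` and the $\epsilon = 8/255$ budget are illustrative placeholders, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def pgd_linf(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Craft l_inf-bounded adversarial examples with projected gradient descent."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()          # ascent step
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps) # project to l_inf ball
        x_adv = x_adv.clamp(0, 1)                             # keep a valid image
    return x_adv.detach()

def robust_self_training_step(model, optimizer, x_lab, y_lab, x_ood, pseudo_labeler):
    """One training step on labeled in-domain data plus pseudo-labeled
    out-of-domain data, both perturbed adversarially."""
    with torch.no_grad():
        y_ood = pseudo_labeler(x_ood).argmax(dim=1)  # pseudo-label the OOD batch
    x = torch.cat([x_lab, x_ood])
    y = torch.cat([y_lab, y_ood])
    x_adv = pgd_linf(model, x, y)
    loss = F.cross_entropy(model(x_adv), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```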
Related papers
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
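The abstract does not spell out the mixup step; the sketch below shows plain inter-domain mixup under the common assumption that target samples carry pseudo-labels, and it omits IDMNE's neighborhood-expansion component.

```python
import torch
import torch.nn.functional as F

def inter_domain_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, num_classes, alpha=1.0):
    """Mix a labeled source batch with a pseudo-labeled target batch,
    interpolating both inputs and (soft) labels."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    y_src_soft = F.one_hot(y_src, num_classes).float()
    y_tgt_soft = F.one_hot(y_tgt_pseudo, num_classes).float()
    x_mix = lam * x_src + (1 - lam) * x_tgt
    y_mix = lam * y_src_soft + (1 - lam) * y_tgt_soft
    return x_mix, y_mix

# Usage: train with soft-label cross-entropy on the mixed batch, e.g.
# loss = -(y_mix * F.log_softmax(model(x_mix), dim=1)).sum(dim=1).mean()
```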
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
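As a rough illustration of the idea (not the paper's exact loss), the sketch below selects low-confidence target samples with a softmax threshold and applies an instance-level contrastive loss between two augmented views of those samples; the threshold `tau` and the temperature are assumed values.

```python
import torch
import torch.nn.functional as F

def split_by_confidence(logits, tau=0.8):
    """Flag target samples whose top softmax probability falls below tau."""
    conf, pred = F.softmax(logits, dim=1).max(dim=1)
    return conf < tau, pred

def info_nce(z1, z2, temperature=0.1):
    """Instance-level contrastive loss between two views of the same samples."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                 # (N, N) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, targets)           # diagonal entries are positives

# Usage: low, _ = split_by_confidence(logits); loss = info_nce(z1[low], z2[low])
```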
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Domain Transformer: Predicting Samples of Unseen, Future Domains [1.7310589008573272]
We learn a domain transformer in an unsupervised manner that allows generating data of unseen domains.
Our approach first matches independently learned latent representations of two given domains obtained from an auto-encoder using a Cycle-GAN.
In turn, a transformation of the original samples can be learned that can be applied iteratively to extrapolate to unseen domains.
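Once the auto-encoder and the Cycle-GAN latent mapping are trained, extrapolation reduces to iterating the learned transform in latent space. A minimal sketch, assuming `encoder`, `decoder`, and `latent_transform` are the trained modules (placeholder names):

```python
import torch

@torch.no_grad()
def extrapolate(encoder, decoder, latent_transform, x, steps=3):
    """Apply the learned domain-to-domain latent transform repeatedly to
    generate samples of progressively further, unseen domains."""
    z = encoder(x)
    outputs = []
    for _ in range(steps):
        z = latent_transform(z)      # one domain step in latent space
        outputs.append(decoder(z))   # decode back to input space
    return outputs  # outputs[k] approximates the (k+1)-th domain past the last seen one
```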
arXiv Detail & Related papers (2021-06-10T21:20:00Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
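A hedged sketch of the general mechanism: a supervised-contrastive-style loss that pulls target features toward source features sharing the same (pseudo-)class. The exact formulation in the paper may differ.

```python
import torch
import torch.nn.functional as F

def cross_domain_contrastive(z_src, y_src, z_tgt, y_tgt_pseudo, temperature=0.1):
    """Pull each target feature toward source features of the same
    (pseudo-)class; all other source features act as negatives."""
    z_src = F.normalize(z_src, dim=1)
    z_tgt = F.normalize(z_tgt, dim=1)
    sim = z_tgt @ z_src.t() / temperature                       # (Nt, Ns)
    pos = (y_tgt_pseudo[:, None] == y_src[None, :]).float()     # same-class mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    denom = pos.sum(dim=1).clamp(min=1)   # rows without positives contribute 0
    return -(pos * log_prob).sum(dim=1).div(denom).mean()
```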
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
Unsupervised domain adaptation (UDA) attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
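The temporal-ensemble idea can be kept memory-efficient by storing one exponential moving average of class probabilities per sample. A sketch under that assumption (momentum and threshold values are illustrative, not the paper's):

```python
import torch
import torch.nn.functional as F

class TemporalEnsemble:
    """EMA of per-sample class probabilities, used to derive stable pseudo-labels."""
    def __init__(self, num_samples, num_classes, momentum=0.9):
        self.probs = torch.zeros(num_samples, num_classes)  # kept on CPU to save memory
        self.m = momentum

    def update(self, idx, logits):
        """Blend the current predictions for samples `idx` into their EMA."""
        p = F.softmax(logits.detach().cpu(), dim=1)
        self.probs[idx] = self.m * self.probs[idx] + (1 - self.m) * p

    def pseudo_labels(self, idx, tau=0.9):
        """Return EMA-based labels plus a reliability mask (low mass early on,
        so the mask stays conservative in the first epochs)."""
        conf, label = self.probs[idx].max(dim=1)
        return label, conf > tau
```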
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
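A rough sketch of the two ingredients the summary names, with cosine-affinity thresholds standing in for whatever pair-extraction rule ILA-DA actually uses:

```python
import torch
import torch.nn.functional as F

def affinity_pairs(z_src, z_tgt, pos_thresh=0.8, neg_thresh=0.2):
    """Mark similar / dissimilar source-target pairs by cosine affinity."""
    sim = F.normalize(z_src, dim=1) @ F.normalize(z_tgt, dim=1).t()
    return sim > pos_thresh, sim < neg_thresh    # positive and negative masks

def multi_sample_contrastive(z_src, z_tgt, pos, neg, temperature=0.1):
    """Attract similar cross-domain pairs and repel dissimilar ones.
    Assumes at least one source row has a positive pair."""
    sim = F.normalize(z_src, dim=1) @ F.normalize(z_tgt, dim=1).t() / temperature
    exp = sim.exp()
    pos_sum = (exp * pos.float()).sum(dim=1)
    neg_sum = (exp * neg.float()).sum(dim=1)
    valid = pos.any(dim=1)                       # skip rows without positives
    return -torch.log(pos_sum[valid] / (pos_sum[valid] + neg_sum[valid])).mean()
```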
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Robust wav2vec 2.0: Analyzing Domain Shift in Self-Supervised Pre-Training [67.71228426496013]
We show that using target domain data during pre-training leads to large performance improvements across a variety of setups.
We find that pre-training on multiple domains improves performance generalization on domains not seen during training.
arXiv Detail & Related papers (2021-04-02T12:53:15Z)
- Weak Adaptation Learning -- Addressing Cross-domain Data Insufficiency with Weak Annotator [2.8672054847109134]
In some target problem domains, there are not many data samples available, which could hinder the learning process.
We propose a weak adaptation learning (WAL) approach that leverages unlabeled data from a similar source domain.
Our experiments demonstrate the effectiveness of our approach in learning an accurate classifier with limited labeled data in the target domain.
arXiv Detail & Related papers (2021-02-15T06:19:25Z)
- A Free Lunch for Unsupervised Domain Adaptive Object Detection without Source Data [69.091485888121]
Unsupervised domain adaptation assumes that source and target domain data are freely available and are usually used jointly during training to reduce the domain gap.
We propose a source data-free domain adaptive object detection (SFOD) framework via modeling it into a problem of learning with noisy labels.
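A minimal sketch of the pseudo-labeling step such a framework rests on: run the source-trained detector on target images and keep only confident boxes, treating the remainder as label noise to be discarded. The torchvision-style output format and the fixed threshold are assumptions; SFOD's actual noise handling is more elaborate.

```python
import torch

@torch.no_grad()
def make_detection_pseudo_labels(detector, images, score_thresh=0.7):
    """Pseudo-label target images with a source-trained detector,
    keeping only high-confidence boxes as (noisy) supervision."""
    detector.eval()
    pseudo = []
    for det in detector(images):          # assumes torchvision-style output dicts
        keep = det["scores"] > score_thresh
        pseudo.append({"boxes": det["boxes"][keep], "labels": det["labels"][keep]})
    return pseudo  # fine-tune the detector on these as if they were ground truth
```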
arXiv Detail & Related papers (2020-12-10T01:42:35Z)
- DACS: Domain Adaptation via Cross-domain Mixed Sampling [4.205692673448206]
Unsupervised domain adaptation attempts to train on labelled data from one domain, and simultaneously learn from unlabelled data in the domain of interest.
We propose DACS: Domain Adaptation via Cross-domain mixed Sampling, which mixes images from the two domains along with the corresponding labels and pseudo-labels.
We demonstrate the effectiveness of our solution by achieving state-of-the-art results for GTA5 to Cityscapes.
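A minimal single-image sketch of the mixing step for segmentation, in the ClassMix style DACS builds on: paste the pixels of half the source classes onto a target image, and combine source labels with target pseudo-labels accordingly.

```python
import torch

def dacs_mix(x_src, y_src, x_tgt, y_tgt_pseudo):
    """Cross-domain mixed sampling for one (C, H, W) image pair: pixels of a
    random half of the source classes are pasted onto the target image; pasted
    pixels keep source labels, the rest keep target pseudo-labels."""
    classes = y_src.unique()
    chosen = classes[torch.randperm(len(classes))[: len(classes) // 2]]
    mask = torch.isin(y_src, chosen)                         # (H, W) source-pixel mask
    x_mix = torch.where(mask.unsqueeze(0), x_src, x_tgt)     # broadcast over channels
    y_mix = torch.where(mask, y_src, y_tgt_pseudo)
    return x_mix, y_mix
```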
arXiv Detail & Related papers (2020-07-17T00:43:11Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.