Adversarial Robustness for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2109.00946v1
- Date: Thu, 2 Sep 2021 13:45:01 GMT
- Title: Adversarial Robustness for Unsupervised Domain Adaptation
- Authors: Muhammad Awais, Fengwei Zhou, Hang Xu, Lanqing Hong, Ping Luo, Sung-Ho
Bae, Zhenguo Li
- Abstract summary: In this work, we leverage intermediate representations learned by multiple robust ImageNet models to improve the robustness of UDA models.
Our method works by aligning the features of the UDA model with the robust features learned by ImageNet pre-trained models along with domain adaptation training.
- Score: 48.51898925429575
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extensive Unsupervised Domain Adaptation (UDA) studies have shown great
success in practice by learning transferable representations across a labeled
source domain and an unlabeled target domain with deep models. However,
previous works focus on improving the generalization ability of UDA models on
clean examples without considering adversarial robustness, which is crucial in
real-world applications. Conventional adversarial training methods are not
suitable for achieving adversarial robustness on the unlabeled target domain of
UDA, since they train models with adversarial examples generated by a supervised
loss function. In this work, we leverage intermediate representations learned
by multiple robust ImageNet models to improve the robustness of UDA models. Our
method works by aligning the features of the UDA model with the robust features
learned by ImageNet pre-trained models along with domain adaptation training.
It utilizes both labeled and unlabeled domains and instills robustness without
any adversarial intervention or label requirement during domain adaptation
training. Experimental results show that our method significantly improves
adversarial robustness compared to the baseline while maintaining clean accuracy
on various UDA benchmarks.
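The abstract describes the mechanism concretely enough to sketch. Below is a minimal PyTorch illustration of one plausible reading: during domain adaptation training, intermediate features of the UDA model are pulled toward the features of a frozen, adversarially robust ImageNet model, on both labeled source and unlabeled target batches. The L2 alignment loss, the 1x1 projection for matching channel widths, and the `features(...)` hooks are assumptions not specified in the summary.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RobustFeatureAlignment(nn.Module):
    """Pull the UDA student's intermediate features toward those of a frozen,
    adversarially robust ImageNet teacher (layer/loss choices are assumed)."""

    def __init__(self, student_dim: int, teacher_dim: int):
        super().__init__()
        # 1x1 projection to reconcile channel widths; an assumption, since the
        # summary does not say how feature dimensions are matched.
        self.proj = nn.Conv2d(student_dim, teacher_dim, kernel_size=1)

    def forward(self, student_feat: torch.Tensor, teacher_feat: torch.Tensor) -> torch.Tensor:
        s = F.normalize(self.proj(student_feat), dim=1)
        t = F.normalize(teacher_feat.detach(), dim=1)  # teacher stays frozen
        return F.mse_loss(s, t)

def training_step(student, teacher, align, uda_loss_fn, x_src, y_src, x_tgt, lam=1.0):
    """One step of domain adaptation training plus robust-feature alignment.
    `student.features` / `teacher.features` are hypothetical hooks returning an
    intermediate feature map; `uda_loss_fn` stands in for the base UDA objective."""
    loss = uda_loss_fn(student, x_src, y_src, x_tgt)
    for x in (x_src, x_tgt):  # alignment uses both domains and needs no labels
        with torch.no_grad():
            t_feat = teacher.features(x)
        loss = loss + lam * align(student.features(x), t_feat)
    return loss
```

Note how the alignment term needs neither labels nor adversarial example generation, consistent with the abstract's claim that robustness is instilled without any adversarial intervention or label requirement.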
Related papers
- Adversarial Robustification via Text-to-Image Diffusion Models [56.37291240867549]
Adversarial robustness has conventionally been believed to be a challenging property to encode for neural networks.
We develop a scalable and model-agnostic solution to achieve adversarial robustness without using any data.
arXiv Detail & Related papers (2024-07-26T10:49:14Z)
- Can We Evaluate Domain Adaptation Models Without Target-Domain Labels? [36.05871459064825]
Unsupervised domain adaptation (UDA) involves adapting a model trained on a label-rich source domain to an unlabeled target domain.
In real-world scenarios, the absence of target-domain labels makes it challenging to evaluate the performance of UDA models.
We propose a novel metric called the Transfer Score to address these issues.
arXiv Detail & Related papers (2023-05-30T03:36:40Z)
- SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation [25.939292305808934]
Unsupervised domain adaptation (UDA) can transfer knowledge learned from a label-rich source dataset to an unlabeled target dataset.
In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving adversarial robustness of UDA models.
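The summary gives only the pipeline's name and goal; the sketch below is one plausible reading, in which a source-trained UDA model pseudo-labels the target data and confident pseudo labels drive PGD-style adversarial training. The attack settings, the confidence threshold, and the omission of SRoUDA's meta-learning refinement of the pseudo-labeler are all assumptions.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Standard L-infinity PGD (assumed attack configuration)."""
    delta = torch.empty_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        loss = F.cross_entropy(model(x + delta), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    return (x + delta).detach()

def robust_self_training_step(uda_model, robust_model, optimizer, x_tgt, thresh=0.9):
    """Pseudo-label unlabeled target images with the UDA model, then
    adversarially train the robust model on the confident ones."""
    with torch.no_grad():
        probs = F.softmax(uda_model(x_tgt), dim=1)
        conf, pseudo_y = probs.max(dim=1)
    keep = conf > thresh  # confidence filtering (an assumption)
    if keep.any():
        x_adv = pgd_attack(robust_model, x_tgt[keep], pseudo_y[keep])
        loss = F.cross_entropy(robust_model(x_adv), pseudo_y[keep])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```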
arXiv Detail & Related papers (2022-12-12T14:25:40Z)
- Domain Adaptation with Adversarial Training on Penultimate Activations [82.9977759320565]
Enhancing model prediction confidence on unlabeled target data is an important objective in Unsupervised Domain Adaptation (UDA).
We show that adversarial training on penultimate activations is more efficient and better correlated with the objective of boosting prediction confidence than adversarial training on input images or intermediate features.
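As a hedged illustration of what adversarial training on penultimate activations can look like, the sketch below perturbs the penultimate features in the direction that most raises prediction entropy, then trains the model to stay confident under that perturbation. The entropy objective, single-step attack, and L2 budget are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def entropy(logits: torch.Tensor) -> torch.Tensor:
    p = F.softmax(logits, dim=1)
    return -(p * p.clamp_min(1e-8).log()).sum(dim=1).mean()

def at_on_penultimate(feature_extractor, classifier, x_tgt, eps=1.0):
    """Returns a loss for adversarial training on penultimate activations
    (sketch; minimize this with the usual optimizer step)."""
    feats = feature_extractor(x_tgt)            # penultimate activations (B, D)
    # Attack: one gradient step on the activations that raises prediction entropy.
    delta = torch.zeros_like(feats).requires_grad_(True)
    grad, = torch.autograd.grad(entropy(classifier(feats.detach() + delta)), delta)
    delta_adv = eps * F.normalize(grad, dim=1)  # L2-bounded perturbation (assumed)
    # Defense: keep predictions confident under the perturbed activations.
    return entropy(classifier(feats + delta_adv))
```

Perturbing a (B, D) activation vector is far cheaper than running a multi-step attack through the full network, which is consistent with the efficiency claim above.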
arXiv Detail & Related papers (2022-08-26T19:50:46Z)
- Exploring Adversarially Robust Training for Unsupervised Domain Adaptation [71.94264837503135]
Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain.
This paper explores how to enhance robustness on the unlabeled data via adversarial training (AT) while learning domain-invariant features for UDA.
We propose a novel Adversarially Robust Training method for UDA accordingly, referred to as ARTUDA.
arXiv Detail & Related papers (2022-02-18T17:05:19Z)
- UDALM: Unsupervised Domain Adaptation through Language Modeling [79.73916345178415]
We introduce UDALM, a fine-tuning procedure that uses a mixed classification and Masked Language Model loss.
Our experiments show that the performance of models trained with the mixed loss scales with the amount of available target data, and that the mixed loss can be effectively used as a stopping criterion.
Our method is evaluated on twelve domain pairs of the Amazon Reviews Sentiment dataset, yielding 91.74% accuracy, which is a 1.11% absolute improvement over the state-of-the-art.
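A minimal sketch of the mixed-loss idea, assuming a BERT-style encoder with a classification head read from the first token and an MLM head over the vocabulary; the encoder API, the -100 ignore-index masking convention, and the fixed weighting `lam` are assumptions.

```python
import torch
import torch.nn as nn

class MixedLossFineTuner(nn.Module):
    """Sketch of UDALM-style fine-tuning: one encoder, two heads, and a loss
    that mixes supervised classification on the labeled source domain with
    masked language modeling on the unlabeled target domain."""

    def __init__(self, encoder, hidden: int, n_classes: int, vocab: int):
        super().__init__()
        self.encoder = encoder                  # e.g., a BERT-style model
        self.cls_head = nn.Linear(hidden, n_classes)
        self.mlm_head = nn.Linear(hidden, vocab)
        self.ce = nn.CrossEntropyLoss(ignore_index=-100)

    def forward(self, src_ids, src_labels, tgt_ids, tgt_mlm_labels, lam=0.5):
        # Classification loss on labeled source examples (first-token pooling).
        src_hidden = self.encoder(src_ids)      # (B, T, H), assumed encoder API
        cls_loss = self.ce(self.cls_head(src_hidden[:, 0]), src_labels)
        # MLM loss on masked target examples (-100 marks unmasked positions).
        tgt_hidden = self.encoder(tgt_ids)
        mlm_logits = self.mlm_head(tgt_hidden)
        mlm_loss = self.ce(mlm_logits.view(-1, mlm_logits.size(-1)),
                           tgt_mlm_labels.view(-1))
        return lam * cls_loss + (1 - lam) * mlm_loss
```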
arXiv Detail & Related papers (2021-04-14T19:05:01Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
- Robustified Domain Adaptation [13.14535125302501]
Unsupervised domain adaptation (UDA) is widely used to transfer knowledge from a labeled source domain to an unlabeled target domain.
The inevitable domain distribution deviation in UDA is a critical barrier to model robustness on the target domain.
We propose a novel Class-consistent Unsupervised Domain Adaptation (CURDA) framework for training robust UDA models.
arXiv Detail & Related papers (2020-11-18T22:21:54Z)
- Knowledge Distillation for BERT Unsupervised Domain Adaptation [2.969705152497174]
A pre-trained language model, BERT, has brought significant performance improvements across a range of natural language processing tasks.
We propose a simple but effective unsupervised domain adaptation method, adversarial adaptation with distillation (AAD).
We evaluate our approach in the task of cross-domain sentiment classification on 30 domain pairs.
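The summary names the ingredients, adversarial adaptation plus distillation, without the exact objectives; one plausible ADDA-style sketch is below, where `student.classify`, `student.encode`, the temperature, and the loss weighting are all hypothetical.

```python
import torch
import torch.nn.functional as F

def aad_step(student, teacher, discriminator, x_src, x_tgt, T=2.0, optimizer=None):
    """One student update for adversarial adaptation with distillation (sketch).
    `student.classify` / `student.encode` are hypothetical methods returning
    logits and features; the discriminator outputs a domain logit."""
    # Distillation: match the source-trained teacher's softened predictions.
    with torch.no_grad():
        t_logits = teacher(x_src)
    s_logits = student.classify(x_src)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    # Adversarial adaptation: make target features look source-like.
    d_out = discriminator(student.encode(x_tgt))
    adv = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
    loss = kd + adv
    if optimizer is not None:
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    return loss
```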
arXiv Detail & Related papers (2020-10-22T06:51:24Z)