Exploring Adversarially Robust Training for Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2202.09300v1
- Date: Fri, 18 Feb 2022 17:05:19 GMT
- Title: Exploring Adversarially Robust Training for Unsupervised Domain
Adaptation
- Authors: Shao-Yuan Lo and Vishal M. Patel
- Abstract summary: Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain.
This paper explores how to enhance robustness on unlabeled data via Adversarial Training (AT) while learning domain-invariant features for UDA.
We propose a novel Adversarially Robust Training method for UDA accordingly, referred to as ARTUDA.
- Score: 71.94264837503135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a
labeled source domain to an unlabeled target domain. UDA has been extensively
studied in the computer vision literature. Deep networks have been shown to be
vulnerable to adversarial attacks. However, little attention has been devoted to
improving the adversarial robustness of deep UDA models, raising serious
concerns about model reliability. Adversarial Training (AT) is widely considered
the most successful adversarial defense approach. Nevertheless,
conventional AT requires ground-truth labels to generate adversarial examples
and train models, which limits its effectiveness in the unlabeled target
domain. In this paper, we aim to explore AT to robustify UDA models: How to
enhance the unlabeled data robustness via AT while learning domain-invariant
features for UDA? To answer this, we provide a systematic study into multiple
AT variants that potentially apply to UDA. Moreover, we propose a novel
Adversarially Robust Training method for UDA accordingly, referred to as
ARTUDA. Extensive experiments on multiple attacks and benchmarks show that
ARTUDA consistently improves the adversarial robustness of UDA models.
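To make the label dependence concrete: conventional AT crafts each adversarial example from the ground-truth label, which is unavailable in the target domain. The PyTorch sketch below contrasts a standard PGD training step with a label-free variant that substitutes the model's own predictions as pseudo-labels. The attack hyperparameters and the pseudo-labeling scheme are illustrative assumptions, not ARTUDA's actual configuration, which is specified in the paper.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Craft PGD adversarial examples for inputs x against labels y."""
    x_adv = x + torch.empty_like(x).uniform_(-eps, eps)  # random start
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()  # gradient ascent step
        x_adv = x + (x_adv - x).clamp(-eps, eps)      # project into the eps-ball
        x_adv = x_adv.clamp(0, 1)                     # keep a valid image range
    return x_adv.detach()

def at_step_labeled(model, x_src, y_src):
    """Conventional AT: requires ground-truth labels (source domain)."""
    x_adv = pgd_attack(model, x_src, y_src)
    return F.cross_entropy(model(x_adv), y_src)

def at_step_unlabeled(model, x_tgt):
    """Label-free AT: use the model's own predictions as pseudo-labels,
    one simple way to run AT on the unlabeled target domain."""
    with torch.no_grad():
        y_pseudo = model(x_tgt).argmax(dim=1)
    x_adv = pgd_attack(model, x_tgt, y_pseudo)
    return F.cross_entropy(model(x_adv), y_pseudo)
```

Pseudo-labeling is only one way to remove the label dependence; the paper's systematic study covers multiple AT variants that potentially apply to UDA.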
Related papers
- Towards Trustworthy Unsupervised Domain Adaptation: A Representation Learning Perspective for Enhancing Robustness, Discrimination, and Generalization [31.176062426569068]
Robust Unsupervised Domain Adaptation (RoUDA) aims to achieve not only clean but also robust cross-domain knowledge transfer.
We design a novel algorithm based on mutual information theory, dubbed MIRoUDA.
Our method surpasses state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2024-06-19T03:19:34Z)
- Can We Evaluate Domain Adaptation Models Without Target-Domain Labels? [36.05871459064825]
Unsupervised domain adaptation (UDA) involves adapting a model trained on a label-rich source domain to an unlabeled target domain.
In real-world scenarios, the absence of target-domain labels makes it challenging to evaluate the performance of UDA models.
We propose a novel metric called the Transfer Score to address these issues.
arXiv Detail & Related papers (2023-05-30T03:36:40Z)
- AVATAR: Adversarial self-superVised domain Adaptation network for TARget domain [11.764601181046496]
This paper presents an unsupervised domain adaptation (UDA) method for predicting unlabeled target domain data.
We propose the Adversarial self-superVised domain Adaptation network for the TARget domain (AVATAR) algorithm.
Our proposed model significantly outperforms state-of-the-art methods on three UDA benchmarks.
arXiv Detail & Related papers (2023-04-28T20:31:56Z)
- Adversarial Robustness for Unsupervised Domain Adaptation [48.51898925429575]
In this work, we leverage intermediate representations learned by multiple robust ImageNet models to improve the robustness of UDA models.
Our method works by aligning the features of the UDA model with the robust features learned by ImageNet pre-trained models, alongside the domain adaptation training (a simplified sketch follows).
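A minimal sketch of this alignment idea, assuming a frozen adversarially pre-trained model whose backbone exposes intermediate features; the `backbone` attribute, the MSE distance, and the weight `lam` are illustrative assumptions rather than the paper's exact formulation:

```python
import torch
import torch.nn.functional as F

def align_to_robust_features(uda_model, robust_model, x, task_loss, lam=1.0):
    """Penalize the distance between the UDA model's features and those of a
    frozen robust ImageNet model; added on top of the usual UDA losses."""
    feat_uda = uda_model.backbone(x)          # assumed feature extractor
    with torch.no_grad():
        feat_rob = robust_model.backbone(x)   # frozen, adversarially pre-trained
    return task_loss + lam * F.mse_loss(feat_uda, feat_rob)
```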
arXiv Detail & Related papers (2021-09-02T13:45:01Z)
- Understanding the Limits of Unsupervised Domain Adaptation via Data Poisoning [66.80663779176979]
Unsupervised domain adaptation (UDA) enables cross-domain learning without target domain labels.
We show that minimizing source-domain error and marginal distribution mismatch is insufficient to guarantee a reduction in target-domain error.
Motivated by this, we propose novel data poisoning attacks that fool UDA methods into learning representations that produce large target-domain errors.
arXiv Detail & Related papers (2021-07-08T15:51:14Z)
- Exploring Robustness of Unsupervised Domain Adaptation in Semantic Segmentation [74.05906222376608]
We propose adversarial self-supervision UDA (ASSUDA), which maximizes the agreement between clean images and their adversarial examples via a contrastive loss in the output space (a simplified sketch follows).
This paper is rooted in two observations: (i) the robustness of UDA methods in semantic segmentation remains unexplored, which poses a security concern in this field; and (ii) although commonly used self-supervision (e.g., rotation and jigsaw) benefits image tasks such as classification and recognition, it fails to provide the critical supervision signals needed to learn discriminative representations for segmentation tasks.
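A simplified sketch of the agreement objective, assuming a segmentation model that returns per-pixel logits; the paper uses a contrastive loss in the output space, while plain KL consistency is shown here to convey the idea:

```python
import torch
import torch.nn.functional as F

def output_agreement_loss(model, x_clean, x_adv, T=1.0):
    """Encourage agreement between outputs for a clean image and its
    adversarial example. KL consistency stands in for the paper's
    contrastive loss for brevity."""
    logits_clean = model(x_clean).detach()      # stop-gradient on the clean branch
    logits_adv = model(x_adv)
    p_clean = F.softmax(logits_clean / T, dim=1)
    log_p_adv = F.log_softmax(logits_adv / T, dim=1)
    # KL(p_clean || p_adv), summed over classes, averaged over the batch
    return F.kl_div(log_p_adv, p_clean, reduction="batchmean")
```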
arXiv Detail & Related papers (2021-05-23T01:50:44Z)
- Consistency Regularization for Adversarial Robustness [88.65786118562005]
Adversarial training (AT) is one of the most successful methods for improving the adversarial robustness of deep neural networks.
However, a significant generalization gap in the robustness obtained from AT has been problematic.
In this paper, we investigate data augmentation techniques to address this issue (a simplified sketch follows).
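A minimal sketch of consistency regularization for AT, assuming an `attack` function that crafts adversarial examples (e.g., the PGD routine sketched earlier); the two augmented views and the Jensen-Shannon-style penalty are illustrative choices, not the paper's exact objective:

```python
import torch.nn.functional as F

def adv_consistency_loss(model, view1, view2, y, attack):
    """Push together the predictive distributions on adversarial examples
    crafted from two augmentations (view1, view2) of the same images."""
    p1 = F.softmax(model(attack(model, view1, y)), dim=1)
    p2 = F.softmax(model(attack(model, view2, y)), dim=1)
    m = ((p1 + p2) / 2).clamp_min(1e-8)
    kl1 = (p1 * (p1.clamp_min(1e-8).log() - m.log())).sum(dim=1).mean()
    kl2 = (p2 * (p2.clamp_min(1e-8).log() - m.log())).sum(dim=1).mean()
    return 0.5 * (kl1 + kl2)  # Jensen-Shannon-style divergence
```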
arXiv Detail & Related papers (2021-03-08T09:21:41Z)
- Robustified Domain Adaptation [13.14535125302501]
Unsupervised domain adaptation (UDA) is widely used to transfer knowledge from a labeled source domain to an unlabeled target domain.
The inevitable domain distribution deviation in UDA is a critical barrier to model robustness on the target domain.
We propose a novel Class-consistent Unsupervised Domain Adaptation (CURDA) framework for training robust UDA models.
arXiv Detail & Related papers (2020-11-18T22:21:54Z)
- Adversarial Distributional Training for Robust Deep Learning [53.300984501078126]
Adversarial training (AT) is among the most effective techniques to improve model robustness by augmenting training data with adversarial examples.
Most existing AT methods adopt a specific attack to craft adversarial examples, leading to unreliable robustness against unseen attacks.
In this paper, we introduce adversarial distributional training (ADT), a novel framework for learning robust models (a simplified sketch follows this entry).
arXiv Detail & Related papers (2020-02-14T12:36:59Z)
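A heavily simplified sketch of the distributional idea behind ADT: instead of a single attack, fit a per-example distribution of perturbations (here a reparameterized Gaussian squashed into the eps-ball) that maximizes expected loss plus an entropy bonus, then train on samples drawn from it. The Gaussian family, optimizer, and all hyperparameters are illustrative assumptions, not the paper's exact objective:

```python
import torch
import torch.nn.functional as F

def adt_sample(model, x, y, eps=8/255, steps=7, lr=0.1, lam=0.01):
    """Inner maximization of a simplified ADT step: learn the mean and scale
    of a perturbation distribution by stochastic gradient ascent, then return
    an adversarial sample drawn from it for the outer training step."""
    mu = torch.zeros_like(x, requires_grad=True)
    log_sigma = torch.full_like(x, -3.0).requires_grad_(True)
    opt = torch.optim.Adam([mu, log_sigma], lr=lr)
    for _ in range(steps):
        z = torch.randn_like(x)
        delta = eps * torch.tanh(mu + log_sigma.exp() * z)  # reparameterized sample
        loss = F.cross_entropy(model((x + delta).clamp(0, 1)), y)
        entropy = log_sigma.sum()   # Gaussian entropy up to additive constants
        opt.zero_grad()
        (-(loss + lam * entropy)).backward()                # ascend loss + entropy
        opt.step()
    # Note: the outer step should zero the model's accumulated gradients
    # before its own backward pass, as in any standard training loop.
    with torch.no_grad():
        delta = eps * torch.tanh(mu + log_sigma.exp() * torch.randn_like(x))
    return (x + delta).clamp(0, 1)
```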