Unsupervised Robust Domain Adaptation: Paradigm, Theory and Algorithm
- URL: http://arxiv.org/abs/2511.11009v1
- Date: Fri, 14 Nov 2025 06:54:06 GMT
- Title: Unsupervised Robust Domain Adaptation: Paradigm, Theory and Algorithm
- Authors: Fuxiang Huang, Xiaowei Fu, Shiyu Ye, Lina Ma, Wen Li, Xinbo Gao, David Zhang, Lei Zhang,
- Abstract summary: Unsupervised domain adaptation (UDA) aims to transfer knowledge from a label-rich source domain to an unlabeled target domain by addressing domain shifts. Most UDA approaches emphasize transfer ability but often overlook robustness against adversarial attacks. This paper focuses on answering three key questions: 1) Why does VAT, known for its defensive effectiveness, fail in the UDA paradigm? 2) What is the generalization bound theory under attacks and how does it evolve from classical UDA theory?
- Score: 58.350254095190365
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) aims to transfer knowledge from a label-rich source domain to an unlabeled target domain by addressing domain shifts. Most UDA approaches emphasize transfer ability but often overlook robustness against adversarial attacks. Although vanilla adversarial training (VAT) improves the robustness of deep neural networks, it has little effect on UDA. This paper focuses on answering three key questions: 1) Why does VAT, known for its defensive effectiveness, fail in the UDA paradigm? 2) What is the generalization bound theory under attacks and how does it evolve from classical UDA theory? 3) How can we implement a robustification training procedure without complex modifications? Specifically, we explore and reveal the inherent entanglement challenge in the general UDA+VAT paradigm, and propose an unsupervised robust domain adaptation (URDA) paradigm. We further derive the generalization bound theory of the URDA paradigm so that it can resist adversarial noise and domain shift. To the best of our knowledge, this is the first work to establish the URDA paradigm and theory. We further introduce a simple, novel yet effective URDA algorithm called Disentangled Adversarial Robustness Training (DART), a two-step training procedure that ensures both transferability and robustness. DART first pre-trains an arbitrary UDA model, and then applies an instantaneous robustification post-training step via disentangled distillation. Experiments on four benchmark datasets with and without attacks show that DART effectively enhances robustness while maintaining domain adaptability, validating the URDA paradigm and theory.
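The two-step recipe described in the abstract (pre-train an arbitrary UDA model, then robustify it by distillation) can be illustrated with a toy stand-in. The sketch below is an assumption-laden illustration, not the authors' implementation: a plain logistic-regression "teacher" plays the role of the pre-trained UDA model, and a student is post-trained on FGSM-perturbed inputs to match the teacher's predictions on the corresponding clean inputs.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy two-class data standing in for the (unlabeled) target domain.
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

# Step 1: "pre-train" an arbitrary model (plain logistic regression here,
# standing in for any UDA model).
w_t = np.zeros(2)
for _ in range(200):
    p = sigmoid(X @ w_t)
    w_t -= 0.1 * X.T @ (p - y) / len(y)

# Step 2: robustification post-training via distillation. The teacher is
# frozen; the student sees FGSM-perturbed inputs but is pushed toward the
# teacher's predictions on the clean inputs.
eps = 0.3
w_s = w_t.copy()
teacher_p = sigmoid(X @ w_t)  # soft targets from the frozen teacher
for _ in range(200):
    p = sigmoid(X @ w_s)
    # FGSM: gradient of the distillation loss w.r.t. the input, sign step.
    grad_x = (p - teacher_p)[:, None] * w_s[None, :]
    X_adv = X + eps * np.sign(grad_x)
    p_adv = sigmoid(X_adv @ w_s)
    w_s -= 0.1 * X_adv.T @ (p_adv - teacher_p) / len(y)

clean_acc = np.mean((sigmoid(X @ w_s) > 0.5) == y)
print(clean_acc)
```

The point of the sketch is the separation of concerns: step 1 is free to use any off-the-shelf UDA method, and step 2 touches only the student, so no modification of the UDA training pipeline itself is needed.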
Related papers
- Towards Trustworthy Unsupervised Domain Adaptation: A Representation Learning Perspective for Enhancing Robustness, Discrimination, and Generalization [31.176062426569068]
Robust Unsupervised Domain Adaptation (RoUDA) aims to achieve not only clean but also robust cross-domain knowledge transfer.
We design a novel algorithm by utilizing the mutual information theory, dubbed MIRoUDA.
Our method surpasses the state of the art by a large margin.
arXiv Detail & Related papers (2024-06-19T03:19:34Z)
- DART: A Principled Approach to Adversarially Robust Unsupervised Domain Adaptation [27.574908007114015]
We develop a novel unified defense framework called Divergence Aware adveRsarial Training (DART)
DART is applicable to general threat models, including the popular $\ell_p$-norm model.
We also release DomainRobust: a testbed for evaluating robustness of UDA models to adversarial attacks.
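Under an $\ell_p$-norm threat model, adversarial training typically constrains each perturbation by projecting it back onto an $\ell_p$ ball of radius $\epsilon$. A minimal sketch of that projection step for the two most common cases ($p = \infty$ and $p = 2$) — illustrative, not the released DART code:

```python
import numpy as np

def project_lp(delta, eps, p):
    """Project a perturbation delta onto the l_p ball of radius eps.

    Supports the two cases most used in practice: p = inf (coordinate-wise
    clipping) and p = 2 (radial rescaling when the norm exceeds eps).
    """
    if p == np.inf:
        return np.clip(delta, -eps, eps)
    norm = np.linalg.norm(delta.ravel(), ord=p)
    if norm <= eps:
        return delta  # already inside the ball
    return delta * (eps / norm)

d = np.array([3.0, -4.0])
print(project_lp(d, 1.0, np.inf))  # each coordinate clipped to [-1, 1]
print(project_lp(d, 1.0, 2))       # rescaled onto the unit l2 sphere
```

PGD-style attacks alternate a gradient step with exactly this projection, which is why the threat model reduces to the choice of $p$ and $\epsilon$.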
arXiv Detail & Related papers (2024-02-16T22:48:38Z)
- How Useful is Continued Pre-Training for Generative Unsupervised Domain Adaptation? [23.454153602068786]
We evaluate the utility of Continued Pre-Training (CPT) for generative UDA. Our findings suggest that CPT implicitly learns the downstream task while predicting masked words informative to that task.
arXiv Detail & Related papers (2024-01-31T00:15:34Z)
- Make the U in UDA Matter: Invariant Consistency Learning for Unsupervised Domain Adaptation [86.61336696914447]
We propose to make the U in Unsupervised DA matter by giving equal status to the two domains.
We dub our approach "Invariant CONsistency learning" (ICON).
ICON achieves the state-of-the-art performance on the classic UDA benchmarks: Office-Home and VisDA-2017, and outperforms all the conventional methods on the challenging WILDS 2.0 benchmark.
arXiv Detail & Related papers (2023-09-22T09:43:32Z)
- Prior Knowledge Guided Unsupervised Domain Adaptation [82.9977759320565]
We propose a Knowledge-guided Unsupervised Domain Adaptation (KUDA) setting where prior knowledge about the target class distribution is available.
In particular, we consider two specific types of prior knowledge about the class distribution in the target domain: Unary Bound and Binary Relationship.
We propose a rectification module that uses such prior knowledge to refine model generated pseudo labels.
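For the Unary Bound case, the rectification idea can be sketched as follows: if prior knowledge caps how many target samples a class may receive, excess low-confidence pseudo labels are reassigned. The function below is an illustrative assumption about how such a module could work, not KUDA's actual implementation:

```python
import numpy as np

def rectify_pseudo_labels(probs, class_id, max_count):
    """Illustrative unary-bound rectification: if more than `max_count`
    samples are pseudo-labeled as `class_id`, keep only the most confident
    ones and reassign the rest to their second-best class."""
    labels = probs.argmax(axis=1)
    idx = np.where(labels == class_id)[0]
    if len(idx) <= max_count:
        return labels  # prior bound already satisfied
    # Order offending samples by confidence in class_id, least confident first.
    order = idx[np.argsort(probs[idx, class_id])]
    for i in order[: len(idx) - max_count]:
        runner_up = np.argsort(probs[i])[-2]  # second-highest probability class
        labels[i] = runner_up
    return labels

probs = np.array([[0.9, 0.1, 0.0],
                  [0.6, 0.3, 0.1],
                  [0.5, 0.1, 0.4],
                  [0.2, 0.7, 0.1]])
# Prior: at most 2 target samples may belong to class 0.
print(rectify_pseudo_labels(probs, class_id=0, max_count=2))
```

The least confident class-0 prediction (row 2) is demoted to its runner-up class, so the corrected pseudo labels respect the unary bound.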
arXiv Detail & Related papers (2022-07-18T18:41:36Z)
- Exploring Adversarially Robust Training for Unsupervised Domain Adaptation [71.94264837503135]
Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain.
This paper explores how to enhance the unlabeled data robustness via AT while learning domain-invariant features for UDA.
We propose a novel Adversarially Robust Training method for UDA accordingly, referred to as ARTUDA.
arXiv Detail & Related papers (2022-02-18T17:05:19Z)
- Adversarial Robustness for Unsupervised Domain Adaptation [48.51898925429575]
In this work, we leverage intermediate representations learned by multiple robust ImageNet models to improve the robustness of UDA models.
Our method works by aligning the features of the UDA model with the robust features learned by ImageNet pre-trained models along with domain adaptation training.
arXiv Detail & Related papers (2021-09-02T13:45:01Z)
- A New Bidirectional Unsupervised Domain Adaptation Segmentation Framework [27.13101555533594]
Unsupervised domain adaptation (UDA) techniques are proposed to bridge the gap between different domains.
In this paper, we propose a bidirectional UDA framework based on disentangled representation learning for equally competent two-way UDA performances.
arXiv Detail & Related papers (2021-08-18T05:25:11Z)
- Consistency Regularization for Adversarial Robustness [88.65786118562005]
Adversarial training (AT) is one of the most successful methods for obtaining adversarial robustness in deep neural networks.
However, a significant generalization gap in the robustness obtained from AT remains problematic.
In this paper, we investigate data augmentation techniques to address the issue.
arXiv Detail & Related papers (2021-03-08T09:21:41Z)
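A consistency regularizer of the kind this line of work studies can be sketched as a symmetric KL divergence between a model's predictions on two augmented views of the same batch; the exact loss used in the paper may differ, and all names below are illustrative:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def consistency_loss(logits_a, logits_b):
    """Symmetric KL between predictions on two augmented views. Minimizing
    this term pushes the model toward identical outputs on both views,
    which is the core of consistency regularization."""
    p, q = softmax(logits_a), softmax(logits_b)
    kl = lambda a, b: np.sum(a * (np.log(a) - np.log(b)), axis=-1)
    return np.mean(kl(p, q) + kl(q, p))

# Illustrative logits for two views of the same (possibly adversarially
# perturbed) two-sample batch.
za = np.array([[2.0, 0.5], [0.1, 1.5]])
zb = np.array([[1.8, 0.6], [0.3, 1.4]])
print(consistency_loss(za, zb))
```

In practice this term is added to the usual classification (or adversarial-training) loss with a weighting coefficient; identical views incur zero penalty.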
This list is automatically generated from the titles and abstracts of the papers in this site.