SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2212.05917v1
- Date: Mon, 12 Dec 2022 14:25:40 GMT
- Title: SRoUDA: Meta Self-training for Robust Unsupervised Domain Adaptation
- Authors: Wanqing Zhu, Jia-Li Yin, Bo-Hao Chen, Ximeng Liu
- Abstract summary: Unsupervised domain adaptation (UDA) can transfer knowledge learned from a rich-label dataset to an unlabeled target dataset.
In this paper, we present a new meta self-training pipeline, named SRoUDA, for improving adversarial robustness of UDA models.
- Score: 25.939292305808934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As acquiring manual labels on data could be costly, unsupervised domain
adaptation (UDA), which transfers knowledge learned from a rich-label dataset
to the unlabeled target dataset, is gaining increasing popularity. While
extensive studies have been devoted to improving model accuracy on the target
domain, the important issue of model robustness has been neglected. To make matters
worse, conventional adversarial training (AT) methods for improving model
robustness are inapplicable in the UDA scenario, since they train models on
adversarial examples generated by a supervised loss function. In this
paper, we present a new meta self-training pipeline, named SRoUDA, for
improving adversarial robustness of UDA models. Based on self-training
paradigm, SRoUDA starts by pre-training a source model with a UDA baseline on
labeled source data and unlabeled target data using a newly developed
random masked augmentation (RMA), and then alternates between adversarially
training the target model on pseudo-labeled target data and finetuning the source
model via a meta step. While self-training allows the direct incorporation of AT into
UDA, the meta step in SRoUDA further helps in mitigating error propagation from
noisy pseudo labels. Extensive experiments on various benchmark datasets
demonstrate the state-of-the-art performance of SRoUDA where it achieves
significant model robustness improvement without harming clean accuracy. Code
is available at https://github.com/Vision.
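The pipeline described in the abstract can be sketched compactly. The snippet below is a minimal, illustrative PyTorch sketch of two pieces of that description: a random-patch masking stand-in for RMA, and one adversarial self-training step where the source model supplies pseudo labels and the target model is trained on PGD examples generated from them. All names (pgd_attack, random_masked_augmentation, adversarial_self_training_step) and hyperparameters are assumptions for illustration, not the authors' released code; the meta step that finetunes the source model is only indicated in a comment.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Standard PGD on a cross-entropy loss; y are pseudo labels here, which is
    # what makes AT applicable without target-domain ground truth.
    # Assumes inputs are images in [0, 1].
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv + alpha * grad.sign()
        x_adv = x + (x_adv - x).clamp(-eps, eps)
        x_adv = x_adv.clamp(0, 1)
    return x_adv.detach()

def random_masked_augmentation(x, patch=8, mask_ratio=0.3):
    # Illustrative stand-in for the RMA mentioned in the abstract: zero out a
    # random subset of non-overlapping patches. Assumes H and W divide by patch.
    b, c, h, w = x.shape
    gh, gw = h // patch, w // patch
    keep = (torch.rand(b, 1, gh, gw, device=x.device) > mask_ratio).float()
    mask = keep.repeat_interleave(patch, dim=2).repeat_interleave(patch, dim=3)
    return x * mask

def adversarial_self_training_step(source_model, target_model, opt_target, x_t):
    # One step of the alternation: the UDA pre-trained source model produces
    # pseudo labels for unlabeled target data, and the target model is
    # adversarially trained on them. The meta step that finetunes the source
    # model based on the resulting target model (a bi-level optimization) is
    # omitted from this sketch.
    source_model.eval()
    target_model.train()
    with torch.no_grad():
        pseudo = source_model(x_t).argmax(dim=1)
    x_adv = pgd_attack(target_model, x_t, pseudo)
    loss = F.cross_entropy(target_model(x_adv), pseudo)
    opt_target.zero_grad()
    loss.backward()
    opt_target.step()
    return loss.item()
```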
Related papers
- PUMA: margin-based data pruning [51.12154122266251]
We focus on data pruning, where some training samples are removed based on the distance to the model classification boundary (i.e., margin).
We propose PUMA, a new data pruning strategy that computes the margin using DeepFool.
We show that PUMA can be used on top of the current state-of-the-art methodology in robustness, and it is able to significantly improve the model performance unlike the existing data pruning strategies.
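As a rough illustration of the idea summarized above, the following sketch estimates each sample's margin with a single DeepFool-style linearization step and prunes a fraction of the dataset by that ranking. The function names, the one-step approximation, and the choice of which end of the ranking to drop are all assumptions made for this sketch, not details from the paper.

```python
import torch
from torch.utils.data import Subset

def deepfool_margin(model, x, num_classes):
    # One linearization step of DeepFool: distance to the nearest class
    # boundary of the locally linearized model,
    # min_k |f_k(x) - f_c(x)| / ||grad f_k(x) - grad f_c(x)||.
    x = x.clone().detach().requires_grad_(True)
    logits = model(x.unsqueeze(0)).squeeze(0)
    c = int(logits.argmax())
    grad_c = torch.autograd.grad(logits[c], x, retain_graph=True)[0]
    best = float("inf")
    for k in range(num_classes):
        if k == c:
            continue
        grad_k = torch.autograd.grad(logits[k], x, retain_graph=True)[0]
        w_norm = (grad_k - grad_c).flatten().norm()
        margin = (logits[k] - logits[c]).abs() / (w_norm + 1e-12)
        best = min(best, margin.item())
    return best

def prune_by_margin(model, dataset, num_classes, prune_frac=0.1, drop="low"):
    # Rank training samples by estimated margin and drop a fraction of them.
    # Which end of the ranking is pruned is not stated in the summary above,
    # so it is exposed as the `drop` parameter here.
    model.eval()
    margins = [deepfool_margin(model, dataset[i][0], num_classes)
               for i in range(len(dataset))]
    order = sorted(range(len(dataset)), key=lambda i: margins[i])  # ascending
    n_drop = int(prune_frac * len(dataset))
    dropped = set(order[:n_drop] if drop == "low" else order[-n_drop:])
    kept = [i for i in range(len(dataset)) if i not in dropped]
    return Subset(dataset, kept)
```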
arXiv Detail & Related papers (2024-05-10T08:02:20Z) - Latent Code Augmentation Based on Stable Diffusion for Data-free Substitute Attacks [47.84143701817491]
Since the training data of the target model is not available in the black-box substitute attack, most recent schemes utilize GANs to generate data for training the substitute model.
We propose a novel data-free substitute attack scheme based on the Stable Diffusion (SD) to improve the efficiency and accuracy of substitute training.
arXiv Detail & Related papers (2023-07-24T15:10:22Z) - Can We Evaluate Domain Adaptation Models Without Target-Domain Labels? [36.05871459064825]
Unsupervised domain adaptation (UDA) involves adapting a model trained on a label-rich source domain to an unlabeled target domain.
In real-world scenarios, the absence of target-domain labels makes it challenging to evaluate the performance of UDA models.
We propose a novel metric called the Transfer Score to address these issues.
arXiv Detail & Related papers (2023-05-30T03:36:40Z) - Confidence Attention and Generalization Enhanced Distillation for Continuous Video Domain Adaptation [62.458968086881555]
Continuous Video Domain Adaptation (CVDA) is a scenario where a source model is required to adapt to a series of individually available changing target domains.
We propose a Confidence-Attentive network with geneRalization enhanced self-knowledge disTillation (CART) to address the challenge in CVDA.
arXiv Detail & Related papers (2023-03-18T16:40:10Z) - MAPS: A Noise-Robust Progressive Learning Approach for Source-Free Domain Adaptive Keypoint Detection [76.97324120775475]
Cross-domain keypoint detection methods always require accessing the source data during adaptation.
This paper considers source-free domain adaptive keypoint detection, where only the well-trained source model is provided to the target domain.
arXiv Detail & Related papers (2023-02-09T12:06:08Z) - Robust Target Training for Multi-Source Domain Adaptation [110.77704026569499]
We propose a novel Bi-level Optimization based Robust Target Training (BORT$^2$) method for MSDA.
Our proposed method achieves the state of the art performance on three MSDA benchmarks, including the large-scale DomainNet dataset.
arXiv Detail & Related papers (2022-10-04T15:20:01Z) - Back to the Source: Diffusion-Driven Test-Time Adaptation [77.4229736436935]
Test-time adaptation harnesses test inputs to improve accuracy of a model trained on source data when tested on shifted target data.
We instead update the target data, by projecting all test inputs toward the source domain with a generative diffusion model.
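A compact way to picture this projection: partially diffuse the shifted test input with the forward process, then denoise it with a diffusion model trained only on source data, and finally classify the result with the unchanged source classifier. The sketch below assumes a standard DDPM noise schedule and a hypothetical eps_model(x_t, t) noise-prediction network; the timestep t0 and schedule values are illustrative choices, not the paper's settings.

```python
import torch

@torch.no_grad()
def project_to_source(x, eps_model, T=1000, t0=300):
    # Diffusion-driven projection of a batch of test inputs toward the source
    # domain: forward-diffuse to an intermediate step t0, then run the reverse
    # (ancestral) DDPM process with a source-trained noise predictor.
    betas = torch.linspace(1e-4, 0.02, T, device=x.device)
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)

    # Forward diffusion up to step t0.
    noise = torch.randn_like(x)
    x_t = alpha_bar[t0].sqrt() * x + (1 - alpha_bar[t0]).sqrt() * noise

    # Reverse sampling from t0 back to 0.
    for t in range(t0, -1, -1):
        t_batch = torch.full((x.shape[0],), t, device=x.device)
        eps = eps_model(x_t, t_batch)
        mean = (x_t - betas[t] / (1 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        if t > 0:
            x_t = mean + betas[t].sqrt() * torch.randn_like(x_t)
        else:
            x_t = mean
    return x_t  # classify this projected input with the fixed source model
```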
arXiv Detail & Related papers (2022-07-07T17:14:10Z) - Distill and Fine-tune: Effective Adaptation from a Black-box Source Model [138.12678159620248]
Unsupervised domain adaptation (UDA) aims to transfer knowledge in previous related labeled datasets (source) to a new unlabeled dataset (target).
We propose a novel two-step adaptation framework called Distill and Fine-tune (Dis-tune)
arXiv Detail & Related papers (2021-04-04T05:29:05Z) - Robustified Domain Adaptation [13.14535125302501]
Unsupervised domain adaptation (UDA) is widely used to transfer knowledge from a labeled source domain to an unlabeled target domain.
The inevitable domain distribution deviation in UDA is a critical barrier to model robustness on the target domain.
We propose a novel Class-consistent Unsupervised Domain Adaptation (CURDA) framework for training robust UDA models.
arXiv Detail & Related papers (2020-11-18T22:21:54Z)