Robustified Domain Adaptation
- URL: http://arxiv.org/abs/2011.09563v2
- Date: Wed, 24 Mar 2021 21:18:25 GMT
- Title: Robustified Domain Adaptation
- Authors: Jiajin Zhang, Hanqing Chao, Pingkun Yan
- Abstract summary: Unsupervised domain adaptation (UDA) is widely used to transfer knowledge from a labeled source domain to an unlabeled target domain.
The inevitable domain distribution deviation in UDA is a critical barrier to model robustness on the target domain.
We propose a novel Class-consistent Unsupervised Robust Domain Adaptation (CURDA) framework for training robust UDA models.
- Score: 13.14535125302501
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) is widely used to transfer knowledge
from a labeled source domain to an unlabeled target domain with different data
distribution. While extensive studies have attested that deep learning models are
vulnerable to adversarial attacks, the adversarial robustness of models in
domain adaptation applications has largely been overlooked. This paper points
out that the inevitable domain distribution deviation in UDA is a critical
barrier to model robustness on the target domain. To address the problem, we
propose a novel Class-consistent Unsupervised Robust Domain Adaptation (CURDA)
framework for training robust UDA models. With the introduced contrastive
robust training and source anchored adversarial contrastive losses, our
proposed CURDA framework can effectively robustify UDA models by simultaneously
minimizing the data distribution deviation and the distance between target
domain clean-adversarial pairs without creating classification confusion.
Experiments on several public benchmarks show that CURDA can significantly
improve model robustness in the target domain at only a minor cost in accuracy
on clean samples.
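As a rough illustration only, the PyTorch sketch below combines the two ingredients the abstract names: each adversarial target feature is pulled toward its clean counterpart and toward a per-class source anchor selected by a pseudo-label, with all source anchors serving as negatives. The function name, InfoNCE-style form, and temperature are assumptions for illustration, not the paper's actual loss definitions.

```python
# Hypothetical sketch of a source-anchored adversarial contrastive loss in the
# spirit of CURDA. All names, the temperature, and the exact loss form are
# assumptions; see the paper for the actual formulation.
import torch
import torch.nn.functional as F

def source_anchored_contrastive_loss(z_adv, z_clean, source_anchors,
                                     pseudo_labels, tau=0.1):
    """Pull each adversarial target feature toward (a) its clean counterpart
    and (b) the source class anchor chosen by its pseudo-label, while pushing
    it away from the anchors of all other classes.

    z_adv:          (N, D) features of adversarial target samples
    z_clean:        (N, D) features of the paired clean target samples
    source_anchors: (C, D) per-class mean features from the labeled source domain
    pseudo_labels:  (N,)   pseudo-labels assigned to the target samples
    """
    z_adv = F.normalize(z_adv, dim=1)
    z_clean = F.normalize(z_clean, dim=1)
    anchors = F.normalize(source_anchors, dim=1)

    # Positive similarities: the clean-adversarial pair and the matching anchor.
    pos_pair = (z_adv * z_clean).sum(dim=1) / tau                   # (N,)
    pos_anchor = (z_adv * anchors[pseudo_labels]).sum(dim=1) / tau  # (N,)

    # Negatives: anchors of all classes, in an InfoNCE-style denominator.
    logits_all = z_adv @ anchors.t() / tau                          # (N, C)
    log_denom = torch.logsumexp(logits_all, dim=1)

    loss = -(pos_pair + pos_anchor) + 2.0 * log_denom
    return loss.mean()

# Usage with random tensors, just to show the expected shapes:
z_adv, z_clean = torch.randn(8, 128), torch.randn(8, 128)
anchors, labels = torch.randn(10, 128), torch.randint(0, 10, (8,))
print(source_anchored_contrastive_loss(z_adv, z_clean, anchors, labels))
```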
Related papers
- Unveiling the Superior Paradigm: A Comparative Study of Source-Free Domain Adaptation and Unsupervised Domain Adaptation [52.36436121884317]
We show that Source-Free Domain Adaptation (SFDA) generally outperforms Unsupervised Domain Adaptation (UDA) in real-world scenarios.
SFDA offers advantages in time efficiency, storage requirements, targeted learning objectives, reduced risk of negative transfer, and increased robustness against overfitting.
We propose a novel weight estimation method that effectively integrates available source data into multi-SFDA approaches.
arXiv Detail & Related papers (2024-11-24T13:49:29Z) - DACAD: Domain Adaptation Contrastive Learning for Anomaly Detection in Multivariate Time Series [25.434379659643707]
In time series anomaly detection, the scarcity of labeled data poses a challenge to the development of accurate models.
We propose DACAD, a novel domain adaptation contrastive learning model for anomaly detection in time series.
Our model employs supervised contrastive loss for the source domain and self-supervised contrastive triplet loss for the target domain.
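For illustration, here is a minimal PyTorch sketch of a self-supervised triplet loss of the kind the summary describes for the target domain. How DACAD actually constructs positives (e.g. augmented views) and negatives (e.g. anomaly-injected windows) follows the paper; both are stand-ins here.

```python
# Hypothetical sketch of a self-supervised triplet loss on unlabeled
# target-domain windows, in the spirit of DACAD.
import torch
import torch.nn.functional as F

def target_triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: keep the anchor closer to its positive view than
    to the negative view by at least `margin`.

    anchor/positive/negative: (N, D) embeddings of time-series windows, where
    the positive might be an augmented copy of the anchor and the negative an
    anomaly-injected copy.
    """
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()
```

PyTorch's built-in `torch.nn.TripletMarginLoss` implements the same objective; the explicit version above just makes the margin structure visible.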
arXiv Detail & Related papers (2024-04-17T11:20:14Z) - Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying target-domain classes unseen in the source domain during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z) - AVATAR: Adversarial self-superVised domain Adaptation network for TARget domain [11.764601181046496]
This paper presents an unsupervised domain adaptation (UDA) method for predicting unlabeled target domain data.
We propose the Adversarial self-superVised domain Adaptation network for the TARget domain (AVATAR) algorithm.
Our proposed model significantly outperforms state-of-the-art methods on three UDA benchmarks.
arXiv Detail & Related papers (2023-04-28T20:31:56Z) - Distributionally Robust Domain Adaptation [12.02023514105999]
Domain Adaptation (DA) has recently received significant attention due to its potential to adapt a learning model across source and target domains with mismatched distributions.
In this paper, we propose DRDA, a distributionally robust domain adaptation method.
arXiv Detail & Related papers (2022-10-30T17:29:22Z) - Exploring Adversarially Robust Training for Unsupervised Domain Adaptation [71.94264837503135]
Unsupervised Domain Adaptation (UDA) methods aim to transfer knowledge from a labeled source domain to an unlabeled target domain.
This paper explores how to enhance robustness on unlabeled data via adversarial training (AT) while learning domain-invariant features for UDA.
We propose a novel Adversarially Robust Training method for UDA accordingly, referred to as ARTUDA.
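ARTUDA's concrete losses are defined in the paper; the sketch below only illustrates the generic recipe for adversarial training on unlabeled data, where the perturbation maximizes, and the training step then minimizes, the divergence between clean and perturbed predictions. All step sizes and budgets are illustrative assumptions.

```python
# Generic sketch of adversarial training on *unlabeled* data via prediction
# consistency; not ARTUDA's exact formulation.
import torch
import torch.nn.functional as F

def unlabeled_adv_consistency(model, x, eps=8/255, alpha=2/255, steps=5):
    clean_logits = model(x).detach()
    # Random start within the epsilon ball (input-range clipping omitted).
    delta = torch.zeros_like(x).uniform_(-eps, eps).requires_grad_(True)
    for _ in range(steps):
        adv_logits = model(x + delta)
        # Maximize disagreement with the clean prediction (no labels needed).
        loss = F.kl_div(F.log_softmax(adv_logits, dim=1),
                        F.softmax(clean_logits, dim=1),
                        reduction="batchmean")
        grad, = torch.autograd.grad(loss, delta)
        delta = (delta + alpha * grad.sign()).clamp(-eps, eps)
        delta = delta.detach().requires_grad_(True)
    # The training step then minimizes the same divergence at the perturbation.
    adv_logits = model(x + delta.detach())
    return F.kl_div(F.log_softmax(adv_logits, dim=1),
                    F.softmax(clean_logits, dim=1),
                    reduction="batchmean")
```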
arXiv Detail & Related papers (2022-02-18T17:05:19Z) - Boosting Unsupervised Domain Adaptation with Soft Pseudo-label and Curriculum Learning [19.903568227077763]
Unsupervised domain adaptation (UDA) improves classification performance on an unlabeled target domain by leveraging data from a fully labeled source domain.
We propose a model-agnostic two-stage learning framework, which greatly reduces flawed model predictions using a soft pseudo-label strategy.
In the second stage, we propose a curriculum learning strategy to adaptively control the weighting between losses from the two domains.
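A minimal PyTorch sketch of the two ingredients named above, with an assumed linear ramp standing in for the paper's adaptive weighting and an assumed temperature for the soft targets:

```python
# Hypothetical sketch: soft pseudo-labels plus a curriculum weight between
# source and target losses. Schedule and temperature are illustrative only.
import torch
import torch.nn.functional as F

def soft_pseudo_label_loss(student_logits, teacher_logits, T=2.0):
    """Cross-entropy against the teacher's *soft* distribution rather than a
    hard argmax label, which limits the damage of wrong pseudo-labels."""
    soft_targets = F.softmax(teacher_logits / T, dim=1)
    return -(soft_targets * F.log_softmax(student_logits, dim=1)).sum(dim=1).mean()

def curriculum_weight(step, total_steps):
    """Simple ramp: rely on the labeled source loss early, then shift weight
    toward the target-domain pseudo-label loss as training progresses."""
    return min(1.0, step / (0.5 * total_steps))

# total_loss = source_ce + curriculum_weight(step, total) * soft_pseudo_label_loss(...)
```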
arXiv Detail & Related papers (2021-12-03T14:47:32Z) - Adversarial Robustness for Unsupervised Domain Adaptation [48.51898925429575]
In this work, we leverage intermediate representations learned by multiple robust ImageNet models to improve the robustness of UDA models.
Our method works by aligning the features of the UDA model with the robust features learned by ImageNet pre-trained models alongside the domain adaptation training.
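In its simplest form, such an alignment is just a distance penalty between the trainable features and the frozen robust features. The sketch below shows a single-layer L2 version; which layers are matched, and how multiple robust models are combined, is specified in the paper.

```python
# Minimal sketch of aligning a UDA model's intermediate features with those of
# a frozen adversarially-robust ImageNet model on the same inputs.
import torch
import torch.nn.functional as F

def robust_feature_alignment(uda_features, robust_features):
    """L2 distance between trainable UDA features and the (detached) features
    of a robust pre-trained model; added to the usual adaptation losses."""
    return F.mse_loss(uda_features, robust_features.detach())
```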
arXiv Detail & Related papers (2021-09-02T13:45:01Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - UDALM: Unsupervised Domain Adaptation through Language Modeling [79.73916345178415]
We introduce UDALM, a fine-tuning procedure that uses a mixed classification and masked language modeling (MLM) loss.
Our experiments show that the performance of models trained with the mixed loss scales with the amount of available target data, and that the mixed loss can be effectively used as a stopping criterion.
Our method is evaluated on twelve domain pairs of the Amazon Reviews Sentiment dataset, yielding 91.74% accuracy, a 1.11% absolute improvement over the state-of-the-art.
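A sketch of the mixed objective: supervised classification on labeled source examples plus MLM on unlabeled target text, optimized jointly during fine-tuning. The `classify` and `mlm` heads and the `lambda_mlm` weight are hypothetical placeholders, not UDALM's actual API.

```python
# Hypothetical sketch of UDALM-style joint fine-tuning with a mixed loss.
import torch.nn.functional as F

def udalm_step(model, src_batch, tgt_batch, lambda_mlm=1.0):
    # Supervised classification loss on the labeled source domain.
    cls_logits = model.classify(src_batch["input_ids"])      # (B, num_classes)
    ce = F.cross_entropy(cls_logits, src_batch["labels"])

    # MLM loss on unlabeled target-domain text; labels hold the masked tokens,
    # with ignore_index marking unmasked positions.
    mlm_logits = model.mlm(tgt_batch["masked_input_ids"])    # (B, L, vocab)
    mlm = F.cross_entropy(mlm_logits.transpose(1, 2),
                          tgt_batch["mlm_labels"], ignore_index=-100)
    return ce + lambda_mlm * mlm
```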
arXiv Detail & Related papers (2021-04-14T19:05:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.