CA-UDA: Class-Aware Unsupervised Domain Adaptation with Optimal
Assignment and Pseudo-Label Refinement
- URL: http://arxiv.org/abs/2205.13579v2
- Date: Mon, 30 May 2022 11:40:44 GMT
- Title: CA-UDA: Class-Aware Unsupervised Domain Adaptation with Optimal
Assignment and Pseudo-Label Refinement
- Authors: Can Zhang, Gim Hee Lee
- Abstract summary: Recent work on unsupervised domain adaptation (UDA) focuses on the selection of good pseudo-labels as surrogates for the missing labels in the target data.
Source domain bias that deteriorates the pseudo-labels can still exist, since the network shared between the source and target domains is typically used for pseudo-label selection.
We propose CA-UDA to improve the quality of the pseudo-labels and the UDA results with optimal assignment, a pseudo-label refinement strategy, and class-aware domain alignment.
- Score: 84.10513481953583
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works on unsupervised domain adaptation (UDA) focus on the selection
of good pseudo-labels as surrogates for the missing labels in the target data.
However, source domain bias that deteriorates the pseudo-labels can still exist,
since the network shared between the source and target domains is typically used
for pseudo-label selection. Suboptimal source-to-target domain alignment in the
feature space can also result in unsatisfactory performance. In this paper,
we propose CA-UDA to improve the quality of the pseudo-labels and UDA results
with optimal assignment, a pseudo-label refinement strategy and class-aware
domain alignment. We use an auxiliary network to mitigate the source domain
bias for pseudo-label refinement. Our intuition is that the underlying
semantics in the target domain can be fully exploited to help refine the
pseudo-labels that are inferred from the source features under domain shift.
Furthermore, our optimal assignment aligns features across the source and
target domains, and our class-aware domain alignment can
simultaneously close the domain gap while preserving the classification
decision boundaries. Extensive experiments on several benchmark datasets show
that our method can achieve state-of-the-art performance in the image
classification task.
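The class-level matching idea behind the optimal assignment can be illustrated with a minimal sketch: match each target cluster centroid to a source class centroid so that the total distance is minimized. This is a generic illustration, not the paper's implementation; all names are hypothetical, and exhaustive permutation search stands in for a proper Hungarian-algorithm solver, which is only practical for a small number of classes.

```python
# Hypothetical sketch of class-level optimal assignment: match target
# cluster centroids to source class centroids so that the total squared
# distance is minimized. Exhaustive search over permutations suffices
# for small class counts; a real implementation would use the Hungarian
# algorithm (e.g. scipy.optimize.linear_sum_assignment).
from itertools import permutations

def sq_dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def optimal_assignment(source_centroids, target_centroids):
    """Return the target-cluster -> source-class mapping with minimal total cost."""
    k = len(source_centroids)
    best_cost, best_map = float("inf"), None
    for perm in permutations(range(k)):
        cost = sum(sq_dist(target_centroids[i], source_centroids[perm[i]])
                   for i in range(k))
        if cost < best_cost:
            best_cost, best_map = cost, perm
    return best_map, best_cost

# Toy example: three class centroids in a 2-D feature space; the target
# clusters are shuffled and slightly shifted copies of the source classes.
src = [(0.0, 0.0), (5.0, 0.0), (0.0, 5.0)]
tgt = [(4.8, 0.1), (0.2, 4.9), (0.1, -0.2)]
mapping, cost = optimal_assignment(src, tgt)
print(mapping)  # -> (1, 2, 0): each target cluster matched to its source class
```

Once such a mapping is fixed, class-aware alignment can pull each target cluster toward its assigned source class rather than aligning the domains as undifferentiated wholes, which is what helps preserve the decision boundaries.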
Related papers
- Deep Feature Registration for Unsupervised Domain Adaptation [15.246480756974963]
We propose a deep feature registration (DFR) model to generate registered features that remain domain-invariant.
We also employ a pseudo label refinement process to improve the quality of pseudo labels in the target domain.
arXiv Detail & Related papers (2023-10-24T18:04:53Z)
- DomainInv: Domain Invariant Fine Tuning and Adversarial Label Correction
For QA Domain Adaptation [27.661609140918916]
Existing Question Answering (QA) systems are limited in their ability to answer questions from unseen domains or out-of-domain distributions.
Most importantly, all existing QA domain adaptation methods are based on either generating synthetic data or pseudo-labeling the target domain data.
In this paper, we propose unsupervised domain adaptation for an unlabeled target domain by transferring the target representation closer to the source domain while still using supervision from the source domain.
arXiv Detail & Related papers (2023-05-04T18:13:17Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
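The contrastive alignment mentioned above can be sketched with a minimal InfoNCE-style loss over pre-computed feature vectors: a cross-domain positive pair is pulled together while negatives are pushed apart. This is a generic illustration of the technique, not that paper's exact formulation; all names and values are made up for the example.

```python
# Generic InfoNCE-style contrastive loss on pre-computed feature vectors,
# illustrating how contrastive alignment reduces domain discrepancy: the
# anchor is attracted to a same-class sample from the other domain and
# repelled from negatives. Pure-Python sketch; real UDA pipelines compute
# this over mini-batches of network embeddings.
import math

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """-log( exp(sim(a,p)/t) / (exp(sim(a,p)/t) + sum_n exp(sim(a,n)/t)) )"""
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg = sum(math.exp(cosine(anchor, n) / temperature) for n in negatives)
    return -math.log(pos / (pos + neg))

# A well-aligned cross-domain positive yields a much smaller loss than a
# mismatched one, which is what drives the features of the two domains
# toward class-consistent alignment.
anchor    = [1.0, 0.0]
positive  = [0.9, 0.1]                 # same class, other domain
negatives = [[-1.0, 0.0], [0.0, 1.0]]  # other classes
loss_good = info_nce(anchor, positive, negatives)
loss_bad  = info_nce(anchor, [-1.0, 0.1], negatives)
```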
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining
and Consistency [93.89773386634717]
Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain.
We show that in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can be effective without any adversarial alignment to learn a good target classifier.
Our Pretraining and Consistency (PAC) approach can achieve state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets.
arXiv Detail & Related papers (2021-01-29T18:40:17Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few
Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts performance of target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.