Cycle Label-Consistent Networks for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2205.13957v1
- Date: Fri, 27 May 2022 13:09:08 GMT
- Title: Cycle Label-Consistent Networks for Unsupervised Domain Adaptation
- Authors: Mei Wang, Weihong Deng
- Abstract summary: Domain adaptation aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, the Cycle Label-Consistent Network (CLCN), which exploits the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
- Score: 57.29464116557734
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain adaptation aims to leverage a labeled source domain to learn a
classifier for the unlabeled target domain with a different distribution.
Previous methods mostly match the distribution between two domains by global or
class alignment. However, global alignment methods cannot achieve a
fine-grained class-to-class overlap; class alignment methods supervised by
pseudo-labels cannot guarantee their reliability. In this paper, we propose a
simple yet efficient domain adaptation method, the Cycle Label-Consistent
Network (CLCN), which exploits the cycle consistency of classification labels
by applying dual cross-domain nearest-centroid classification procedures to
generate a reliable self-supervised signal for discrimination in the target
domain. The cycle label-consistent loss reinforces the consistency between
ground-truth labels and pseudo-labels of source samples, leading to
statistically similar latent representations between the source and target domains.
This new loss can easily be added to any existing classification network with
almost no computational overhead. We demonstrate the effectiveness of our
approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA
benchmarks. Results validate that the proposed method alleviates the
negative influence of falsely-labeled samples and learns more discriminative
features, yielding an absolute improvement over the source-only model of 9.4% on
Office-31 and 6.3% on ImageCLEF-DA.
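The dual cross-domain nearest-centroid cycle described in the abstract can be sketched roughly as follows. This is a minimal NumPy illustration under assumed Euclidean distance; the function names are illustrative and this is not the paper's released code.

```python
import numpy as np

def class_centroids(features, labels, num_classes):
    """Mean feature vector per class (assumes every class is present)."""
    return np.stack([features[labels == c].mean(axis=0) for c in range(num_classes)])

def nearest_centroid_labels(features, centroids):
    """Assign each sample the label of its nearest centroid (Euclidean distance)."""
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

def cycle_label_consistency(src_feats, src_labels, tgt_feats, num_classes):
    # Forward pass: source centroids assign pseudo-labels to target samples.
    src_centroids = class_centroids(src_feats, src_labels, num_classes)
    tgt_pseudo = nearest_centroid_labels(tgt_feats, src_centroids)
    # Backward pass: centroids from target pseudo-labels re-label the source samples.
    tgt_centroids = class_centroids(tgt_feats, tgt_pseudo, num_classes)
    src_cycled = nearest_centroid_labels(src_feats, tgt_centroids)
    # Cycle consistency: fraction of source samples whose label survives the round trip;
    # the paper turns this agreement with ground-truth source labels into a training loss.
    return (src_cycled == src_labels).mean()
```

In the paper this agreement is enforced as a loss on deep features during training; the sketch only measures the round-trip label agreement on fixed feature vectors.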
Related papers
- centroIDA: Cross-Domain Class Discrepancy Minimization Based on
Accumulative Class-Centroids for Imbalanced Domain Adaptation [17.97306640457707]
We propose a cross-domain class discrepancy minimization method based on accumulative class centroids for IDA (centroIDA).
Experiments show that our method outperforms other SOTA methods on the IDA problem, especially as the degree of label shift increases.
arXiv Detail & Related papers (2023-08-21T10:35:32Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Unsupervised domain adaptation via double classifiers based on high
confidence pseudo label [8.132250810529873]
Unsupervised domain adaptation (UDA) aims to solve the problem of knowledge transfer from labeled source domain to unlabeled target domain.
Many domain adaptation (DA) methods use centroids to align the local distributions of different domains, that is, to align corresponding classes.
This work rethinks what alignment between domains means and studies how to achieve true alignment between them.
arXiv Detail & Related papers (2021-05-11T00:51:31Z) - Effective Label Propagation for Discriminative Semi-Supervised Domain
Adaptation [76.41664929948607]
Semi-supervised domain adaptation (SSDA) methods have demonstrated great potential in large-scale image classification tasks.
We present a novel and effective method to tackle this problem by using effective inter-domain and intra-domain semantic information propagation.
Our source code and pre-trained models will be released soon.
arXiv Detail & Related papers (2020-12-04T14:28:19Z) - Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z) - Joint Visual and Temporal Consistency for Unsupervised Domain Adaptive
Person Re-Identification [64.37745443119942]
This paper jointly enforces visual and temporal consistency in the combination of a local one-hot classification and a global multi-class classification.
Experimental results on three large-scale ReID datasets demonstrate the superiority of the proposed method in both purely unsupervised and unsupervised domain adaptive ReID tasks.
arXiv Detail & Related papers (2020-07-21T14:31:27Z) - Domain Adaptation with Auxiliary Target Domain-Oriented Classifier [115.39091109079622]
Domain adaptation aims to transfer knowledge from a label-rich but heterogeneous domain to a label-scarce domain.
One of the most popular SSL techniques is pseudo-labeling, which assigns a pseudo label to each unlabeled sample.
We propose a new pseudo-labeling framework called Auxiliary Target Domain-Oriented Classifier (ATDOC).
ATDOC alleviates the labeling bias by introducing an auxiliary classifier for target data only, to improve the quality of pseudo labels.
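The baseline that ATDOC improves upon, plain pseudo-labeling, can be sketched as a confidence-thresholded argmax over classifier outputs. The threshold value and function name below are illustrative assumptions; ATDOC's auxiliary target-only classifier itself is not reproduced here.

```python
import numpy as np

def confident_pseudo_labels(logits, threshold=0.9):
    """Keep only pseudo-labels whose softmax confidence exceeds the threshold."""
    # Numerically stable softmax over the class dimension.
    shifted = logits - logits.max(axis=1, keepdims=True)
    exp = np.exp(shifted)
    probs = exp / exp.sum(axis=1, keepdims=True)
    confidence = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    mask = confidence >= threshold  # low-confidence samples get no pseudo-label
    return labels[mask], mask
```

Biased or overconfident source classifiers make this filtering unreliable on target data, which is the failure mode ATDOC's auxiliary classifier is designed to mitigate.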
arXiv Detail & Related papers (2020-07-08T15:01:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.