Domain Adaptation with Auxiliary Target Domain-Oriented Classifier
- URL: http://arxiv.org/abs/2007.04171v5
- Date: Thu, 16 Dec 2021 01:34:13 GMT
- Title: Domain Adaptation with Auxiliary Target Domain-Oriented Classifier
- Authors: Jian Liang and Dapeng Hu and Jiashi Feng
- Abstract summary: Domain adaptation aims to transfer knowledge from a label-rich but heterogeneous domain to a label-scarce domain.
One of the most popular SSL techniques is pseudo-labeling, which assigns a pseudo label to each unlabeled sample.
We propose a new pseudo-labeling framework called Auxiliary Target Domain-Oriented Classifier (ATDOC).
ATDOC alleviates the bias by introducing an auxiliary classifier for target data only, to improve the quality of pseudo labels.
- Score: 115.39091109079622
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation (DA) aims to transfer knowledge from a label-rich but
heterogeneous domain to a label-scarce domain, which alleviates labeling
effort and has attracted considerable attention. Different from previous methods
focusing on learning domain-invariant feature representations, some recent
methods present generic semi-supervised learning (SSL) techniques and directly
apply them to DA tasks, even achieving competitive performance. One of the most
popular SSL techniques is pseudo-labeling, which assigns a pseudo label to each
unlabeled sample via a classifier trained on labeled data. However, this ignores
the distribution shift in DA problems and is inevitably biased toward source data.
To address this issue, we propose a new pseudo-labeling framework called
Auxiliary Target Domain-Oriented Classifier (ATDOC). ATDOC alleviates the
classifier bias by introducing an auxiliary classifier for target data only, to
improve the quality of pseudo labels. Specifically, we employ the memory
mechanism and develop two types of non-parametric classifiers, i.e., the nearest
centroid classifier and neighborhood aggregation, without introducing any
additional network parameters. Despite the simplicity of its pseudo-classification
objective, ATDOC with neighborhood aggregation significantly outperforms domain
alignment techniques and prior SSL techniques on a large variety of DA
benchmarks and even scarce-labeled SSL tasks.
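To make the memory mechanism concrete, below is a minimal PyTorch-style sketch of the two non-parametric pseudo-labelers described in the abstract. It is an illustrative reconstruction from the abstract alone, not the authors' released implementation: the names (`memory_feats`, `memory_probs`, `neighborhood_aggregation`), the neighborhood size `k`, and the momentum `m` are all assumptions.

```python
# Illustrative sketch of ATDOC-style non-parametric pseudo-labeling.
# A memory bank keeps one feature vector and one soft prediction per
# target sample; pseudo labels are read off the memory alone, so no
# extra network parameters are introduced.
import torch
import torch.nn.functional as F

num_target, feat_dim, num_classes, k = 1000, 256, 31, 5

memory_feats = F.normalize(torch.randn(num_target, feat_dim), dim=1)
memory_probs = torch.full((num_target, num_classes), 1.0 / num_classes)

def neighborhood_aggregation(feats, idx):
    """Pseudo labels from the averaged soft predictions of the k most
    similar memory items, excluding each sample itself."""
    feats = F.normalize(feats, dim=1)
    sim = feats @ memory_feats.t()            # cosine similarities
    sim.scatter_(1, idx.unsqueeze(1), -1.0)   # mask out the sample itself
    _, nn_idx = sim.topk(k, dim=1)            # indices of k nearest neighbors
    return memory_probs[nn_idx].mean(dim=1).argmax(dim=1)

def nearest_centroid(feats):
    """Pseudo labels from probability-weighted class centroids of the memory."""
    feats = F.normalize(feats, dim=1)
    centroids = F.normalize(memory_probs.t() @ memory_feats, dim=1)
    return (feats @ centroids.t()).argmax(dim=1)

# In a training step: momentum-update the memory, then pseudo-label the batch.
batch_idx = torch.randint(0, num_target, (32,))
batch_feats = torch.randn(32, feat_dim)                    # backbone features
batch_probs = torch.rand(32, num_classes).softmax(dim=1)   # network predictions

m = 0.9  # assumed momentum value
memory_feats[batch_idx] = F.normalize(
    m * memory_feats[batch_idx] + (1 - m) * F.normalize(batch_feats, dim=1),
    dim=1)
memory_probs[batch_idx] = m * memory_probs[batch_idx] + (1 - m) * batch_probs

pseudo_labels = neighborhood_aggregation(batch_feats, batch_idx)
```

Training would then add a cross-entropy term between the network's target predictions and these memory-derived pseudo labels; since the memory holds target data only, the labels are not pulled toward the source classifier the way standard source-trained pseudo-labeling is.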
Related papers
- Domain Adaptation Using Pseudo Labels [16.79672078512152]
In the absence of labeled target data, unsupervised domain adaptation approaches seek to align the marginal distributions of the source and target domains.
We deploy a pretrained network to determine accurate labels for the target domain using a multi-stage pseudo-label refinement procedure.
Our results on multiple datasets demonstrate the effectiveness of our simple procedure in comparison with complex state-of-the-art techniques.
arXiv Detail & Related papers (2024-02-09T22:15:11Z)
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue (a minimal sketch of the inter-domain mixup primitive appears after this list).
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- Cycle Label-Consistent Networks for Unsupervised Domain Adaptation [57.29464116557734]
Domain adaptation aims to leverage a labeled source domain to learn a classifier for the unlabeled target domain with a different distribution.
We propose a simple yet efficient domain adaptation method, the Cycle Label-Consistent Network (CLCN), which exploits the cycle consistency of classification labels.
We demonstrate the effectiveness of our approach on the MNIST-USPS-SVHN, Office-31, Office-Home and ImageCLEF-DA benchmarks.
arXiv Detail & Related papers (2022-05-27T13:09:08Z)
- Progressively Select and Reject Pseudo-labelled Samples for Open-Set Domain Adaptation [26.889303784575805]
Domain adaptation solves image classification problems in the target domain by taking advantage of the labelled source data and unlabelled target data.
Our proposed method learns discriminative common subspaces for the source and target domains using a novel Open-Set Locality Preserving Projection (OSLPP) algorithm.
The common subspace learning and the pseudo-labelled sample selection/rejection facilitate each other in an iterative learning framework.
arXiv Detail & Related papers (2021-10-25T04:28:55Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
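For the Inter-Domain Mixup entry above, the sketch below shows the generic inter-domain mixup primitive, i.e., a convex combination of source and target inputs and labels in the standard mixup sense. IDMNE's actual pipeline, including its Neighborhood Expansion step, is more involved; this function is only an assumed, minimal illustration.

```python
import torch

def inter_domain_mixup(x_src, y_src, x_tgt, y_tgt, alpha=1.0):
    """Mix a labeled source batch with a (pseudo-)labeled target batch.
    Labels are expected as one-hot or soft probability vectors."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_src + (1 - lam) * x_tgt
    y_mix = lam * y_src + (1 - lam) * y_tgt
    return x_mix, y_mix

# Toy usage with one-hot labels over 10 classes.
x_s, x_t = torch.randn(8, 3, 32, 32), torch.randn(8, 3, 32, 32)
y_s = torch.eye(10)[torch.randint(0, 10, (8,))]
y_t = torch.eye(10)[torch.randint(0, 10, (8,))]
x_mix, y_mix = inter_domain_mixup(x_s, y_s, x_t, y_t)
```

Mixing across domains gives the model supervision on interpolated samples that lie between the two distributions, which is the usual intuition for applying mixup to cross-domain feature alignment.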
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.