Learning Target Domain Specific Classifier for Partial Domain Adaptation
- URL: http://arxiv.org/abs/2008.10785v1
- Date: Tue, 25 Aug 2020 02:28:24 GMT
- Title: Learning Target Domain Specific Classifier for Partial Domain Adaptation
- Authors: Chuan-Xian Ren, Pengfei Ge, Peiyi Yang, Shuicheng Yan
- Abstract summary: Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
- Score: 85.71584004185031
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation (UDA) aims at reducing the distribution
discrepancy when transferring knowledge from a labeled source domain to an
unlabeled target domain. Previous UDA methods assume that the source and target
domains share an identical label space, which is often unrealistic in practice
since the label information of the target domain is unknown. This paper focuses
on a more realistic UDA scenario, i.e., partial domain adaptation (PDA), where
the target label space is subsumed by the source label space. In the PDA
scenario, the source outliers that are absent in the target domain may be
wrongly matched to the target domain (a phenomenon known as negative transfer),
leading to performance degradation of UDA methods. This paper proposes a novel Target
Domain Specific Classifier Learning-based Domain Adaptation (TSCDA) method.
TSCDA presents a soft-weighted maximum mean discrepancy criterion to partially
align feature distributions and alleviate negative transfer. Also, it learns a
target-specific classifier for the target domain with pseudo-labels and
multiple auxiliary classifiers, to further address classifier shift. A module
named Peers Assisted Learning is used to minimize the prediction difference
between multiple target-specific classifiers, which makes the classifiers more
discriminative for the target domain. Extensive experiments conducted on three
PDA benchmark datasets show that TSCDA outperforms other state-of-the-art
methods by a large margin, e.g., $4\%$ and $5.6\%$ on average on Office-31 and
Office-Home, respectively.
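As a concrete illustration of the partial-alignment idea, here is a minimal PyTorch sketch of a soft-weighted MMD term: source samples are re-weighted by how much mass their class receives in the averaged target predictions, so that source-only outlier classes contribute little to the alignment. The weighting scheme, kernel choice, and function names below are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def rbf_kernel(a, b, sigma=1.0):
    # Gaussian (RBF) kernel matrix between two feature batches.
    sq_dist = torch.cdist(a, b) ** 2
    return torch.exp(-sq_dist / (2.0 * sigma ** 2))

def soft_weighted_mmd(src_feat, src_labels, tgt_feat, tgt_probs, sigma=1.0):
    # Estimate how likely each source class is to appear in the target
    # domain from the averaged target predictions (assumed weighting).
    class_w = tgt_probs.mean(dim=0)             # (num_classes,)
    class_w = class_w / (class_w.max() + 1e-8)  # scale to [0, 1]
    w_src = class_w[src_labels]                 # weight per source sample
    w_src = w_src / (w_src.sum() + 1e-8)        # normalize to sum to 1
    n_tgt = tgt_feat.size(0)
    w_tgt = tgt_feat.new_full((n_tgt,), 1.0 / n_tgt)
    # Weighted MMD^2 = w_s' K_ss w_s + w_t' K_tt w_t - 2 w_s' K_st w_t.
    k_ss = rbf_kernel(src_feat, src_feat, sigma)
    k_tt = rbf_kernel(tgt_feat, tgt_feat, sigma)
    k_st = rbf_kernel(src_feat, tgt_feat, sigma)
    return w_src @ k_ss @ w_src + w_tgt @ k_tt @ w_tgt - 2.0 * w_src @ k_st @ w_tgt
```

Down-weighting source samples whose classes the target model rarely predicts is what keeps source outlier classes from being matched to the target, i.e., it mitigates negative transfer.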
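And a minimal sketch, under the same caveats, of training the target-specific classifier with confident pseudo-labels while a Peers-Assisted-Learning-style term penalizes disagreement between multiple classifiers on the same target batch; the L1 discrepancy measure, confidence threshold, and helper names are hypothetical choices for illustration.

```python
import torch
import torch.nn.functional as F

def peers_assisted_loss(logits_list):
    # Pairwise disagreement between the softmax outputs of several "peer"
    # classifiers evaluated on the same target batch (assumed L1 measure).
    probs = [F.softmax(logits, dim=1) for logits in logits_list]
    loss, pairs = 0.0, 0
    for i in range(len(probs)):
        for j in range(i + 1, len(probs)):
            loss = loss + (probs[i] - probs[j]).abs().mean()
            pairs += 1
    return loss / max(pairs, 1)

def pseudo_label_loss(tgt_logits, threshold=0.9):
    # Cross-entropy on confidently pseudo-labeled target samples only;
    # the 0.9 confidence threshold is an illustrative choice.
    probs = F.softmax(tgt_logits, dim=1)
    conf, pseudo = probs.max(dim=1)
    mask = conf >= threshold
    if not mask.any():
        return tgt_logits.new_zeros(())
    return F.cross_entropy(tgt_logits[mask], pseudo[mask])
```

In a full training loop these terms would be combined with the supervised source classification loss and the soft-weighted MMD term above, so that the target-specific classifiers agree with each other while fitting confident pseudo-labels.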
Related papers
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation [1.2691047660244335]
Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain-invariant predictive models.
We propose Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap.
CLDA achieves state-of-the-art results on all the above datasets.
arXiv Detail & Related papers (2021-06-30T20:23:19Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Unsupervised domain adaptation via double classifiers based on high
confidence pseudo label [8.132250810529873]
Unsupervised domain adaptation (UDA) aims to solve the problem of transferring knowledge from a labeled source domain to an unlabeled target domain.
Many domain adaptation (DA) methods use centroids to align the local distributions of different domains, that is, to align corresponding classes.
This work rethinks what alignment between different domains means, and studies how to achieve true alignment between them.
arXiv Detail & Related papers (2021-05-11T00:51:31Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective on MSDA, wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z) - Discriminative Cross-Domain Feature Learning for Partial Domain
Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice in domain adaptation manages to extract effective features by incorporating pseudo-labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z) - Class Conditional Alignment for Partial Domain Adaptation [10.506584969668792]
Adversarial adaptation models have demonstrated significant progress towards transferring knowledge from a labeled source dataset to an unlabeled target dataset.
PDA investigates the scenarios in which the source domain is large and diverse, and the target label space is a subset of the source label space.
We propose a multi-class adversarial architecture for PDA.
arXiv Detail & Related papers (2020-03-14T23:51:57Z)