Cross-domain Self-supervised Learning for Domain Adaptation with Few
Source Labels
- URL: http://arxiv.org/abs/2003.08264v1
- Date: Wed, 18 Mar 2020 15:11:07 GMT
- Title: Cross-domain Self-supervised Learning for Domain Adaptation with Few
Source Labels
- Authors: Donghyun Kim, Kuniaki Saito, Tae-Hyun Oh, Bryan A. Plummer, Stan
Sclaroff, and Kate Saenko
- Abstract summary: We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts target accuracy in the new target domain when only a few source labels are available.
- Score: 78.95901454696158
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing unsupervised domain adaptation methods aim to transfer knowledge
from a label-rich source domain to an unlabeled target domain. However,
obtaining labels for some source domains may be very expensive, making complete
labeling as used in prior work impractical. In this work, we investigate a new
domain adaptation scenario with sparsely labeled source data, where only a few
examples in the source domain have been labeled, while the target domain is
unlabeled. We show that when labeled source examples are limited, existing
methods often fail to learn discriminative features applicable for both source
and target domains. We propose a novel Cross-Domain Self-supervised (CDS)
learning approach for domain adaptation, which learns features that are not
only domain-invariant but also class-discriminative. Our self-supervised
learning method captures apparent visual similarity with in-domain
self-supervision in a domain adaptive manner and performs cross-domain feature
matching with across-domain self-supervision. In extensive experiments with
three standard benchmark datasets, our method significantly boosts target
accuracy in the new target domain with few source labels, and is even
helpful in classical domain adaptation scenarios.
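The abstract describes two coupled objectives: in-domain self-supervision (instance discrimination within each domain) and cross-domain feature matching. Below is a minimal PyTorch sketch of one plausible reading of those two losses; the function names, memory-bank layout, and temperature are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn.functional as F

def in_domain_loss(feats, bank, idx, tau=0.05):
    """Instance discrimination within one domain: each L2-normalized
    feature should match its own slot in that domain's memory bank."""
    logits = feats @ bank.t() / tau        # (B, N) similarity logits
    return F.cross_entropy(logits, idx)    # positive = the sample's own slot

def cross_domain_loss(feats, other_bank, tau=0.05):
    """Cross-domain matching: minimize the entropy of each feature's
    similarity distribution over the other domain's bank, pulling it
    toward visually similar cross-domain examples."""
    p = F.softmax(feats @ other_bank.t() / tau, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

# Hypothetical usage with an encoder and per-domain memory banks:
# f_s = F.normalize(encoder(x_src), dim=1)
# f_t = F.normalize(encoder(x_tgt), dim=1)
# loss = (in_domain_loss(f_s, src_bank, src_idx)
#         + in_domain_loss(f_t, tgt_bank, tgt_idx)
#         + cross_domain_loss(f_s, tgt_bank)
#         + cross_domain_loss(f_t, src_bank))
```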
Related papers
- DomainInv: Domain Invariant Fine Tuning and Adversarial Label Correction
For QA Domain Adaptation [27.661609140918916]
Existing Question Answering (QA) systems are limited in their ability to answer questions from unseen domains or out-of-domain distributions.
Most importantly, all existing QA domain adaptation methods are based either on generating synthetic data or on pseudo-labeling the target-domain data.
In this paper, we propose unsupervised domain adaptation for an unlabeled target domain by moving the target representation closer to the source domain while still using supervision from the source domain.
arXiv Detail & Related papers (2023-05-04T18:13:17Z)
- Discovering Domain Disentanglement for Generalized Multi-source Domain Adaptation [48.02978226737235]
A typical multi-source domain adaptation (MSDA) approach aims to transfer knowledge learned from a set of labeled source domains to an unlabeled target domain.
We propose a variational domain disentanglement (VDD) framework, which decomposes the domain representations and semantic features for each instance by encouraging dimension-wise independence (one reading is sketched below).
arXiv Detail & Related papers (2022-07-11T04:33:08Z)
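A hedged sketch of how "dimension-wise independence" can be encouraged in a variational framework: split the latent into domain and semantic parts and regularize each diagonal-Gaussian posterior toward a factorized standard-normal prior. The split sizes, encoder, and loss weighting are assumptions for illustration, not VDD's exact formulation.

```python
import torch

def kl_to_factorized_prior(mu, logvar):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian posterior; a factorized
    prior pushes the latent dimensions toward independence."""
    return 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(dim=1).mean()

# Hypothetical usage with an encoder producing mean and log-variance:
# mu, logvar = encoder(x)                        # each (B, d_dom + d_sem)
# mu_dom, mu_sem = mu.split([d_dom, d_sem], 1)   # domain vs. semantic factors
# lv_dom, lv_sem = logvar.split([d_dom, d_sem], 1)
# loss = kl_to_factorized_prior(mu_dom, lv_dom) + kl_to_factorized_prior(mu_sem, lv_sem)
```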
- CA-UDA: Class-Aware Unsupervised Domain Adaptation with Optimal Assignment and Pseudo-Label Refinement [84.10513481953583]
Unsupervised domain adaptation (UDA) focuses on selecting good pseudo-labels as surrogates for the missing labels in the target data.
Source-domain bias that degrades the pseudo-labels can still exist, since a network shared between the source and target domains is typically used for pseudo-label selection.
We propose CA-UDA to improve the quality of the pseudo-labels and the UDA results with optimal assignment (sketched below), a pseudo-label refinement strategy, and class-aware domain alignment.
arXiv Detail & Related papers (2022-05-26T18:45:04Z)
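One plausible reading of CA-UDA's "optimal assignment" step is a Hungarian matching between target cluster centroids and source class centroids. The sketch below assumes precomputed centroids and is illustrative, not the paper's implementation.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def assign_clusters_to_classes(tgt_centroids, src_centroids):
    """One-to-one matching of target clusters to source classes that
    minimizes total squared centroid distance (Hungarian algorithm)."""
    # cost[i, j] = squared distance between target cluster i and class j
    cost = np.sum((tgt_centroids[:, None, :] - src_centroids[None, :, :]) ** 2, axis=-1)
    rows, cols = linear_sum_assignment(cost)
    return dict(zip(rows.tolist(), cols.tolist()))  # cluster id -> class id

# Pseudo-labels then follow from each target sample's cluster assignment:
# pseudo_label[i] = mapping[cluster_id[i]]
```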
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build on contrastive self-supervised learning to align features and thereby reduce the domain discrepancy between training and testing sets (see the sketch below).
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
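A minimal sketch of cross-domain, class-aware contrastive alignment in the spirit of this entry: source labels and target pseudo-labels define cross-domain positives in an InfoNCE-style loss. All names and the temperature are assumptions; this is not the paper's reference code.

```python
import torch
import torch.nn.functional as F

def cross_domain_contrastive(f_src, y_src, f_tgt, y_tgt, tau=0.07):
    """InfoNCE-style loss: pull together L2-normalized source/target
    features that share a (pseudo-)label, push apart the rest."""
    sim = f_src @ f_tgt.t() / tau                     # (Bs, Bt) logits
    pos = (y_src[:, None] == y_tgt[None, :]).float()  # cross-domain positives
    log_p = F.log_softmax(sim, dim=1)
    has_pos = pos.sum(1) > 0                          # skip anchors with no positive
    loss = -(pos * log_p).sum(1)[has_pos] / pos.sum(1)[has_pos]
    return loss.mean()
```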
- Prototypical Cross-domain Self-supervised Learning for Few-shot Unsupervised Domain Adaptation [91.58443042554903]
We propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA).
PCS not only performs cross-domain low-level feature alignment, but also encodes and aligns semantic structures in the shared embedding space across domains (see the prototype sketch below).
Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively.
arXiv Detail & Related papers (2021-03-31T02:07:42Z)
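The prototype idea in PCS can be illustrated with per-class prototypes (re-normalized class means of normalized features) and cosine similarity for scoring target features. A minimal sketch under assumed names follows; it is not the PCS implementation.

```python
import torch
import torch.nn.functional as F

def class_prototypes(feats, labels, num_classes):
    """Mean L2-normalized feature per class, itself re-normalized."""
    protos = torch.zeros(num_classes, feats.size(1), device=feats.device)
    protos.index_add_(0, labels, feats)   # sum features per class
    counts = torch.bincount(labels, minlength=num_classes).clamp(min=1)
    return F.normalize(protos / counts[:, None], dim=1)

def prototype_logits(f_tgt, protos, tau=0.1):
    """Cosine similarity to each prototype, usable as soft pseudo-labels."""
    return f_tgt @ protos.t() / tau
```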
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent work on domain adaptation extracts effective features by incorporating pseudo-labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Contradistinguisher: A Vapnik's Imperative to Unsupervised Domain Adaptation [7.538482310185133]
We propose a model, referred to as Contradistinguisher, that learns contrastive features and whose objective is to jointly learn to contradistinguish the unlabeled target domain in an unsupervised way.
We achieve the state-of-the-art on Office-31 and VisDA-2017 datasets in both single-source and multi-source settings.
arXiv Detail & Related papers (2020-05-25T19:54:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.