Prototypical Cross-domain Self-supervised Learning for Few-shot
Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2103.16765v1
- Date: Wed, 31 Mar 2021 02:07:42 GMT
- Authors: Xiangyu Yue, Zangwei Zheng, Shanghang Zhang, Yang Gao, Trevor Darrell,
Kurt Keutzer, Alberto Sangiovanni-Vincentelli
- Abstract summary: We propose an end-to-end Prototypical Cross-domain Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain Adaptation (FUDA).
PCS not only performs cross-domain low-level feature alignment, but it also encodes and aligns semantic structures in the shared embedding space across domains.
Compared with state-of-the-art methods, PCS improves the mean classification accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on Office, Office-Home, VisDA-2017, and DomainNet, respectively.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA) transfers predictive models from a
fully-labeled source domain to an unlabeled target domain. In some
applications, however, it is expensive even to collect labels in the source
domain, making most previous works impractical. To cope with this problem,
recent work performed instance-wise cross-domain self-supervised learning,
followed by an additional fine-tuning stage. However, the instance-wise
self-supervised learning only learns and aligns low-level discriminative
features. In this paper, we propose an end-to-end Prototypical Cross-domain
Self-Supervised Learning (PCS) framework for Few-shot Unsupervised Domain
Adaptation (FUDA). PCS not only performs cross-domain low-level feature
alignment, but it also encodes and aligns semantic structures in the shared
embedding space across domains. Our framework captures category-wise semantic
structures of the data by in-domain prototypical contrastive learning; and
performs feature alignment through cross-domain prototypical self-supervision.
Compared with state-of-the-art methods, PCS improves the mean classification
accuracy over different domain pairs on FUDA by 10.5%, 3.5%, 9.0%, and 13.2% on
Office, Office-Home, VisDA-2017, and DomainNet, respectively. Our project page
is at http://xyue.io/pcs-fuda/index.html
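The abstract names two objectives: an in-domain prototypical contrastive loss that captures category-wise semantic structure, and a cross-domain prototypical self-supervision term that aligns the two embedding spaces. The PyTorch sketch below illustrates the general shape of such losses; it is a minimal illustration assuming L2-normalized features, k-means prototypes, and a fixed temperature, with all names hypothetical rather than taken from the authors' released code.

```python
import torch
import torch.nn.functional as F

def in_domain_proto_loss(feats, prototypes, assignments, tau=0.1):
    """Prototypical contrastive loss within one domain (illustrative sketch).

    feats:       (N, D) L2-normalized embeddings
    prototypes:  (K, D) L2-normalized cluster centroids, e.g. from k-means
    assignments: (N,)   cluster index of each sample
    """
    logits = feats @ prototypes.t() / tau  # temperature-scaled cosine similarity
    return F.cross_entropy(logits, assignments)

def cross_domain_proto_alignment(src_protos, tgt_protos, tau=0.1):
    """Cross-domain alignment sketch: softmax over source-to-target prototype
    similarities, then entropy minimization so each source prototype commits
    to a single matching target prototype (symmetrize in practice)."""
    p = (src_protos @ tgt_protos.t() / tau).softmax(dim=1)  # (K_s, K_t)
    return -(p * (p + 1e-8).log()).sum(dim=1).mean()

# Toy usage with random tensors standing in for encoder outputs.
feats = F.normalize(torch.randn(32, 128), dim=1)
protos = F.normalize(torch.randn(10, 128), dim=1)
assign = torch.randint(0, 10, (32,))
loss = in_domain_proto_loss(feats, protos, assign)
```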
Related papers
- Prototypical Contrast Adaptation for Domain Adaptive Semantic Segmentation (2022-07-14T04:54:26Z)
  Prototypical Contrast Adaptation (ProCA) is a contrastive learning method for unsupervised domain adaptive semantic segmentation. ProCA incorporates inter-class information into class-wise prototypes and adopts class-centered distribution alignment for adaptation; a minimal sketch of this centroid idea appears after this list.
- Domain Adaptation via Prompt Learning (2022-02-14T13:25:46Z)
  Unsupervised domain adaptation (UDA) aims to adapt models learned from a well-annotated source domain to a target domain. We introduce a novel prompt learning paradigm for UDA, named Domain Adaptation via Prompt Learning (DAPL).
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation (2021-06-10T06:32:30Z)
  Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain. We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation (2021-05-05T11:55:53Z)
  UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain. We propose a contrastive learning approach that adapts category-wise centroids across domains, and we extend our method with self-training, where a memory-efficient temporal ensemble generates consistent and reliable pseudo-labels; both the centroid alignment and the temporal ensemble are sketched after this list.
- Classes Matter: A Fine-grained Adversarial Approach to Cross-domain Semantic Segmentation (2020-07-17T20:50:59Z)
  We propose a fine-grained adversarial learning strategy for class-level feature alignment, adopting a fine-grained domain discriminator that not only distinguishes the domains but also differentiates them at the class level. An analysis with Class Center Distance (CCD) validates that our fine-grained adversarial strategy achieves better class-level alignment.
- Contradistinguisher: A Vapnik's Imperative to Unsupervised Domain Adaptation (2020-05-25T19:54:38Z)
  We propose a model, referred to as Contradistinguisher, that learns contrastive features and whose objective is to jointly learn to contradistinguish the unlabeled target domain in an unsupervised way. We achieve state-of-the-art results on the Office-31 and VisDA-2017 datasets in both single-source and multi-source settings.
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels (2020-03-18T15:11:07Z)
  We propose a novel cross-domain self-supervised learning approach for domain adaptation. Our method significantly boosts target accuracy in the new target domain with few source labels.
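ProCA and the contrastive learning and self-training entry above both align class-wise centroids (prototypes) across domains. The sketch below shows the shared idea under simple assumptions: batch features with source labels and target pseudo-labels, mean-feature centroids, and a squared-distance alignment term. Function names are illustrative and not taken from either paper.

```python
import torch

def class_centroids(features, labels, num_classes):
    """Mean feature vector per class (sketch). Classes absent from the
    batch keep a zero centroid and a zero count."""
    sums = torch.zeros(num_classes, features.size(1), device=features.device)
    counts = torch.zeros(num_classes, device=features.device)
    sums.index_add_(0, labels, features)
    counts.index_add_(0, labels, torch.ones(labels.size(0), device=features.device))
    return sums / counts.clamp(min=1).unsqueeze(1), counts

def centroid_alignment_loss(src_feats, src_labels, tgt_feats, tgt_pseudo, num_classes):
    """Pull same-class source and target centroids together; classes missing
    from either batch are skipped to avoid aligning against empty centroids."""
    cs, ns = class_centroids(src_feats, src_labels, num_classes)
    ct, nt = class_centroids(tgt_feats, tgt_pseudo, num_classes)
    present = (ns > 0) & (nt > 0)
    if not present.any():
        return src_feats.new_zeros(())
    return ((cs[present] - ct[present]) ** 2).sum(dim=1).mean()
```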
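The self-training half of the same entry mentions a memory-efficient temporal ensemble for pseudo-labels. One common realization is an exponential moving average of per-sample class probabilities, thresholded by confidence; the momentum and threshold below are assumed values for illustration, not taken from the paper.

```python
import torch

class TemporalEnsemble:
    """EMA of class probabilities per target sample (illustrative sketch):
    one (num_samples, num_classes) buffer instead of storing all past epochs."""
    def __init__(self, num_samples, num_classes, momentum=0.9):
        self.probs = torch.zeros(num_samples, num_classes)
        self.momentum = momentum

    def update(self, indices, batch_probs):
        # Blend new softmax outputs into the running average for these samples.
        self.probs[indices] = (self.momentum * self.probs[indices]
                               + (1.0 - self.momentum) * batch_probs)

    def pseudo_labels(self, indices, threshold=0.8):
        # Keep only confident predictions; -1 marks samples to ignore in the loss.
        conf, labels = self.probs[indices].max(dim=1)
        labels[conf < threshold] = -1
        return labels
```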