Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised
Domain Adaptation
- URL: http://arxiv.org/abs/2204.00570v1
- Date: Fri, 1 Apr 2022 16:56:26 GMT
- Title: Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised
Domain Adaptation
- Authors: Kendrick Shen, Robbie Jones, Ananya Kumar, Sang Michael Xie, Jeff Z.
HaoChen, Tengyu Ma, and Percy Liang
- Abstract summary: We consider unsupervised domain adaptation (UDA), where labeled data from a source domain and unlabeled data from a target domain are used to learn a classifier for the target domain.
We show that contrastive pre-training, which learns features on unlabeled source and target data and then fine-tunes on labeled source data, is competitive with strong UDA methods.
- Score: 88.5448806952394
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider unsupervised domain adaptation (UDA), where labeled data from a
source domain (e.g., photographs) and unlabeled data from a target domain
(e.g., sketches) are used to learn a classifier for the target domain.
Conventional UDA methods (e.g., domain adversarial training) learn
domain-invariant features to improve generalization to the target domain. In
this paper, we show that contrastive pre-training, which learns features on
unlabeled source and target data and then fine-tunes on labeled source data, is
competitive with strong UDA methods. However, we find that contrastive
pre-training does not learn domain-invariant features, diverging from
conventional UDA intuitions. We show theoretically that contrastive
pre-training can learn features that vary substantially across domains but still
generalize to the target domain, by disentangling domain and class information.
Our results suggest that domain invariance is not necessary for UDA. We
empirically validate our theory on benchmark vision datasets.
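The two-phase recipe described in the abstract, contrastive pre-training on pooled unlabeled source and target data followed by supervised fine-tuning on labeled source data, can be sketched as follows. This is a minimal illustration rather than the authors' exact setup: the toy encoder, the SimCLR-style NT-Xent loss, the noise "augmentations", and all hyperparameters are assumptions chosen only to show the structure of the pipeline.

```python
import torch
import torch.nn.functional as F
from torch import nn

class Encoder(nn.Module):
    """Toy feature extractor standing in for a ResNet backbone (assumption)."""
    def __init__(self, dim=128):
        super().__init__()
        self.net = nn.Sequential(nn.Flatten(),
                                 nn.Linear(3 * 32 * 32, 512), nn.ReLU(),
                                 nn.Linear(512, dim))

    def forward(self, x):
        return self.net(x)

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style contrastive loss over two augmented views of one batch."""
    z = F.normalize(torch.cat([z1, z2]), dim=1)                   # (2N, d)
    sim = z @ z.t() / tau                                         # pairwise similarities
    n = z1.size(0)
    sim = sim.masked_fill(torch.eye(2 * n, dtype=torch.bool), float('-inf'))  # drop self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(n)])            # index of each positive
    return F.cross_entropy(sim, targets)

encoder = Encoder()
opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

# Phase 1: contrastive pre-training on pooled unlabeled source + target images.
for _ in range(2):                                 # a couple of toy steps
    x = torch.randn(16, 3, 32, 32)                 # unlabeled batch drawn from both domains
    v1 = x + 0.1 * torch.randn_like(x)             # stand-in "augmentations"
    v2 = x + 0.1 * torch.randn_like(x)
    loss = nt_xent(encoder(v1), encoder(v2))
    opt.zero_grad(); loss.backward(); opt.step()

# Phase 2: fine-tune a linear head (and the encoder) on labeled source data only.
head = nn.Linear(128, 10)
ft_opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
for _ in range(2):
    xs = torch.randn(16, 3, 32, 32)                # labeled source batch
    ys = torch.randint(0, 10, (16,))
    loss = F.cross_entropy(head(encoder(xs)), ys)
    ft_opt.zero_grad(); loss.backward(); ft_opt.step()
# The fine-tuned encoder + head are then evaluated directly on the target domain.
```

Nothing in the first phase forces source and target features to coincide; the abstract's point is that the pre-trained features may differ substantially across domains and still transfer, because class and domain information end up disentangled.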
Related papers
- Make the U in UDA Matter: Invariant Consistency Learning for
Unsupervised Domain Adaptation [86.61336696914447]
We dub our approach "Invariant CONsistency learning" (ICON)
We propose to make the U in Unsupervised DA matter by giving equal status to the two domains.
ICON achieves the state-of-the-art performance on the classic UDA benchmarks: Office-Home and VisDA-2017, and outperforms all the conventional methods on the challenging WILDS 2.0 benchmark.
arXiv Detail & Related papers (2023-09-22T09:43:32Z)
- Domain-Agnostic Prior for Transfer Semantic Segmentation [197.9378107222422]
Unsupervised domain adaptation (UDA) is an important topic in the computer vision community.
We present a mechanism that regularizes cross-domain representation learning with a domain-agnostic prior (DAP)
Our research reveals that UDA benefits greatly from better proxies, possibly from other data modalities.
arXiv Detail & Related papers (2022-04-06T09:13:25Z)
- Domain Adaptation via Prompt Learning [39.97105851723885]
Unsupervised domain adaptation (UDA) aims to adapt models learned from a well-annotated source domain to a target domain.
We introduce a novel prompt learning paradigm for UDA, named Domain Adaptation via Prompt Learning (DAPL)
arXiv Detail & Related papers (2022-02-14T13:25:46Z)
- A Survey of Unsupervised Domain Adaptation for Visual Recognition [2.8935588665357077]
Domain Adaptation (DA) aims to mitigate the domain shift problem when transferring knowledge from one domain to another.
Unsupervised DA (UDA) deals with a labeled source domain and an unlabeled target domain.
arXiv Detail & Related papers (2021-12-13T15:55:23Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
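A hedged sketch of what such cross-domain alignment could look like (not necessarily this paper's exact loss): each target feature is pulled toward source features assigned the same class under an InfoNCE-style objective, where the target class assignments would in practice come from pseudo-labels. The pairing rule, the temperature, and the pseudo-label assumption are illustrative choices.

```python
import torch
import torch.nn.functional as F

def cross_domain_contrastive(f_src, y_src, f_tgt, y_tgt_pseudo, tau=0.1):
    """InfoNCE-style loss: same-class source samples act as positives for each target sample."""
    f_src = F.normalize(f_src, dim=1)
    f_tgt = F.normalize(f_tgt, dim=1)
    logits = f_tgt @ f_src.t() / tau                             # (N_tgt, N_src) similarities
    pos = (y_tgt_pseudo[:, None] == y_src[None, :]).float()      # same-class mask
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # average log-likelihood of same-class source samples, per target sample
    loss = -(pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()

# toy usage with random features and (pseudo-)labels
f_s, y_s = torch.randn(8, 64), torch.randint(0, 4, (8,))
f_t, y_t = torch.randn(8, 64), torch.randint(0, 4, (8,))
print(cross_domain_contrastive(f_s, y_s, f_t, y_t))
```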
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
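The two ingredients named above, category-wise centroid alignment and a temporal ensemble for pseudo-labels, can be sketched roughly as follows. The MSE alignment, the EMA momentum, and the confidence threshold are illustrative assumptions, not the paper's exact choices.

```python
import torch
import torch.nn.functional as F

def class_centroids(features, labels, num_classes):
    """Mean feature per class; classes absent from the batch get a zero vector."""
    cents = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            cents[c] = features[mask].mean(0)
    return cents

def centroid_alignment_loss(f_src, y_src, f_tgt, y_tgt_pseudo, num_classes):
    """Pull the per-class centroids of the two domains together (MSE is an assumption)."""
    return F.mse_loss(class_centroids(f_src, y_src, num_classes),
                      class_centroids(f_tgt, y_tgt_pseudo, num_classes))

class TemporalEnsemble:
    """EMA of per-sample class probabilities; pseudo-label = confident argmax, -1 = ignore."""
    def __init__(self, num_samples, num_classes, momentum=0.9, threshold=0.8):
        self.probs = torch.zeros(num_samples, num_classes)
        self.momentum, self.threshold = momentum, threshold

    def update(self, indices, new_probs):
        self.probs[indices] = (self.momentum * self.probs[indices]
                               + (1 - self.momentum) * new_probs)

    def pseudo_labels(self, indices):
        conf, labels = self.probs[indices].max(1)
        labels[conf < self.threshold] = -1
        return labels

# toy usage
ens = TemporalEnsemble(num_samples=100, num_classes=5)
idx = torch.arange(8)
ens.update(idx, F.softmax(torch.randn(8, 5), dim=1))
print(ens.pseudo_labels(idx))
f_s, y_s = torch.randn(8, 64), torch.randint(0, 5, (8,))
f_t, y_t = torch.randn(8, 64), torch.randint(0, 5, (8,))
print(centroid_alignment_loss(f_s, y_s, f_t, y_t, num_classes=5))
```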
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency under transformation of target data.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either the source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)
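For the Open-Set Hypothesis Transfer entry just above, the confident pseudo-labeling step might look roughly like the following: a target sample keeps its pseudo-label only when the predictions from two transformed views agree and are both confident, and is otherwise marked as unknown. The agreement rule and the threshold are assumptions made for illustration.

```python
import torch
import torch.nn.functional as F

def consistent_pseudo_labels(logits_view1, logits_view2, threshold=0.9):
    """Pseudo-label where two transformed views agree confidently; -1 marks unknown/reject."""
    p1, p2 = F.softmax(logits_view1, dim=1), F.softmax(logits_view2, dim=1)
    conf1, y1 = p1.max(1)
    conf2, y2 = p2.max(1)
    keep = (y1 == y2) & (conf1 > threshold) & (conf2 > threshold)
    return torch.where(keep, y1, torch.full_like(y1, -1))

# toy usage: the second view is a lightly perturbed copy of the first
l1 = torch.randn(6, 4) * 5
l2 = l1 + 0.5 * torch.randn_like(l1)
print(consistent_pseudo_labels(l1, l2))
```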