Transferrable Contrastive Learning for Visual Domain Adaptation
- URL: http://arxiv.org/abs/2112.07516v1
- Date: Tue, 14 Dec 2021 16:23:01 GMT
- Title: Transferrable Contrastive Learning for Visual Domain Adaptation
- Authors: Yang Chen, Yingwei Pan, Yu Wang, Ting Yao, Xinmei Tian and Tao Mei
- Abstract summary: Transferrable Contrastive Learning (TCL) is a self-supervised learning paradigm tailored for domain adaptation.
TCL penalizes the intra-class discrepancy between the source and target domains through a clean and novel contrastive loss.
As a free lunch of incorporating contrastive learning, TCL relies on a moving-averaged key encoder that naturally yields a temporally ensembled version of pseudo labels for the target data.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning (SSL) has recently become a favorite among feature learning methodologies, so it is appealing for domain adaptation approaches to incorporate SSL. The intuition is to enforce instance-level feature consistency such that the predictor becomes, to some degree, invariant across domains. However, most existing SSL methods in the regime of domain adaptation are usually treated as standalone auxiliary components, leaving the signatures of domain adaptation unattended. In fact, the optimal region where the domain gap vanishes and the region that the instance-level constraint of SSL pursues may not coincide at all. To address this, we present a paradigm of self-supervised learning tailored for domain adaptation, i.e., Transferrable Contrastive Learning (TCL), which links SSL and the desired cross-domain transferability congruently. We find contrastive learning to be intrinsically a suitable candidate for domain adaptation, as its instance-invariance assumption can be conveniently promoted to the cross-domain class-level invariance favored by domain adaptation tasks. Based on particular memory bank constructions and pseudo-label strategies, TCL then penalizes the intra-class discrepancy between the source and target domains through a clean and novel contrastive loss. As a free lunch of incorporating contrastive learning, TCL relies on a moving-averaged key encoder that naturally yields a temporally ensembled version of pseudo labels for the target data, which avoids pseudo-label error propagation at no extra cost. TCL therefore efficiently reduces cross-domain gaps. Through extensive experiments on benchmarks (Office-Home, VisDA-2017, Digits-five, PACS and DomainNet) for both single-source and multi-source domain adaptation tasks, TCL has demonstrated state-of-the-art performance.
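The two mechanisms the abstract highlights, a momentum-updated key encoder whose slowly drifting target predictions act as temporally ensembled pseudo labels, and a class-level contrastive loss computed across domains, can be sketched in a few lines of PyTorch. The snippet below is a minimal illustration under our own assumptions, not the authors' implementation: the encoder interface, the use of batch-wise positives in place of the paper's memory banks, and all hyperparameter values (momentum, temperature tau) are hypothetical.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def update_key_encoder(query_enc, key_enc, momentum=0.999):
    # EMA update: the key encoder trails the query encoder, so its
    # target-domain predictions form a temporal ensemble of past models,
    # which is what stabilizes the pseudo labels.
    for q, k in zip(query_enc.parameters(), key_enc.parameters()):
        k.data.mul_(momentum).add_(q.data, alpha=1.0 - momentum)

def cross_domain_class_contrastive_loss(feat_src, labels_src,
                                        feat_tgt, pseudo_tgt, tau=0.07):
    # For each target feature, source features sharing its (pseudo) class
    # are positives and all other source features are negatives: instance
    # invariance promoted to cross-domain class-level invariance.
    feat_src = F.normalize(feat_src, dim=1)   # (B_s, D)
    feat_tgt = F.normalize(feat_tgt, dim=1)   # (B_t, D)
    logits = feat_tgt @ feat_src.t() / tau    # (B_t, B_s) cosine similarities
    pos_mask = (pseudo_tgt.unsqueeze(1) == labels_src.unsqueeze(0)).float()
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_per_anchor = pos_mask.sum(dim=1)
    valid = pos_per_anchor > 0                # skip anchors with no positive
    loss = -(pos_mask * log_prob).sum(dim=1)[valid] / pos_per_anchor[valid]
    return loss.mean()
```

In the paper the positives are drawn from class-aware memory banks rather than the current source batch, and pseudo_tgt would be read off the key encoder's predictions, so the temporal ensembling and the contrastive loss reinforce each other.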
Related papers
- Towards domain-invariant Self-Supervised Learning with Batch Styles Standardization
Batch Styles Standardization (BSS) is a simple yet powerful method to standardize the style of images within a batch.
We show that BSS significantly improves downstream task performance on unseen domains, often outperforming or rivaling UDG methods.
arXiv Detail & Related papers (2023-03-10T17:09:04Z)
- Prototypical Contrast Adaptation for Domain Adaptive Semantic Segmentation
Prototypical Contrast Adaptation (ProCA) is a contrastive learning method for unsupervised domain adaptive semantic segmentation.
ProCA incorporates inter-class information into class-wise prototypes, and adopts class-centered distribution alignment for adaptation.
arXiv Detail & Related papers (2022-07-14T04:54:26Z)
- From Big to Small: Adaptive Learning to Partial-Set Domains
Domain adaptation targets knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that large-scale deep pre-trained models carry rich knowledge for tackling diverse downstream tasks of small scale.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical-class-space assumption so that the source class space subsumes the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z)
- Semi-supervised Domain Adaptive Structure Learning
Semi-supervised domain adaptation (SSDA) is a challenging problem requiring methods to overcome both 1) overfitting towards poorly annotated data and 2) distribution shift across domains.
We introduce an adaptive structure learning method to regularize the cooperation of SSL and DA.
arXiv Detail & Related papers (2021-12-12T06:11:16Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Domain Adaptation for Semantic Segmentation via Patch-Wise Contrastive Learning
We leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains.
Our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks.
arXiv Detail & Related papers (2021-04-22T13:39:12Z)