Contrastive Domain Adaptation
- URL: http://arxiv.org/abs/2103.15566v1
- Date: Fri, 26 Mar 2021 13:55:19 GMT
- Title: Contrastive Domain Adaptation
- Authors: Mamatha Thota and Georgios Leontidis
- Abstract summary: We propose to extend contrastive learning to a new domain adaptation setting.
Contrastive learning learns by comparing and contrasting positive and negative pairs of samples in an unsupervised setting.
We have developed a variation of a recently proposed contrastive learning framework that helps tackle the domain adaptation problem.
- Score: 4.822598110892847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, contrastive self-supervised learning has become a key component for
learning visual representations across many computer vision tasks and
benchmarks. However, contrastive learning in the context of domain adaptation
remains largely underexplored. In this paper, we propose to extend contrastive
learning to a new domain adaptation setting, in which similarity is learned on
samples drawn from one probability distribution and deployed on samples drawn
from another, without access to labels. Contrastive learning works by comparing
and contrasting positive and negative pairs of samples in an unsupervised
setting, without access to source or target labels. We have developed a
variation of a recently proposed contrastive learning framework that helps
tackle the domain adaptation problem, further identifying and removing
candidate negatives that are similar to the anchor in order to mitigate the
effects of false negatives. Extensive experiments demonstrate that the proposed
method adapts well and improves performance on the downstream domain adaptation
task.
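The false-negative removal described in the abstract can be illustrated as a small modification of an InfoNCE-style contrastive loss. The sketch below is not the authors' implementation; the function names and the cosine-similarity threshold `fn_threshold` are assumptions made for illustration only:

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def contrastive_loss(anchor, positive, negatives, temperature=0.5, fn_threshold=0.9):
    """InfoNCE-style loss that discards candidate negatives whose cosine
    similarity to the anchor exceeds fn_threshold, on the assumption that
    such samples are likely false negatives (same underlying class)."""
    pos = math.exp(cosine(anchor, positive) / temperature)
    neg_sims = [cosine(anchor, n) for n in negatives]
    # Keep only negatives that are sufficiently dissimilar to the anchor;
    # the rest are treated as likely false negatives and removed.
    kept = [s for s in neg_sims if s < fn_threshold]
    denom = pos + sum(math.exp(s / temperature) for s in kept)
    return -math.log(pos / denom)
```

Removing a near-duplicate of the anchor from the denominator lowers the loss relative to keeping it, which is the intended effect: the model is no longer penalized for embedding semantically identical samples close together.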
Related papers
- Time-Series Contrastive Learning against False Negatives and Class Imbalance [17.43801009251228]
We conduct a theoretical analysis and find that existing methods overlook two fundamental issues: false negatives and class imbalance, both inherent in the InfoNCE loss-based framework.
We introduce a straightforward modification, grounded in the SimCLR framework, that applies universally to models engaged in the instance discrimination task.
We perform semi-supervised consistency classification and enhance the representational capacity of minority classes.
arXiv Detail & Related papers (2023-12-19T08:38:03Z)
- Learning Transferable Adversarial Robust Representations via Multi-view Consistency [57.73073964318167]
We propose a novel meta-adversarial multi-view representation learning framework with dual encoders.
We demonstrate the effectiveness of our framework on few-shot learning tasks from unseen domains.
arXiv Detail & Related papers (2022-10-19T11:48:01Z)
- Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing [55.73722120043086]
We introduce Contrast and Mix (CoMix), a new contrastive learning framework that aims to learn discriminative invariant feature representations for unsupervised video domain adaptation.
First, we utilize temporal contrastive learning to bridge the domain gap by maximizing the similarity between encoded representations of an unlabeled video at two different speeds.
Second, we propose a novel extension to the temporal contrastive loss by using background mixing that allows additional positives per anchor, thus adapting contrastive learning to leverage action semantics shared across both domains.
arXiv Detail & Related papers (2021-10-28T14:03:29Z)
- Dense Contrastive Visual-Linguistic Pretraining [53.61233531733243]
Several multimodal representation learning approaches have been proposed that jointly represent image and text.
These approaches achieve superior performance by capturing high-level semantic information from large-scale multimodal pretraining.
We propose unbiased Dense Contrastive Visual-Linguistic Pretraining to replace the region regression and classification with cross-modality region contrastive learning.
arXiv Detail & Related papers (2021-09-24T07:20:13Z)
- Incremental False Negative Detection for Contrastive Learning [95.68120675114878]
We introduce a novel incremental false negative detection for self-supervised contrastive learning.
During contrastive learning, we discuss two strategies to explicitly remove the detected false negatives.
Our proposed method outperforms other self-supervised contrastive learning frameworks on multiple benchmarks within a limited compute.
arXiv Detail & Related papers (2021-06-07T15:29:14Z)
- Domain Adaptation for Semantic Segmentation via Patch-Wise Contrastive Learning [62.7588467386166]
We leverage contrastive learning to bridge the domain gap by aligning the features of structurally similar label patches across domains.
Our approach consistently outperforms state-of-the-art unsupervised and semi-supervised methods on two challenging domain adaptive segmentation tasks.
arXiv Detail & Related papers (2021-04-22T13:39:12Z)
- Selective Pseudo-Labeling with Reinforcement Learning for Semi-Supervised Domain Adaptation [116.48885692054724]
We propose a reinforcement learning based selective pseudo-labeling method for semi-supervised domain adaptation.
We develop a deep Q-learning model to select both accurate and representative pseudo-labeled instances.
Our proposed method is evaluated on several benchmark datasets for SSDA, and demonstrates superior performance to all the comparison methods.
arXiv Detail & Related papers (2020-12-07T03:37:38Z)
- On Mutual Information in Contrastive Learning for Visual Representations [19.136685699971864]
Unsupervised "contrastive" learning algorithms in vision have been shown to learn representations that perform remarkably well on transfer tasks.
We show that this family of algorithms maximizes a lower bound on the mutual information between two or more "views" of an image.
We find that the choice of negative samples and views are critical to the success of these algorithms.
arXiv Detail & Related papers (2020-05-27T04:21:53Z)
- Discriminative Active Learning for Domain Adaptation [16.004653151961303]
We introduce a discriminative active learning approach for domain adaptation to reduce the efforts of data annotation.
Specifically, we propose three-stage active adversarial training of neural networks.
Empirical comparisons with existing domain adaptation methods using four benchmark datasets demonstrate the effectiveness of the proposed approach.
arXiv Detail & Related papers (2020-05-24T04:20:49Z)
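Several of the entries above refer to the InfoNCE loss and its mutual-information interpretation. In standard notation (with $f$ a learned similarity critic and $N$ the batch size; symbols are generic and not taken from any single paper listed here), the objective and the bound it implies are:

```latex
% InfoNCE loss over one positive pair (x, y) and N-1 negatives y_j:
\mathcal{L}_{\mathrm{InfoNCE}}
  = -\,\mathbb{E}\!\left[
      \log \frac{e^{f(x, y)}}{\tfrac{1}{N} \sum_{j=1}^{N} e^{f(x, y_j)}}
    \right]
% which lower-bounds the mutual information between the two views:
I(X; Y) \;\ge\; \log N - \mathcal{L}_{\mathrm{InfoNCE}}
```

This is the bound that motivates both the "views" analysis and the false-negative concern above: when a supposed negative $y_j$ actually shares the anchor's class, the denominator is inflated and the bound is distorted.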
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.