CDA: Contrastive-adversarial Domain Adaptation
- URL: http://arxiv.org/abs/2301.03826v1
- Date: Tue, 10 Jan 2023 07:43:21 GMT
- Title: CDA: Contrastive-adversarial Domain Adaptation
- Authors: Nishant Yadav, Mahbubul Alam, Ahmed Farahat, Dipanjan Ghosh, Chetan
Gupta, Auroop R. Ganguly
- Abstract summary: We propose a two-stage model for domain adaptation called Contrastive-adversarial Domain Adaptation (CDA).
While the adversarial component facilitates domain-level alignment, two-stage contrastive learning exploits class information to achieve higher intra-class compactness across domains.
- Score: 11.354043674822451
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in domain adaptation reveal that adversarial learning on deep
neural networks can learn domain invariant features to reduce the shift between
source and target domains. While such adversarial approaches achieve
domain-level alignment, they ignore the class (label) shift. When
class-conditional data distributions are significantly different between the
source and target domain, it can generate ambiguous features near class
boundaries that are more likely to be misclassified. In this work, we propose a
two-stage model for domain adaptation called \textbf{C}ontrastive-adversarial
\textbf{D}omain \textbf{A}daptation \textbf{(CDA)}. While the adversarial
component facilitates domain-level alignment, two-stage contrastive learning
exploits class information to achieve higher intra-class compactness across
domains resulting in well-separated decision boundaries. Furthermore, the
proposed contrastive framework is designed as a plug-and-play module that can
be easily embedded with existing adversarial methods for domain adaptation. We
conduct experiments on two widely used benchmark datasets for domain
adaptation, namely, \textit{Office-31} and \textit{Digits-5}, and demonstrate
that CDA achieves state-of-the-art results on both datasets.
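The two-stage design pairs domain-level adversarial alignment with class-aware contrastive learning. As a rough illustration of the contrastive ingredient, here is a minimal NumPy sketch of a generic supervised contrastive loss that pulls same-class features together and pushes other classes apart; the function name and formulation are illustrative assumptions, not CDA's exact loss.

```python
import numpy as np

def supervised_contrastive_loss(features, labels, temperature=0.5):
    # Generic supervised contrastive loss over L2-normalized features:
    # same-class pairs are treated as positives, all others as negatives.
    # Illustrative sketch only -- not CDA's exact two-stage formulation.
    features = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = features @ features.T / temperature        # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)                   # exclude self-comparisons
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    same = (labels[:, None] == labels[None, :]) & ~np.eye(len(labels), dtype=bool)
    pos_counts = same.sum(axis=1)
    # mean log-probability over each anchor's positive pairs
    per_anchor = -np.where(same, log_prob, 0.0).sum(axis=1) / np.maximum(pos_counts, 1)
    return per_anchor[pos_counts > 0].mean()
```

Lower values indicate tighter intra-class clusters, which is the compactness property the abstract attributes to the contrastive stage.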
Related papers
- Making the Best of Both Worlds: A Domain-Oriented Transformer for
Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
arXiv Detail & Related papers (2022-08-02T01:38:37Z)
- Unsupervised Domain Adaptation for Semantic Segmentation via Low-level Edge Information Transfer [27.64947077788111]
Unsupervised domain adaptation for semantic segmentation aims to make models trained on synthetic data adapt to real images.
Previous feature-level adversarial learning methods only consider adapting models on the high-level semantic features.
We present the first attempt at explicitly using low-level edge information, which has a small inter-domain gap, to guide the transfer of semantic information.
arXiv Detail & Related papers (2021-09-18T11:51:31Z)
- Multi-Level Features Contrastive Networks for Unsupervised Domain Adaptation [6.934905764152813]
Unsupervised domain adaptation aims to train a model from the labeled source domain to make predictions on the unlabeled target domain.
Existing methods tend to align the two domains directly at the domain level, or perform class-level alignment based on deep features.
In this paper, we build on the class-level alignment approach.
arXiv Detail & Related papers (2021-09-14T09:23:27Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Contrastive Learning and Self-Training for Unsupervised Domain Adaptation in Semantic Segmentation [71.77083272602525]
UDA attempts to provide efficient knowledge transfer from a labeled source domain to an unlabeled target domain.
We propose a contrastive learning approach that adapts category-wise centroids across domains.
We extend our method with self-training, where we use a memory-efficient temporal ensemble to generate consistent and reliable pseudo-labels.
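A memory-efficient temporal ensemble of the kind mentioned above can be approximated with a single exponential moving average of per-sample predictions, thresholded into pseudo-labels. The class below is a hypothetical sketch under that assumption, not the paper's actual implementation.

```python
import numpy as np

class TemporalEnsemble:
    # EMA of per-sample class probabilities: memory-efficient because it
    # stores one running vector per sample instead of a prediction history.
    # Hypothetical sketch -- not the paper's exact scheme.
    def __init__(self, num_samples, num_classes, momentum=0.9):
        self.momentum = momentum
        self.ensemble = np.zeros((num_samples, num_classes))

    def update(self, indices, probs):
        # Blend the newest softmax outputs into the running average.
        m = self.momentum
        self.ensemble[indices] = m * self.ensemble[indices] + (1 - m) * probs

    def pseudo_labels(self, indices, threshold=0.8):
        # Keep only confident predictions; -1 marks samples to ignore.
        probs = self.ensemble[indices]
        confident = probs.max(axis=1) >= threshold
        return np.where(confident, probs.argmax(axis=1), -1)
```

Averaging over many epochs smooths out noisy individual predictions, which is what makes the resulting pseudo-labels consistent enough for self-training.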
arXiv Detail & Related papers (2021-05-05T11:55:53Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across the source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or within each estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state-of-the-arts on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes that of the target domain.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across the two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
- Contextual-Relation Consistent Domain Adaptation for Semantic Segmentation [44.19436340246248]
This paper presents an innovative local contextual-relation consistent domain adaptation technique.
It aims to achieve local-level consistencies during the global-level alignment.
Experiments demonstrate its superior segmentation performance as compared with state-of-the-art methods.
arXiv Detail & Related papers (2020-07-05T19:00:46Z)
- Cross-domain Detection via Graph-induced Prototype Alignment [114.8952035552862]
We propose a Graph-induced Prototype Alignment (GPA) framework to seek for category-level domain alignment.
In addition, in order to alleviate the negative effect of class-imbalance on domain adaptation, we design a Class-reweighted Contrastive Loss.
Our approach outperforms existing methods with a remarkable margin.
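Category-level alignment of this kind is often built around class prototypes, i.e. per-class feature centroids. As a rough, hypothetical sketch (not GPA's graph-induced formulation), the loss below simply compares source and target centroids class by class.

```python
import numpy as np

def prototype_alignment_loss(src_feats, src_labels, tgt_feats, tgt_labels, num_classes):
    # Squared distance between per-class feature centroids (prototypes)
    # of the source and target domains, averaged over classes present in
    # both. Hypothetical sketch of category-level alignment, not GPA's
    # graph-based method; target labels would be pseudo-labels in practice.
    loss, matched = 0.0, 0
    for c in range(num_classes):
        s = src_feats[src_labels == c]
        t = tgt_feats[tgt_labels == c]
        if len(s) == 0 or len(t) == 0:
            continue  # class absent in one domain: nothing to align
        loss += np.sum((s.mean(axis=0) - t.mean(axis=0)) ** 2)
        matched += 1
    return loss / max(matched, 1)
```

Skipping classes that appear in only one domain is one simple way to reduce the class-imbalance effect that GPA's class-reweighted contrastive loss targets.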
arXiv Detail & Related papers (2020-03-28T17:46:55Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation labels the unlabeled target domain by relying on well-established source domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic structure of the data.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.