Multi-Level Features Contrastive Networks for Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2109.06543v1
- Date: Tue, 14 Sep 2021 09:23:27 GMT
- Title: Multi-Level Features Contrastive Networks for Unsupervised Domain
Adaptation
- Authors: Le Liu, Jieren Cheng, Boyi Liu, Yue Yang, Ke Zhou, Qiaobo Da
- Abstract summary: Unsupervised domain adaptation aims to train a model from the labeled source domain to make predictions on the unlabeled target domain.
Existing methods tend to align the two domains directly at the domain level, or perform class-level domain alignment based on deep features.
In this paper, we build on class-level alignment methods.
- Score: 6.934905764152813
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised domain adaptation aims to train a model from the labeled source
domain to make predictions on the unlabeled target domain when the data
distribution of the two domains is different. As a result, it needs to reduce
the data distribution difference between the two domains to improve the model's
generalization ability. Existing methods tend to align the two domains directly
at the domain level, or perform class-level domain alignment based on deep
features. The former ignores the relationships between the various classes in
the two domains, which may cause serious negative transfer; the latter
alleviates this by introducing pseudo-labels for the target domain, but it does
not consider the importance of performing class-level alignment on shallow
feature representations. In this paper, we build on class-level alignment
methods. The proposed method dramatically reduces the difference between the
two domains by aligning multi-level features. In the case that the two
domains share the label space, the class-level alignment is implemented by
introducing Multi-Level Feature Contrastive Networks (MLFCNet). In practice,
since the categories of samples in the target domain are unavailable, we
iteratively apply a clustering algorithm to obtain pseudo-labels, and then
minimize Multi-Level Contrastive Discrepancy (MLCD) loss to achieve more
accurate class-level alignment. Experiments on three real-world benchmarks
ImageCLEF-DA, Office-31 and Office-Home demonstrate that MLFCNet compares
favorably against the existing state-of-the-art domain adaptation methods.
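The pipeline described above (cluster target features for pseudo-labels, then minimize a class-level contrastive discrepancy summed over feature levels) can be sketched roughly as follows. This is a minimal NumPy illustration under stated assumptions, not the paper's implementation: the function names, the plain k-means, and the margin-based centroid contrast are stand-ins for MLFCNet's actual networks and MLCD loss.

```python
import numpy as np

def pseudo_labels_kmeans(feats, k, iters=20, seed=0):
    """Plain k-means to assign pseudo-labels to unlabeled target features."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)]
    labels = np.zeros(len(feats), dtype=int)
    for _ in range(iters):
        # Distance from every feature to every center, then nearest-center assignment.
        dists = np.linalg.norm(feats[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = feats[labels == c].mean(axis=0)
    return labels

def class_contrastive_discrepancy(src_feats, src_labels, tgt_feats,
                                  tgt_labels, k, margin=1.0):
    """Pull same-class centroids together across domains; push different-class
    centroids apart up to a margin (a stand-in for the MLCD loss)."""
    loss = 0.0
    for ci in range(k):
        for cj in range(k):
            if not ((src_labels == ci).any() and (tgt_labels == cj).any()):
                continue
            mu_s = src_feats[src_labels == ci].mean(axis=0)
            mu_t = tgt_feats[tgt_labels == cj].mean(axis=0)
            dist = np.linalg.norm(mu_s - mu_t)
            loss += dist if ci == cj else max(0.0, margin - dist)
    return loss

def multi_level_discrepancy(src_levels, src_labels, tgt_levels, k):
    """Sum the class-level discrepancy over feature levels (shallow and deep),
    re-clustering the target features at each level for pseudo-labels."""
    total = 0.0
    for fs, ft in zip(src_levels, tgt_levels):
        tgt_labels = pseudo_labels_kmeans(ft, k)
        total += class_contrastive_discrepancy(fs, src_labels, ft, tgt_labels, k)
    return total
```

In training this would be applied per mini-batch, with the clustering step refreshed iteratively as the feature extractor improves, so the pseudo-labels become more reliable over time.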
Related papers
- CDA: Contrastive-adversarial Domain Adaptation [11.354043674822451]
We propose a two-stage model for domain adaptation called Contrastive-Adversarial Domain Adaptation (CDA).
While the adversarial component facilitates domain-level alignment, two-stage contrastive learning exploits class information to achieve higher intra-class compactness across domains.
arXiv Detail & Related papers (2023-01-10T07:43:21Z) - Making the Best of Both Worlds: A Domain-Oriented Transformer for
Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
arXiv Detail & Related papers (2022-08-02T01:38:37Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
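Cross-domain contrastive alignment of this kind is typically built on an InfoNCE-style objective. A minimal sketch (a hypothetical helper, not this paper's code) pairs a source anchor with a same-class target positive against other-class target negatives:

```python
import numpy as np

def l2_normalize(x):
    """Project features onto the unit sphere (cosine-similarity space)."""
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def cross_domain_info_nce(src_anchor, tgt_positive, tgt_negatives, tau=0.1):
    """InfoNCE loss for one source anchor: the same-class target feature is
    the positive, other-class target features are the negatives."""
    a = l2_normalize(src_anchor)
    p = l2_normalize(tgt_positive)
    n = l2_normalize(tgt_negatives)
    # Similarity logits: positive first, then all negatives, temperature-scaled.
    logits = np.concatenate(([a @ p], n @ a)) / tau
    logits = logits - logits.max()  # numerical stability before exponentiation
    return -np.log(np.exp(logits[0]) / np.exp(logits).sum())
```

The loss falls as the anchor moves toward its cross-domain positive and away from the negatives, which is what drives the domain discrepancy down in these methods.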
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - More Separable and Easier to Segment: A Cluster Alignment Method for
Cross-Domain Semantic Segmentation [41.81843755299211]
We propose a new UDA semantic segmentation approach based on domain assumption closeness to alleviate the above problems.
Specifically, a prototype clustering strategy is applied to cluster pixels with the same semantic, which will better maintain associations among target domain pixels.
Experiments conducted on GTA5 and SYNTHIA proved the effectiveness of our method.
arXiv Detail & Related papers (2021-05-07T10:24:18Z) - Cross-Domain Grouping and Alignment for Domain Adaptive Semantic
Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or the estimated categories.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts the adaptation performance in semantic segmentation, outperforming the state-of-the-arts on various domain adaptation settings.
arXiv Detail & Related papers (2020-12-15T11:36:21Z) - Select, Label, and Mix: Learning Discriminative Invariant Feature
Representations for Partial Domain Adaptation [55.73722120043086]
We develop a "Select, Label, and Mix" (SLM) framework to learn discriminative invariant feature representations for partial domain adaptation.
First, we present a simple yet efficient "select" module that automatically filters out outlier source samples to avoid negative transfer.
Second, the "label" module iteratively trains the classifier using both the labeled source domain data and the generated pseudo-labels for the target domain to enhance the discriminability of the latent space.
arXiv Detail & Related papers (2020-12-06T19:29:32Z) - Cross-domain Detection via Graph-induced Prototype Alignment [114.8952035552862]
We propose a Graph-induced Prototype Alignment (GPA) framework to seek for category-level domain alignment.
In addition, in order to alleviate the negative effect of class-imbalance on domain adaptation, we design a Class-reweighted Contrastive Loss.
Our approach outperforms existing methods with a remarkable margin.
arXiv Detail & Related papers (2020-03-28T17:46:55Z) - Differential Treatment for Stuff and Things: A Simple Unsupervised
Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We also show that our method helps ease this issue by minimizing the distance between the most similar stuff and instance features across the source and target domains.
arXiv Detail & Related papers (2020-03-18T04:43:25Z) - Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation facilitates learning on the unlabeled target domain by relying on well-established source domain information.
Conventional methods that forcefully reduce the domain discrepancy in the latent space can destroy the intrinsic data structure.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.