Making the Best of Both Worlds: A Domain-Oriented Transformer for
Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2208.01195v1
- Date: Tue, 2 Aug 2022 01:38:37 GMT
- Title: Making the Best of Both Worlds: A Domain-Oriented Transformer for
Unsupervised Domain Adaptation
- Authors: Wenxuan Ma, Jinming Zhang, Shuang Li, Chi Harold Liu, Yulin Wang, Wei
Li
- Abstract summary: Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
- Score: 31.150256154504696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Extensive studies on Unsupervised Domain Adaptation (UDA) have propelled the
deployment of deep learning from limited experimental datasets into real-world
unconstrained domains. Most UDA approaches align features within a common
embedding space and apply a shared classifier for target prediction. However,
since a perfectly aligned feature space may not exist when the domain
discrepancy is large, these methods suffer from two limitations. First, the
coercive domain alignment deteriorates target domain discriminability due to
lacking target label supervision. Second, the source-supervised classifier is
inevitably biased toward source data, and thus may underperform in the target domain. To
alleviate these issues, we propose to simultaneously conduct feature alignment
in two individual spaces focusing on different domains, and create for each
space a domain-oriented classifier tailored specifically for that domain.
Specifically, we design a Domain-Oriented Transformer (DOT) that has two
individual classification tokens to learn different domain-oriented
representations, and two classifiers to preserve domain-wise discriminability.
Theoretically guaranteed contrastive-based alignment and a source-guided
pseudo-label refinement strategy are utilized to explore both domain-invariant
and domain-specific information. Comprehensive experiments validate that our
method achieves state-of-the-art performance on several benchmarks.
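To make the two-token design concrete, the following is a minimal PyTorch sketch of the idea: a ViT-style encoder carries two individual classification tokens, each feeding its own domain-oriented classifier head. All names (DomainOrientedTransformer, src_token, tgt_token) and hyperparameters are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of DOT's two-token, two-classifier design (assumed, not official).
import torch
import torch.nn as nn

class DomainOrientedTransformer(nn.Module):
    def __init__(self, embed_dim=768, depth=12, num_heads=12, num_classes=65):
        super().__init__()
        # Two individual classification tokens, one per domain-oriented space.
        self.src_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.tgt_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        layer = nn.TransformerEncoderLayer(embed_dim, num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        # Two classifiers to preserve domain-wise discriminability.
        self.src_head = nn.Linear(embed_dim, num_classes)
        self.tgt_head = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_tokens):  # patch_tokens: (B, N, D) embedded patches
        B = patch_tokens.size(0)
        tokens = torch.cat([self.src_token.expand(B, -1, -1),
                            self.tgt_token.expand(B, -1, -1),
                            patch_tokens], dim=1)
        out = self.encoder(tokens)
        f_src, f_tgt = out[:, 0], out[:, 1]  # two domain-oriented representations
        return self.src_head(f_src), self.tgt_head(f_tgt), f_src, f_tgt
```

In such a setup, target predictions at inference would come from the target-oriented head, while the contrastive alignment and source-guided pseudo-label refinement described above would supply the target-side training signal.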
Related papers
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels that generate instance-adaptive residuals, adapting domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
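The core mechanism described here, per-instance kernels producing residuals on top of domain-agnostic features, might be sketched as follows. This is a hedged reading of the abstract; the module and names (InstanceAdaptiveResidual, kernel_gen) are invented for illustration rather than taken from DIDA-Net's code.

```python
# Hedged sketch: a lightweight generator predicts one depthwise kernel per
# channel for each instance, and the resulting residual adapts the features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceAdaptiveResidual(nn.Module):
    def __init__(self, channels=256, k=3):
        super().__init__()
        self.k = k
        self.kernel_gen = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels * k * k))

    def forward(self, x):  # x: (B, C, H, W) domain-agnostic features
        B, C, H, W = x.shape
        kernels = self.kernel_gen(x).reshape(B * C, 1, self.k, self.k)
        # Grouped conv applies each instance's own kernels (batch folded into groups).
        residual = F.conv2d(x.reshape(1, B * C, H, W), kernels,
                            padding=self.k // 2, groups=B * C)
        return x + residual.reshape(B, C, H, W)
```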
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Joint Distribution Alignment via Adversarial Learning for Domain Adaptive Object Detection [11.262560426527818]
Unsupervised domain adaptive object detection aims to adapt a well-trained detector from its original source domain with rich labeled data to a new target domain with unlabeled data.
Recently, mainstream approaches perform this task through adversarial learning, yet still suffer from two limitations.
We propose a joint adaptive detection framework (JADF) to address the above challenges.
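The abstract does not detail JADF's architecture, so the sketch below shows only the standard gradient-reversal construction (in the style of DANN) that such adversarial feature alignment typically builds on; it is a generic illustration, not JADF itself.

```python
# Generic DANN-style adversarial alignment via a gradient reversal layer (GRL).
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        # Reverse (and scale) gradients flowing back into the feature extractor.
        return -ctx.lam * grad_output, None

class DomainDiscriminator(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 256), nn.ReLU(),
                                 nn.Linear(256, 1))

    def forward(self, feat, lam=1.0):
        # The discriminator learns to tell domains apart; reversed gradients
        # push the backbone to make them indistinguishable.
        return self.net(GradReverse.apply(feat, lam))
```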
arXiv Detail & Related papers (2021-09-19T00:27:08Z)
- Multi-Level Features Contrastive Networks for Unsupervised Domain Adaptation [6.934905764152813]
Unsupervised domain adaptation aims to train a model from the labeled source domain to make predictions on the unlabeled target domain.
Existing methods tend to align the two domains directly at the domain level, or perform class-level alignment based on deep features.
In this paper, we build on and extend the class-level alignment approach.
arXiv Detail & Related papers (2021-09-14T09:23:27Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
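A minimal cross-domain contrastive loss in the spirit of this line of work is sketched below: target features are pulled toward source features of the same class. The exact loss in the paper differs, and the pseudo-labels (y_tgt_pseudo) are an assumption of this sketch.

```python
# Hedged InfoNCE-style cross-domain contrastive loss (illustrative only).
import torch
import torch.nn.functional as F

def cross_domain_info_nce(f_src, y_src, f_tgt, y_tgt_pseudo, tau=0.07):
    f_src = F.normalize(f_src, dim=1)           # (Ns, D) source features
    f_tgt = F.normalize(f_tgt, dim=1)           # (Nt, D) target features
    logits = f_tgt @ f_src.t() / tau            # (Nt, Ns) scaled similarities
    pos = (y_tgt_pseudo[:, None] == y_src[None, :]).float()  # same-class mask
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    # Average log-likelihood over all same-class source positives per target.
    loss = -(pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)
    return loss.mean()
```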
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Cross-Domain Grouping and Alignment for Domain Adaptive Semantic Segmentation [74.3349233035632]
Existing techniques for adapting semantic segmentation networks across source and target domains within deep convolutional neural networks (CNNs) do not consider inter-class variation within the target domain itself or within each estimated category.
We introduce a learnable clustering module, and a novel domain adaptation framework called cross-domain grouping and alignment.
Our method consistently boosts adaptation performance in semantic segmentation, outperforming the state of the art on various domain adaptation settings.
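One plausible form of such a learnable clustering module is soft assignment of features to learnable centroids, so that alignment can then operate per group; this sketch is an assumption based on the abstract, not the paper's implementation.

```python
# Hedged sketch: soft assignment of features to learnable cluster centroids.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LearnableClustering(nn.Module):
    def __init__(self, feat_dim=256, num_groups=8, tau=0.1):
        super().__init__()
        self.centroids = nn.Parameter(torch.randn(num_groups, feat_dim))
        self.tau = tau

    def forward(self, feats):  # feats: (B, D) per-pixel or pooled features
        f = F.normalize(feats, dim=1)
        c = F.normalize(self.centroids, dim=1)
        # Soft group assignments from temperature-scaled cosine similarity.
        return (f @ c.t() / self.tau).softmax(dim=1)  # (B, num_groups)
```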
arXiv Detail & Related papers (2020-12-15T11:36:21Z)
- Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation [67.83872616307008]
Unsupervised Domain Adaptation (UDA) attempts to recognize unlabeled target samples by building a learning model from a differently-distributed labeled source domain.
In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (AD$^2$CN) to align the source and target domain data distributions while simultaneously matching task-specific category boundaries.
To be specific, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment.
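The dual distinct classifiers named in the title are commonly realized as two separate heads over a shared generator, with their disagreement on target data used as an alignment signal; the following generic sketch (in the spirit of maximum classifier discrepancy) illustrates that construction, not AD$^2$CN's exact losses.

```python
# Generic dual-classifier discrepancy sketch (illustrative, not AD^2CN).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DualClassifierNet(nn.Module):
    def __init__(self, in_dim=2048, feat_dim=256, num_classes=31):
        super().__init__()
        # Domain-invariant feature generator shared by both classifiers.
        self.generator = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.clf1 = nn.Linear(feat_dim, num_classes)
        self.clf2 = nn.Linear(feat_dim, num_classes)

    def forward(self, x):
        f = self.generator(x)
        return self.clf1(f), self.clf2(f)

def discrepancy(logits1, logits2):
    # L1 distance between the two classifiers' softmax outputs on target data.
    return (F.softmax(logits1, 1) - F.softmax(logits2, 1)).abs().mean()
```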
arXiv Detail & Related papers (2020-08-27T01:29:10Z)
- Differential Treatment for Stuff and Things: A Simple Unsupervised Domain Adaptation Method for Semantic Segmentation [105.96860932833759]
State-of-the-art approaches prove that performing semantic-level alignment is helpful in tackling the domain shift issue.
We propose to improve the semantic-level alignment with different strategies for stuff regions and for things.
We further show that our method helps ease this issue by minimizing the distance between the most similar stuff and instance features across the source and target domains.
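The "most similar features" idea can be sketched as a nearest-match pull between region features of the two domains; the function below is an illustrative reading of the abstract, not the paper's implementation.

```python
# Hedged sketch: pull each target region feature toward its most similar
# source region feature (applied separately to stuff and thing features).
import torch
import torch.nn.functional as F

def most_similar_alignment_loss(f_src, f_tgt):
    # f_src: (Ns, D), f_tgt: (Nt, D) pooled region features
    f_src = F.normalize(f_src, dim=1)
    f_tgt = F.normalize(f_tgt, dim=1)
    sim = f_tgt @ f_src.t()          # (Nt, Ns) cosine similarities
    best = sim.max(dim=1).values     # each target's most similar source match
    return (1.0 - best).mean()       # minimize distance of matched pairs
```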
arXiv Detail & Related papers (2020-03-18T04:43:25Z)
- Bi-Directional Generation for Unsupervised Domain Adaptation [61.73001005378002]
Unsupervised domain adaptation leverages well-established source domain information to make predictions on the unlabeled target domain.
Conventional methods that forcefully reduce the domain discrepancy in the latent space destroy the intrinsic structure of the data.
We propose a Bi-Directional Generation domain adaptation model with consistent classifiers interpolating two intermediate domains to bridge source and target domains.
arXiv Detail & Related papers (2020-02-12T09:45:39Z)
- Dual Adversarial Domain Adaptation [6.69797982848003]
Unsupervised domain adaptation aims at transferring knowledge from the labeled source domain to the unlabeled target domain.
Recent experiments have shown that when the discriminator is provided with domain information from both domains, it is able to preserve complex multimodal information.
We adopt a discriminator with $2K$-dimensional output to perform both domain-level and class-level alignments simultaneously in a single discriminator.
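The $2K$-dimensional output can be read as $K$ source-class logits concatenated with $K$ target-class logits, so a single softmax carries both domain-level and class-level signals; the sketch below is a hedged interpretation of the abstract.

```python
# Hedged sketch: one discriminator with 2K outputs, [src class 1..K | tgt class 1..K].
import torch
import torch.nn as nn

class DualOutputDiscriminator(nn.Module):
    def __init__(self, feat_dim=256, num_classes=31):
        super().__init__()
        self.K = num_classes
        self.net = nn.Sequential(nn.Linear(feat_dim, 512), nn.ReLU(),
                                 nn.Linear(512, 2 * num_classes))

    def forward(self, feat):
        logits = self.net(feat)                  # (B, 2K)
        probs = logits.softmax(dim=1)
        p_source = probs[:, :self.K].sum(dim=1)  # domain-level: P(domain = source)
        return logits, p_source                  # class-level alignment uses logits
```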
arXiv Detail & Related papers (2020-01-01T07:10:09Z)