Exploiting Both Domain-specific and Invariant Knowledge via a Win-win
Transformer for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2111.12941v1
- Date: Thu, 25 Nov 2021 06:45:07 GMT
- Title: Exploiting Both Domain-specific and Invariant Knowledge via a Win-win
Transformer for Unsupervised Domain Adaptation
- Authors: Wenxuan Ma and Jinming Zhang and Shuang Li and Chi Harold Liu and
Yulin Wang and Wei Li
- Abstract summary: Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
Most existing UDA approaches enable knowledge transfer via learning domain-invariant representation and sharing one classifier across two domains.
We propose a Win-Win TRansformer framework (WinTR) that separately explores the domain-specific knowledge for each domain and interchanges cross-domain knowledge.
- Score: 14.623272346517794
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised Domain Adaptation (UDA) aims to transfer knowledge from a
labeled source domain to an unlabeled target domain. Most existing UDA
approaches enable knowledge transfer via learning domain-invariant
representation and sharing one classifier across two domains. However,
ignoring task-relevant domain-specific information and forcing a unified
classifier to fit both domains limits the feature expressiveness in
each domain. In this paper, by observing that the Transformer architecture with
comparable parameters can generate more transferable representations than CNN
counterparts, we propose a Win-Win TRansformer framework (WinTR) that
separately explores the domain-specific knowledge for each domain while
interchanging cross-domain knowledge. Specifically, we learn two different
mappings using two individual classification tokens in the Transformer, and
design for each one a domain-specific classifier. The cross-domain knowledge is
transferred via source guided label refinement and single-sided feature
alignment with respect to source or target, which keeps the integrity of
domain-specific information. Extensive experiments on three benchmark datasets
show that our method outperforms the state-of-the-art UDA methods, validating
the effectiveness of exploiting both domain-specific and invariant knowledge.
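As a rough illustration of the two-mapping design, the sketch below (PyTorch-style, all names hypothetical) carries a separate source and target classification token through a shared Transformer encoder and pairs each token with its own domain-specific classifier; the actual WinTR objective adds source-guided label refinement and one-sided alignment, omitted here.

```python
import torch
import torch.nn as nn

class TwoTokenViT(nn.Module):
    """Sketch of a Transformer encoder with separate source/target class
    tokens, each feeding its own classifier (after WinTR's two-mapping idea)."""
    def __init__(self, dim=768, depth=12, heads=12, num_classes=31, num_patches=196):
        super().__init__()
        self.src_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.tgt_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, num_patches + 2, dim))
        layer = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, depth)
        self.src_head = nn.Linear(dim, num_classes)   # source-specific classifier
        self.tgt_head = nn.Linear(dim, num_classes)   # target-specific classifier

    def forward(self, patch_embeds):                  # patch_embeds: (B, N, dim)
        b = patch_embeds.size(0)
        tokens = torch.cat([self.src_token.expand(b, -1, -1),
                            self.tgt_token.expand(b, -1, -1),
                            patch_embeds], dim=1) + self.pos_embed
        out = self.encoder(tokens)
        # two different mappings of the same image, one per domain token
        return self.src_head(out[:, 0]), self.tgt_head(out[:, 1])
```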
Related papers
- Making the Best of Both Worlds: A Domain-Oriented Transformer for
Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two individual spaces focusing on different domains, and create for each space a domain-oriented classifier.
arXiv Detail & Related papers (2022-08-02T01:38:37Z)
- Discovering Domain Disentanglement for Generalized Multi-source Domain
Adaptation [48.02978226737235]
A typical multi-source domain adaptation (MSDA) approach aims to transfer knowledge learned from a set of labeled source domains, to an unlabeled target domain.
We propose a variational domain disentanglement (VDD) framework, which decomposes the domain representations and semantic features for each instance by encouraging dimension-wise independence.
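A toy sketch of the decomposition idea, with a simple decorrelation penalty standing in for the paper's variational, dimension-wise independence objective (all names hypothetical):

```python
import torch
import torch.nn as nn

class DisentangledEncoder(nn.Module):
    """Toy sketch: split an instance embedding into a domain representation
    and semantic features; VDD's actual objective is variational."""
    def __init__(self, in_dim=512, dom_dim=64, sem_dim=256):
        super().__init__()
        self.dom_head = nn.Linear(in_dim, dom_dim)    # domain representation
        self.sem_head = nn.Linear(in_dim, sem_dim)    # semantic features

    def forward(self, feats):
        return self.dom_head(feats), self.sem_head(feats)

def decorrelation_penalty(z):
    """Penalize off-diagonal covariance as a crude stand-in for
    dimension-wise independence."""
    z = z - z.mean(dim=0, keepdim=True)
    cov = (z.T @ z) / max(z.size(0) - 1, 1)
    off_diag = cov - torch.diag(torch.diag(cov))
    return (off_diag ** 2).sum()
```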
arXiv Detail & Related papers (2022-07-11T04:33:08Z)
- Domain Invariant Masked Autoencoders for Self-supervised Learning from
Multi-domains [73.54897096088149]
We propose a Domain-invariant Masked AutoEncoder (DiMAE) for self-supervised learning from multi-domains.
The core idea is to augment the input image with style noise from different domains and then reconstruct the image from the embedding of the augmented image.
Experiments on PACS and DomainNet illustrate that DiMAE achieves considerable gains compared with recent state-of-the-art methods.
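A minimal sketch of the augment-then-reconstruct loop; here AdaIN-style statistic shuffling within a batch stands in for cross-domain style noise, and encoder/decoder are assumed callables:

```python
import torch
import torch.nn.functional as F

def add_style_noise(x, eps=1e-5):
    """Toy stand-in for cross-domain style noise: swap per-image channel
    statistics within the batch so content is kept but style changes."""
    b = x.size(0)
    mu = x.mean(dim=(2, 3), keepdim=True)
    sigma = x.std(dim=(2, 3), keepdim=True) + eps
    perm = torch.randperm(b)
    return (x - mu) / sigma * sigma[perm] + mu[perm]

def dimae_step(encoder, decoder, images):
    """Reconstruct the clean image from the embedding of the stylized one."""
    noisy = add_style_noise(images)
    recon = decoder(encoder(noisy))
    return F.mse_loss(recon, images)
```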
arXiv Detail & Related papers (2022-05-10T09:49:40Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels to generate instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
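One way such instance-adaptive residuals can be realized is with per-sample dynamic depthwise kernels, as in this hypothetical sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceAdaptiveResidual(nn.Module):
    """Sketch of instance-conditioned dynamic convolution: a small head
    predicts a depthwise 3x3 kernel per sample, and the resulting residual
    is added to the domain-agnostic feature map."""
    def __init__(self, channels, k=3):
        super().__init__()
        self.k = k
        self.kernel_gen = nn.Linear(channels, channels * k * k)

    def forward(self, feats):                         # feats: (B, C, H, W)
        b, c, h, w = feats.shape
        ctx = feats.mean(dim=(2, 3))                  # (B, C) global context
        kernels = self.kernel_gen(ctx).view(b * c, 1, self.k, self.k)
        # grouped-conv trick: fold batch into channels for per-sample kernels
        x = feats.reshape(1, b * c, h, w)
        res = F.conv2d(x, kernels, padding=self.k // 2, groups=b * c)
        return feats + res.view(b, c, h, w)
```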
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
- Topic Driven Adaptive Network for Cross-Domain Sentiment Classification [6.196375060616161]
We propose a Topic Driven Adaptive Network (TDAN) for cross-domain sentiment classification.
The network consists of two sub-networks: semantics attention network and domain-specific word attention network.
Experiments validate the effectiveness of our TDAN on sentiment classification across domains.
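A toy sketch of combining two attention branches over word embeddings (shapes and names are assumptions; TDAN's actual sub-networks are richer):

```python
import torch
import torch.nn as nn

class TwoBranchAttention(nn.Module):
    """Toy sketch: a semantics attention branch and a domain-specific word
    attention branch each summarize the sentence; both summaries feed the
    sentiment classifier."""
    def __init__(self, dim, num_classes=2):
        super().__init__()
        self.sem_score = nn.Linear(dim, 1)
        self.dom_score = nn.Linear(dim, 1)
        self.cls = nn.Linear(2 * dim, num_classes)

    def forward(self, word_embeds):                   # (B, T, dim)
        sem_w = torch.softmax(self.sem_score(word_embeds), dim=1)
        dom_w = torch.softmax(self.dom_score(word_embeds), dim=1)
        sem = (sem_w * word_embeds).sum(dim=1)        # semantics-attended summary
        dom = (dom_w * word_embeds).sum(dim=1)        # domain-word summary
        return self.cls(torch.cat([sem, dom], dim=1))
```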
arXiv Detail & Related papers (2021-11-28T10:17:11Z)
- CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation [44.06904757181245]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to a different unlabeled target domain.
One fundamental problem for category-level UDA is producing pseudo labels for samples in the target domain.
We design a two-way center-aware labeling algorithm to produce pseudo labels for target samples.
Along with the pseudo labels, a weight-sharing triple-branch transformer framework is proposed to apply self-attention and cross-attention for source/target feature learning and source-target domain alignment.
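A one-way version of center-aware labeling might look like the sketch below: source class centers are built first, and each target sample takes the label of its nearest center (the paper's algorithm is two-way and adds filtering):

```python
import torch
import torch.nn.functional as F

def center_aware_pseudo_labels(src_feats, src_labels, tgt_feats, num_classes):
    """Sketch of one direction of center-aware pseudo labeling: build source
    class centers, then label each target sample by its nearest center in
    cosine similarity. Assumes every class appears in src_labels."""
    src = F.normalize(src_feats, dim=1)
    tgt = F.normalize(tgt_feats, dim=1)
    centers = torch.stack([src[src_labels == c].mean(dim=0)
                           for c in range(num_classes)])
    sims = tgt @ F.normalize(centers, dim=1).T        # (N_tgt, num_classes)
    conf, labels = sims.max(dim=1)
    return labels, conf                               # conf can gate training
```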
arXiv Detail & Related papers (2021-09-13T17:59:07Z)
- Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
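In its simplest form, such cross-domain contrastive alignment reduces to an InfoNCE loss between matched source/target features, as in this sketch (the paper's loss is class-aware; this shows only the basic ingredient):

```python
import torch
import torch.nn.functional as F

def cross_domain_info_nce(src_feats, tgt_feats, temperature=0.1):
    """Toy InfoNCE between row-aligned source/target features, e.g. pairs
    sharing a pseudo label."""
    src = F.normalize(src_feats, dim=1)
    tgt = F.normalize(tgt_feats, dim=1)
    logits = src @ tgt.T / temperature                # (N, N) similarities
    targets = torch.arange(src.size(0), device=logits.device)
    return F.cross_entropy(logits, targets)           # i-th src matches i-th tgt
```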
arXiv Detail & Related papers (2021-06-10T06:32:30Z)
- Adversarial Dual Distinct Classifiers for Unsupervised Domain Adaptation [67.83872616307008]
Unsupervised Domain Adaptation (UDA) attempts to recognize the unlabeled target samples by building a learning model from a differently-distributed labeled source domain.
In this paper, we propose a novel Adversarial Dual Distinct Classifiers Network (AD$^2$CN) to align the source and target domain data distributions simultaneously while matching task-specific category boundaries.
To be specific, a domain-invariant feature generator is exploited to embed the source and target data into a latent common space with the guidance of discriminative cross-domain alignment.
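Dual-classifier adversarial methods typically rely on a discrepancy term between the two classifiers' target predictions; a common choice (shown as a sketch, not necessarily the paper's exact loss) is the L1 gap between their softmax outputs:

```python
import torch.nn.functional as F

def classifier_discrepancy(logits_a, logits_b):
    """L1 discrepancy between two distinct classifiers' predictions; the
    generator minimizes it on target data while the classifiers maximize it,
    sharpening task-specific category boundaries."""
    probs_a = F.softmax(logits_a, dim=1)
    probs_b = F.softmax(logits_b, dim=1)
    return (probs_a - probs_b).abs().mean()
```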
arXiv Detail & Related papers (2020-08-27T01:29:10Z)
- Domain2Vec: Domain Embedding for Unsupervised Domain Adaptation [56.94873619509414]
Conventional unsupervised domain adaptation studies the knowledge transfer between a limited number of domains.
We propose a novel Domain2Vec model to provide vectorial representations of visual domains based on joint learning of feature disentanglement and Gram matrix.
We demonstrate that our embedding is capable of predicting domain similarities that match our intuition about visual relations between different domains.
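A bare-bones Gram-matrix domain embedding, one ingredient of the approach, could be computed as follows (hypothetical helper; Domain2Vec additionally learns disentangled features):

```python
import torch

def gram_domain_embedding(feature_batches):
    """Hypothetical helper: summarize a domain by the average Gram matrix of
    its deep features, flattened into a vector."""
    grams = [feats.T @ feats / feats.size(0)          # (D, D) per (N, D) batch
             for feats in feature_batches]
    return torch.stack(grams).mean(dim=0).flatten()   # one vector per domain
```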
arXiv Detail & Related papers (2020-07-17T22:05:09Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
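A squeeze-and-excitation style gate with one excitation branch per domain gives a rough feel for domain-conditioned channel attention (names and shapes are assumptions):

```python
import torch
import torch.nn as nn

class DomainConditionedAttention(nn.Module):
    """Sketch: SE-style channel gating with a separate excitation branch per
    domain, loosely after DCAN's domain-wise channel activation."""
    def __init__(self, channels, reduction=16):
        super().__init__()
        def branch():
            return nn.Sequential(nn.Linear(channels, channels // reduction),
                                 nn.ReLU(inplace=True),
                                 nn.Linear(channels // reduction, channels),
                                 nn.Sigmoid())
        self.src_branch, self.tgt_branch = branch(), branch()

    def forward(self, feats, is_source):              # feats: (B, C, H, W)
        squeezed = feats.mean(dim=(2, 3))             # global average pool
        gate = (self.src_branch if is_source else self.tgt_branch)(squeezed)
        return feats * gate.unsqueeze(-1).unsqueeze(-1)
```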
arXiv Detail & Related papers (2020-05-14T04:23:24Z)