Generalized Domain Conditioned Adaptation Network
- URL: http://arxiv.org/abs/2103.12339v1
- Date: Tue, 23 Mar 2021 06:24:26 GMT
- Title: Generalized Domain Conditioned Adaptation Network
- Authors: Shuang Li, Binhui Xie, Qiuxia Lin, Chi Harold Liu, Gao Huang and
Guoren Wang
- Abstract summary: Domain Adaptation (DA) attempts to transfer knowledge learned in a labeled source domain to an unlabeled but related target domain.
Recent advances in DA mainly proceed by aligning the source and target distributions.
We develop Generalized Domain Conditioned Adaptation Network (GDCAN) to automatically determine whether domain channel activations should be separately modeled in each attention module.
- Score: 33.13337928537281
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain Adaptation (DA) attempts to transfer knowledge learned in the labeled
source domain to the unlabeled but related target domain without requiring
large amounts of target supervision. Recent advances in DA mainly proceed by
aligning the source and target distributions. Despite this significant success,
the adaptation performance still degrades when the source and target domains
exhibit a large distribution discrepancy. We attribute this limitation to the
insufficient exploration of domain-specialized features, since most studies
merely concentrate on domain-general feature learning in task-specific layers
and adopt fully-shared convolutional networks (convnets) to generate common
features for both domains. In this paper, we relax the fully-shared convnets
assumption adopted by previous DA methods and propose the Domain Conditioned
Adaptation Network (DCAN), which introduces a domain conditioned channel
attention module with a multi-path structure to separately excite channel
activations for each domain. Such a partially-shared convnets module allows
low-level domain-specialized features to be explored appropriately. Further,
since knowledge transferability varies across convolutional layers, we develop
the Generalized Domain Conditioned Adaptation Network (GDCAN) to automatically
determine whether domain channel activations should be separately modeled in
each attention module. The critical domain-specialized knowledge can then be
adaptively extracted according to the gaps in domain statistics. As far as we know,
this is the first work to explore the domain-wise convolutional channel
activations separately for deep DA networks. Additionally, to effectively match
high-level feature distributions across domains, we consider deploying feature
adaptation blocks after task-specific layers, which can explicitly mitigate the
domain discrepancy.
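To make the attention mechanism concrete, below is a minimal PyTorch sketch of a domain conditioned channel attention module with a multi-path structure and a GDCAN-style gate. The abstract gives no code, so the squeeze-and-excitation form, the scalar gate, and all names (DomainConditionedChannelAttention, source_path, target_path) are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of domain conditioned channel attention, inferred from the
# abstract only. The paper derives the separate-vs-shared decision from domain
# statistic gaps; a learnable scalar gate is used here purely for illustration.
import torch
import torch.nn as nn


class DomainConditionedChannelAttention(nn.Module):
    """Squeeze-and-excitation style attention with a multi-path structure:
    one excitation path per domain plus a shared path, blended by a gate
    that decides how separately the two domains should be modeled."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global spatial average

        def excitation() -> nn.Sequential:
            return nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        self.source_path = excitation()  # domain-specialized path (source)
        self.target_path = excitation()  # domain-specialized path (target)
        self.shared_path = excitation()  # domain-general path
        # Assumed gating mechanism: 0 -> fully shared, 1 -> fully separate.
        self.gate = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, domain: str) -> torch.Tensor:
        b, c, _, _ = x.shape
        squeezed = self.pool(x).view(b, c)
        specialized = (self.source_path if domain == "source"
                       else self.target_path)(squeezed)
        shared = self.shared_path(squeezed)
        alpha = torch.sigmoid(self.gate)
        weights = alpha * specialized + (1 - alpha) * shared
        return x * weights.view(b, c, 1, 1)  # re-weight channels per domain
```

A gate near zero collapses the module to a single shared path, while a gate near one excites each domain's channel activations separately, mirroring the per-module decision GDCAN is said to make automatically.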
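The abstract also mentions feature adaptation blocks deployed after task-specific layers to explicitly mitigate the domain discrepancy. Here is a similarly hedged sketch; the residual refinement and the linear-kernel MMD alignment loss are plausible assumptions inferred from the abstract, not the paper's exact design.

```python
# Hedged sketch of a feature adaptation block and an explicit alignment loss.
# Layer sizes, the residual form, and the MMD variant are assumptions.
import torch
import torch.nn as nn


class FeatureAdaptationBlock(nn.Module):
    """Residual bottleneck that refines task-specific features so the source
    and target feature distributions can be matched explicitly."""

    def __init__(self, dim: int, hidden: int = 256):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Linear(dim, hidden),
            nn.ReLU(inplace=True),
            nn.Linear(hidden, dim),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        return feats + self.refine(feats)  # residual refinement


def linear_mmd(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """Linear-kernel MMD: squared distance between the domain feature means,
    a simple stand-in for the alignment loss applied to the block outputs."""
    delta = source.mean(dim=0) - target.mean(dim=0)
    return (delta * delta).sum()
```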
Related papers
- Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z)
- From Big to Small: Adaptive Learning to Partial-Set Domains [94.92635970450578]
Domain adaptation targets at knowledge acquisition and dissemination from a labeled source domain to an unlabeled target domain under distribution shift.
Recent advances show that large-scale deep pre-trained models endow rich knowledge for tackling diverse small-scale downstream tasks.
This paper introduces Partial Domain Adaptation (PDA), a learning paradigm that relaxes the identical class space assumption so that the source class space only needs to subsume the target class space.
arXiv Detail & Related papers (2022-03-14T07:02:45Z)
- Exploiting Domain-Specific Features to Enhance Domain Generalization [10.774902700296249]
Domain Generalization (DG) aims to train a model, from multiple observed source domains, in order to perform well on unseen target domains.
Prior DG approaches have focused on extracting domain-invariant information across sources to generalize on target domains.
We propose meta-Domain Specific-Domain Invariant (mD), a novel, theoretically sound framework.
arXiv Detail & Related papers (2021-10-18T15:42:39Z)
- Self-Adversarial Disentangling for Specific Domain Adaptation [52.1935168534351]
Domain adaptation aims to bridge the domain shifts between the source and target domains.
Recent methods typically do not consider explicit prior knowledge on a specific dimension.
arXiv Detail & Related papers (2021-08-08T02:36:45Z)
- Domain Consistency Regularization for Unsupervised Multi-source Domain Adaptive Classification [57.92800886719651]
Deep learning-based multi-source unsupervised domain adaptation (MUDA) has been actively studied in recent years.
Domain shift in MUDA exists not only between the source and target domains but also among the multiple source domains.
We propose an end-to-end trainable network that exploits domain consistency regularization for unsupervised multi-source domain adaptive classification.
arXiv Detail & Related papers (2021-06-16T07:29:27Z)
- Disentanglement-based Cross-Domain Feature Augmentation for Effective Unsupervised Domain Adaptive Person Re-identification [87.72851934197936]
Unsupervised domain adaptive (UDA) person re-identification (ReID) aims to transfer the knowledge from the labeled source domain to the unlabeled target domain for person matching.
One challenge is how to generate target domain samples with reliable labels for training.
We propose a Disentanglement-based Cross-Domain Feature Augmentation strategy.
arXiv Detail & Related papers (2021-03-25T15:28:41Z)
- Domain Conditioned Adaptation Network [90.63261870610211]
We propose a Domain Conditioned Adaptation Network (DCAN) to excite distinct convolutional channels with a domain conditioned channel attention mechanism.
This is the first work to explore the domain-wise convolutional channel activation for deep DA networks.
arXiv Detail & Related papers (2020-05-14T04:23:24Z)
- Deep Residual Correction Network for Partial Domain Adaptation [79.27753273651747]
Deep domain adaptation methods have achieved appealing performance by learning transferable representations from a well-labeled source domain to a different but related unlabeled target domain.
This paper proposes an efficiently-implemented Deep Residual Correction Network (DRCN).
Comprehensive experiments on partial, traditional and fine-grained cross-domain visual recognition demonstrate that DRCN is superior to the competitive deep domain adaptation approaches.
arXiv Detail & Related papers (2020-04-10T06:07:16Z)