Domain Adaptation and Image Classification via Deep Conditional
Adaptation Network
- URL: http://arxiv.org/abs/2006.07776v2
- Date: Tue, 3 May 2022 15:10:20 GMT
- Title: Domain Adaptation and Image Classification via Deep Conditional
Adaptation Network
- Authors: Pengfei Ge, Chuan-Xian Ren, Dao-Qing Dai, Hong Yan
- Abstract summary: Unsupervised domain adaptation aims to generalize the supervised model trained on a source domain to an unlabeled target domain.
Marginal distribution alignment of feature spaces is widely used to reduce the domain discrepancy between the source and target domains.
We propose a novel unsupervised domain adaptation method, Deep Conditional Adaptation Network (DCAN), based on conditional distribution alignment of feature spaces.
- Score: 26.09932710494144
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised domain adaptation aims to generalize the supervised model
trained on a source domain to an unlabeled target domain. Marginal distribution
alignment of feature spaces is widely used to reduce the domain discrepancy
between the source and target domains. However, it assumes that the source and
target domains share the same label distribution, which limits its
application scope. In this paper, we consider a more general application
scenario where the label distributions of the source and target domains are not
the same. In this scenario, marginal distribution alignment-based methods will
be vulnerable to negative transfer. To address this issue, we propose a novel
unsupervised domain adaptation method, Deep Conditional Adaptation Network
(DCAN), based on conditional distribution alignment of feature spaces. To be
specific, we reduce the domain discrepancy by minimizing the Conditional
Maximum Mean Discrepancy between the conditional distributions of deep features
on the source and target domains, and extract discriminant information from the
target domain by maximizing the mutual information between samples and the
prediction labels. In addition, DCAN can be used to address a special scenario,
partial unsupervised domain adaptation, where the target domain categories are a
subset of the source domain categories. Experiments on both unsupervised domain
adaptation and partial unsupervised domain adaptation show that DCAN achieves
superior classification performance over state-of-the-art methods.
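The two objectives in the abstract, conditional distribution alignment via a Conditional Maximum Mean Discrepancy and mutual-information maximization over target predictions, can be sketched as follows. This is an illustrative NumPy sketch under assumed choices: the Gaussian kernel, the use of target pseudo-labels, and all function names are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def gaussian_kernel(x, y, sigma=1.0):
    # RBF kernel matrix between the rows of x (n, d) and y (m, d)
    d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def mmd2(x, y, sigma=1.0):
    # Biased estimate of the squared Maximum Mean Discrepancy
    return (gaussian_kernel(x, x, sigma).mean()
            + gaussian_kernel(y, y, sigma).mean()
            - 2.0 * gaussian_kernel(x, y, sigma).mean())

def conditional_mmd(src_feat, src_labels, tgt_feat, tgt_pseudo,
                    num_classes, sigma=1.0):
    # Average per-class MMD between source features (true labels) and
    # target features (pseudo-labels): aligns conditional distributions
    # rather than the marginals.
    total, used = 0.0, 0
    for c in range(num_classes):
        xs = src_feat[src_labels == c]
        xt = tgt_feat[tgt_pseudo == c]
        if len(xs) > 1 and len(xt) > 1:
            total += mmd2(xs, xt, sigma)
            used += 1
    return total / max(used, 1)

def mutual_information(probs, eps=1e-8):
    # I(sample; label) = H(marginal prediction) - mean per-sample entropy;
    # maximizing it makes predictions confident yet diverse across classes.
    p_mean = probs.mean(axis=0)
    h_marg = -(p_mean * np.log(p_mean + eps)).sum()
    h_cond = -(probs * np.log(probs + eps)).sum(axis=1).mean()
    return h_marg - h_cond
```

A full training loss would combine source cross-entropy with `conditional_mmd(...)` minus a weight times `mutual_information(...)`; the weighting and the pseudo-label refresh schedule are left unspecified here.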
Related papers
- Generalizing to Unseen Domains with Wasserstein Distributional Robustness under Limited Source Knowledge [22.285156929279207]
Domain generalization aims at learning a universal model that performs well on unseen target domains.
We propose a novel domain generalization framework called Wasserstein Distributionally Robust Domain Generalization (WDRDG).
arXiv Detail & Related papers (2022-07-11T14:46:50Z)
- Discovering Domain Disentanglement for Generalized Multi-source Domain Adaptation [48.02978226737235]
A typical multi-source domain adaptation (MSDA) approach aims to transfer knowledge learned from a set of labeled source domains, to an unlabeled target domain.
We propose a variational domain disentanglement (VDD) framework, which decomposes the domain representations and semantic features for each instance by encouraging dimension-wise independence.
arXiv Detail & Related papers (2022-07-11T04:33:08Z)
- Discriminative Domain-Invariant Adversarial Network for Deep Domain Generalization [33.84004077585957]
We propose a discriminative domain-invariant adversarial network (DDIAN) for domain generalization.
DDIAN achieves better prediction on unseen target data during training compared to state-of-the-art domain generalization approaches.
arXiv Detail & Related papers (2021-08-20T04:24:12Z)
- Preserving Semantic Consistency in Unsupervised Domain Adaptation Using Generative Adversarial Networks [33.84004077585957]
We propose a novel end-to-end semantic-consistent generative adversarial network (SCGAN).
This network can achieve source-to-target domain matching by capturing semantic information at the feature level.
We demonstrate the robustness of our proposed method, which exceeds state-of-the-art performance in unsupervised domain adaptation settings.
arXiv Detail & Related papers (2021-04-28T12:23:30Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across the source and target domains, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
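The multi-sample contrastive idea above can be illustrated with a minimal InfoNCE-style sketch; the cosine similarity, temperature, and function name are assumptions for illustration, not ILA-DA's exact formulation.

```python
import numpy as np

def contrastive_loss(anchor, positives, negatives, tau=0.1):
    # Pull the anchor toward its similar samples and push it away from
    # dissimilar ones, summing over multiple positives and negatives.
    def cos_sim(a, b):
        a = a / np.linalg.norm(a, axis=-1, keepdims=True)
        b = b / np.linalg.norm(b, axis=-1, keepdims=True)
        return a @ b.T
    pos = np.exp(cos_sim(anchor[None, :], positives) / tau).sum()
    neg = np.exp(cos_sim(anchor[None, :], negatives) / tau).sum()
    return -np.log(pos / (pos + neg))
```

The loss is near zero when the anchor matches its positives and large when it matches its negatives, so minimizing it over source-target pairs drives the alignment.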
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Re-energizing Domain Discriminator with Sample Relabeling for Adversarial Domain Adaptation [88.86865069583149]
Unsupervised domain adaptation (UDA) methods exploit domain adversarial training to align the features to reduce domain gap.
In this work, we propose an efficient optimization strategy named Re-enforceable Adversarial Domain Adaptation (RADA).
RADA aims to re-energize the domain discriminator during the training by using dynamic domain labels.
arXiv Detail & Related papers (2021-03-22T08:32:55Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- Adversarial Network with Multiple Classifiers for Open Set Domain Adaptation [9.251407403582501]
This paper focuses on the type of open set domain adaptation setting where the target domain has both private ('unknown classes') label space and the shared ('known classes') label space.
Prevalent distribution-matching domain adaptation methods are inadequate in such a setting.
We propose a novel adversarial domain adaptation model with multiple auxiliary classifiers.
arXiv Detail & Related papers (2020-07-01T11:23:07Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts performance of target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.