Discovering Domain Disentanglement for Generalized Multi-source Domain
Adaptation
- URL: http://arxiv.org/abs/2207.05070v1
- Date: Mon, 11 Jul 2022 04:33:08 GMT
- Title: Discovering Domain Disentanglement for Generalized Multi-source Domain
Adaptation
- Authors: Zixin Wang, Yadan Luo, Peng-Fei Zhang, Sen Wang, Zi Huang
- Abstract summary: A typical multi-source domain adaptation (MSDA) approach aims to transfer knowledge learned from a set of labeled source domains to an unlabeled target domain.
We propose a variational domain disentanglement (VDD) framework, which decomposes the domain representations and semantic features for each instance by encouraging dimension-wise independence.
- Score: 48.02978226737235
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A typical multi-source domain adaptation (MSDA) approach aims to transfer
knowledge learned from a set of labeled source domains to an unlabeled target
domain. Nevertheless, prior works strictly assume that each source domain
shares an identical set of classes with the target domain, which can hardly be
guaranteed since the target label space is not observable. In this
paper, we consider a more versatile setting of MSDA, namely Generalized
Multi-source Domain Adaptation, wherein the source domains partially overlap,
and the target domain is allowed to contain novel categories that are not
present in any source domain. This new setting is more challenging than any
existing domain adaptation protocol due to the coexistence of the domain
and category shifts across the source and target domains. To address this
issue, we propose a variational domain disentanglement (VDD) framework, which
decomposes the domain representations and semantic features for each instance
by encouraging dimension-wise independence. To identify the target samples of
unknown classes, we leverage online pseudo-labeling, which assigns
pseudo-labels to unlabeled target data based on their confidence scores.
Quantitative and qualitative experiments conducted on two benchmark datasets
demonstrate the validity of the proposed framework.
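The abstract does not spell out implementation details, so the following is only a minimal illustrative sketch of the two stated ideas, written in PyTorch with hypothetical names: an encoder splits each instance into a semantic code and a domain code, a cross-covariance penalty serves as a simple stand-in for the dimension-wise independence objective, and a confidence threshold performs online pseudo-labeling, routing low-confidence target samples to an extra "unknown" class. None of this is the authors' actual VDD formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangledEncoder(nn.Module):
    """Hypothetical sketch: split each instance into a semantic code and a
    domain code (not the authors' VDD architecture)."""
    def __init__(self, in_dim: int, sem_dim: int = 64, dom_dim: int = 16):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU())
        self.sem_head = nn.Linear(256, sem_dim)   # class-relevant factors
        self.dom_head = nn.Linear(256, dom_dim)   # domain-specific factors

    def forward(self, x):
        h = self.backbone(x)
        return self.sem_head(h), self.dom_head(h)

def independence_penalty(sem: torch.Tensor, dom: torch.Tensor) -> torch.Tensor:
    """Penalize the cross-covariance between semantic and domain codes, a
    simple decorrelation surrogate for dimension-wise independence."""
    sem = sem - sem.mean(dim=0, keepdim=True)
    dom = dom - dom.mean(dim=0, keepdim=True)
    cross_cov = sem.t() @ dom / max(sem.size(0) - 1, 1)
    return cross_cov.pow(2).sum()

@torch.no_grad()
def online_pseudo_labels(logits: torch.Tensor, threshold: float = 0.9):
    """Assign pseudo-labels to unlabeled target data from classifier
    confidence; samples below the threshold are treated as a possible
    unknown class (extra label index)."""
    probs = F.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)
    known_mask = conf >= threshold
    labels = labels.clone()
    labels[~known_mask] = logits.size(1)   # reserve one index for "unknown"
    return labels, known_mask
```

In a full training loop one would combine a source classification loss with the independence penalty and use the pseudo-labels (plus the unknown flag) for target samples; the actual objective is variational and is defined in the paper itself.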
Related papers
- Unsupervised Domain Adaptation for Extra Features in the Target Domain Using Optimal Transport [3.6042575355093907]
Most domain adaptation methods assume that the source and target domains have the same dimensionality.
In this paper, it is assumed that common features exist in both domains and that extra (newly added) features are observed in the target domain.
To leverage the homogeneity of the common features, the adaptation between these source and target domains is formulated as an optimal transport problem; a generic sketch of this idea follows the entry.
arXiv Detail & Related papers (2022-09-10T04:35:58Z)
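The optimal transport formulation is not detailed in the summary above, so the snippet below is only a generic sketch of entropically regularized OT (Sinkhorn iterations) between source and target samples restricted to their shared feature columns; names such as common_dims are assumptions, not the paper's notation.

```python
import numpy as np

def sinkhorn_plan(Xs, Xt, common_dims, reg=0.1, n_iters=200):
    """Entropic optimal transport between source rows Xs and target rows Xt,
    using only the feature columns present in both domains (generic sketch)."""
    Xs_c, Xt_c = Xs[:, common_dims], Xt[:, common_dims]
    a = np.full(len(Xs_c), 1.0 / len(Xs_c))   # uniform source weights
    b = np.full(len(Xt_c), 1.0 / len(Xt_c))   # uniform target weights
    # Squared Euclidean cost on the common feature subspace.
    C = ((Xs_c[:, None, :] - Xt_c[None, :, :]) ** 2).sum(axis=2)
    K = np.exp(-C / reg)
    u = np.ones_like(a)
    for _ in range(n_iters):                  # Sinkhorn fixed-point updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]        # coupling of shape (n_src, n_tgt)
```

A barycentric mapping, i.e. the plan times Xt_c divided by the plan's row sums, would then transport source samples toward the target distribution on the common features.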
- Making the Best of Both Worlds: A Domain-Oriented Transformer for Unsupervised Domain Adaptation [31.150256154504696]
Unsupervised Domain Adaptation (UDA) has propelled the deployment of deep learning from limited experimental datasets into real-world unconstrained domains.
Most UDA approaches align features within a common embedding space and apply a shared classifier for target prediction.
We propose to simultaneously conduct feature alignment in two separate spaces, each focusing on a different domain, and to create a domain-oriented classifier for each space.
arXiv Detail & Related papers (2022-08-02T01:38:37Z)
- Dynamic Instance Domain Adaptation [109.53575039217094]
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels that generates instance-adaptive residuals, adapting domain-agnostic deep features to each individual instance (a generic sketch of this idea follows the entry).
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
arXiv Detail & Related papers (2022-03-09T20:05:54Z)
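The dynamic-kernel idea mentioned above can be illustrated with a rough, hypothetical PyTorch module (not the DIDA-Net architecture): a small controller predicts per-instance depthwise kernels, and the resulting response is added back as an instance-adaptive residual on top of domain-agnostic features.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InstanceAdaptiveResidual(nn.Module):
    """Hypothetical sketch: predict a depthwise kernel per instance and add
    the resulting response as a residual to the input features."""
    def __init__(self, channels: int, kernel_size: int = 3):
        super().__init__()
        self.kernel_size = kernel_size
        self.controller = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(channels, channels * kernel_size * kernel_size),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        k = self.kernel_size
        # One depthwise k x k kernel per instance and channel.
        kernels = self.controller(x).view(b * c, 1, k, k)
        # Grouped-conv trick: fold the batch into the channel axis so every
        # instance is convolved with its own predicted kernels.
        out = F.conv2d(x.reshape(1, b * c, h, w), kernels,
                       padding=k // 2, groups=b * c)
        return x + out.view(b, c, h, w)   # instance-adaptive residual

# Example: adapt a batch of 8 feature maps with 64 channels.
# y = InstanceAdaptiveResidual(64)(torch.randn(8, 64, 32, 32))
```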
- Aligning Domain-specific Distribution and Classifier for Cross-domain Classification from Multiple Sources [25.204055330850164]
We propose a new framework with two alignment stages for Unsupervised Domain Adaptation.
Our method can achieve remarkable results on popular benchmark datasets for image classification.
arXiv Detail & Related papers (2022-01-04T06:35:11Z)
- Domain Consistency Regularization for Unsupervised Multi-source Domain Adaptive Classification [57.92800886719651]
Deep learning-based multi-source unsupervised domain adaptation (MUDA) has been actively studied in recent years.
Domain shift in MUDA exists not only between the source and target domains but also among the multiple source domains.
We propose an end-to-end trainable network that exploits domain Consistency Regularization for unsupervised Multi-source domain Adaptive classification.
arXiv Detail & Related papers (2021-06-16T07:29:27Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393]
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider pixel-level alignment between the sources and the target.
We propose a novel MDA framework to address these challenges.
arXiv Detail & Related papers (2020-02-19T21:22:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.