Domain Consistency Regularization for Unsupervised Multi-source Domain
Adaptive Classification
- URL: http://arxiv.org/abs/2106.08590v1
- Date: Wed, 16 Jun 2021 07:29:27 GMT
- Title: Domain Consistency Regularization for Unsupervised Multi-source Domain
Adaptive Classification
- Authors: Zhipeng Luo, Xiaobing Zhang, Shijian Lu, Shuai Yi
- Abstract summary: Deep learning-based multi-source unsupervised domain adaptation (MUDA) has been actively studied in recent years.
Domain shift in MUDA exists not only between the source and target domains but also among multiple source domains.
We propose an end-to-end trainable network that exploits domain Consistency Regularization for unsupervised Multi-source domain Adaptive classification (CRMA).
- Score: 57.92800886719651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based multi-source unsupervised domain adaptation (MUDA) has
been actively studied in recent years. Compared with single-source unsupervised
domain adaptation (SUDA), MUDA faces domain shift not only between the source
and target domains but also among multiple source domains. Most existing
MUDA algorithms focus on extracting domain-invariant representations among all
domains whereas the task-specific decision boundaries among classes are largely
neglected. In this paper, we propose an end-to-end trainable network that
exploits domain Consistency Regularization for unsupervised Multi-source domain
Adaptive classification (CRMA). CRMA aligns not only the distributions of each
pair of source and target domains but also that of all domains. For each pair
of source and target domains, we employ an intra-domain consistency to
regularize a pair of domain-specific classifiers to achieve intra-domain
alignment. In addition, we design an inter-domain consistency that targets
joint inter-domain alignment among all domains. To address different
similarities between multiple source domains and the target domain, we design
an authorization strategy that assigns different authorities to domain-specific
classifiers adaptively for optimal pseudo label prediction and self-training.
Extensive experiments show that CRMA tackles unsupervised domain adaptation
effectively under a multi-source setup and achieves superior adaptation
performance consistently across multiple MUDA datasets.
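The abstract describes three mechanisms: an intra-domain consistency between a pair of domain-specific classifiers for each source-target pair, an inter-domain consistency that jointly aligns all domains, and an authorization strategy that weights the classifiers for pseudo-label prediction. Below is a minimal sketch of how such losses could be wired together; the shared backbone, the use of exactly two classifiers per source domain, the L1 discrepancies, and the softmax-based authority weights are illustrative assumptions, not the authors' published formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CRMASketch(nn.Module):
    """Minimal sketch of pairwise and joint consistency; not the official CRMA code."""

    def __init__(self, in_dim=2048, feat_dim=256, num_classes=31, num_sources=3):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Assumption: each source domain owns a pair of domain-specific classifiers.
        self.clf_a = nn.ModuleList(nn.Linear(feat_dim, num_classes) for _ in range(num_sources))
        self.clf_b = nn.ModuleList(nn.Linear(feat_dim, num_classes) for _ in range(num_sources))

    def forward(self, x_target):
        f = self.backbone(x_target)
        probs_a = [F.softmax(c(f), dim=1) for c in self.clf_a]
        probs_b = [F.softmax(c(f), dim=1) for c in self.clf_b]

        # Intra-domain consistency: the two classifiers of each source-target pair
        # should agree on unlabeled target samples (L1 discrepancy as a stand-in).
        intra = torch.stack([(pa - pb).abs().mean() for pa, pb in zip(probs_a, probs_b)])
        loss_intra = intra.mean()

        # Inter-domain consistency: the per-pair mean predictions should also agree
        # with one another across all source domains (joint alignment stand-in).
        pair_means = [(pa + pb) / 2 for pa, pb in zip(probs_a, probs_b)]
        global_mean = torch.stack(pair_means).mean(dim=0)
        loss_inter = torch.stack([(p - global_mean).abs().mean() for p in pair_means]).mean()

        # Authorization (assumed form): pairs whose classifiers agree more receive
        # higher authority, and pseudo labels are fused with these weights.
        authority = F.softmax(-intra, dim=0)                      # shape: (num_sources,)
        fused = sum(w * p for w, p in zip(authority, pair_means))  # (batch, num_classes)
        pseudo_labels = fused.argmax(dim=1)

        return loss_intra, loss_inter, pseudo_labels
```

In a full training loop, these two consistency losses would be added to the supervised cross-entropy on the labeled source domains, and the fused pseudo labels would drive self-training on the unlabeled target domain, as the abstract indicates.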
Related papers
- Discovering Domain Disentanglement for Generalized Multi-source Domain Adaptation [48.02978226737235] (2022-07-11)
A typical multi-source domain adaptation (MSDA) approach aims to transfer knowledge learned from a set of labeled source domains to an unlabeled target domain.
We propose a variational domain disentanglement (VDD) framework, which decomposes the domain representations and semantic features for each instance by encouraging dimension-wise independence.
- Dynamic Instance Domain Adaptation [109.53575039217094] (2022-03-09)
Most studies on unsupervised domain adaptation assume that each domain's training samples come with domain labels.
We develop a dynamic neural network with adaptive convolutional kernels that generates instance-adaptive residuals to adapt domain-agnostic deep features to each individual instance.
Our model, dubbed DIDA-Net, achieves state-of-the-art performance on several commonly used single-source and multi-source UDA datasets.
- Bridging the Source-to-target Gap for Cross-domain Person Re-Identification with Intermediate Domains [63.23373987549485] (2022-03-03)
Cross-domain person re-identification (re-ID) aims to transfer the identity-discriminative knowledge from the source to the target domain.
We propose an Intermediate Domain Module (IDM) and a Mirrors Generation Module (MGM).
IDM generates multiple intermediate domains by mixing the hidden-layer features from the source and target domains (a toy feature-mixing sketch follows this list).
MGM maps the features into the IDM-generated intermediate domains without changing their original identity.
- Aligning Domain-specific Distribution and Classifier for Cross-domain Classification from Multiple Sources [25.204055330850164] (2022-01-04)
We propose a new framework with two alignment stages for unsupervised domain adaptation.
Our method achieves remarkable results on popular benchmark datasets for image classification.
- Universal Multi-Source Domain Adaptation [17.045689789877926] (2020-11-05)
Unsupervised domain adaptation enables intelligent models to transfer knowledge from a labeled source domain to a similar but unlabeled target domain.
A recent study reveals that knowledge can also be transferred from one source domain to another unknown target domain, a setting called Universal Domain Adaptation (UDA).
We propose a universal multi-source adaptation network (UMAN) to solve the domain adaptation problem without increasing the complexity of the model.
- Mutual Learning Network for Multi-Source Domain Adaptation [73.25974539191553] (2020-03-29)
We propose a novel multi-source domain adaptation method, Mutual Learning Network for Multiple Source Domain Adaptation (ML-MSDA).
Under the framework of mutual learning, the proposed method pairs the target domain with each single source domain to train a conditional adversarial domain adaptation network as a branch network.
The proposed method outperforms the comparison methods and achieves state-of-the-art performance.
- MADAN: Multi-source Adversarial Domain Aggregation Network for Domain Adaptation [58.38749495295393] (2020-02-19)
Domain adaptation aims to learn a transferable model to bridge the domain shift between one labeled source domain and another sparsely labeled or unlabeled target domain.
Recent multi-source domain adaptation (MDA) methods do not consider the pixel-level alignment between sources and target.
We propose a novel MDA framework to address these challenges.
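The IDM entry above describes generating intermediate domains by mixing hidden-layer features from the source and target domains. A toy illustration of that idea, using random per-sample convex combinations of feature batches (an assumed mixing scheme for illustration, not the published IDM module):

```python
import torch

def mix_intermediate_features(feat_source, feat_target, num_domains=3):
    """Form toy "intermediate domain" batches as convex combinations of source
    and target hidden-layer features, each of shape (batch, feat_dim)."""
    intermediates = []
    for _ in range(num_domains):
        lam = torch.rand(feat_source.size(0), 1)  # per-sample mixing weight in [0, 1)
        intermediates.append(lam * feat_source + (1 - lam) * feat_target)
    return intermediates
```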
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.