Unsupervised Domain Expansion from Multiple Sources
- URL: http://arxiv.org/abs/2005.12544v1
- Date: Tue, 26 May 2020 07:02:35 GMT
- Title: Unsupervised Domain Expansion from Multiple Sources
- Authors: Jing Zhang, Wanqing Li, Lu Sheng, Chang Tang, Philip Ogunbona
- Abstract summary: This paper presents a method for unsupervised multi-source domain expansion (UMSDE) where only the pre-learned models of the source domains and unlabelled new domain data are available.
We propose to use the predicted class probability of the unlabelled data in the new domain produced by different source models to jointly mitigate the biases among domains, exploit the discriminative information in the new domain, and preserve the performance in the source domains.
- Score: 39.03086451203708
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given an existing system learned from previous source domains, it is
desirable in some applications to adapt the system to new domains without
accessing, and without forgetting, the previous domains. This problem is known as domain
expansion. Unlike traditional domain adaptation in which the target domain is
the domain defined by new data, in domain expansion the target domain is formed
jointly by the source domains and the new domain (hence, domain expansion) and
the label function to be learned must work for the expanded domain.
Specifically, this paper presents a method for unsupervised multi-source domain
expansion (UMSDE) where only the pre-learned models of the source domains and
unlabelled new domain data are available. We propose to use the predicted class
probability of the unlabelled data in the new domain produced by different
source models to jointly mitigate the biases among domains, exploit the
discriminative information in the new domain, and preserve the performance in
the source domains. Experimental results on the VLCS, ImageCLEF-DA and PACS
datasets verify the effectiveness of the proposed method.
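As a minimal illustration of the idea described in this abstract, the sketch below (PyTorch) averages the class probabilities that frozen source models assign to unlabelled new-domain data and distils that consensus into a single expanded model. This is a hedged reading of the abstract, not the authors' implementation; all names (source_models, expansion_step, and so on) are hypothetical.

```python
# Illustrative sketch only -- not the paper's code. Frozen source models
# vote on unlabelled new-domain data; their averaged class probabilities
# act as soft pseudo-labels for an expanded model.
import torch
import torch.nn.functional as F

def soft_pseudo_labels(source_models, x):
    """Average class probabilities predicted by the frozen source models."""
    with torch.no_grad():
        probs = [F.softmax(m(x), dim=1) for m in source_models]
    return torch.stack(probs).mean(dim=0)  # (batch, num_classes)

def expansion_step(expanded_model, source_models, x_new, optimizer):
    """One training step: fit the expanded model to the source consensus.
    Staying close to the consensus loosely mirrors 'mitigating biases
    among domains while preserving performance in the source domains'."""
    targets = soft_pseudo_labels(source_models, x_new)
    log_p = F.log_softmax(expanded_model(x_new), dim=1)
    loss = F.kl_div(log_p, targets, reduction="batchmean")
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```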
Related papers
- Meta-causal Learning for Single Domain Generalization [102.53303707563612]
Single domain generalization aims to learn a model from a single training domain (source domain) and apply it to multiple unseen test domains (target domains).
Existing methods focus on expanding the distribution of the training domain to cover the target domains, but without estimating the domain shift between the source and target domains.
We propose a new learning paradigm, namely simulate-analyze-reduce, which first simulates the domain shift by building an auxiliary domain as the target domain, then learns to analyze the causes of domain shift, and finally learns to reduce the domain shift for model adaptation.
arXiv Detail & Related papers (2023-04-07T15:46:38Z)
- Aggregation of Disentanglement: Reconsidering Domain Variations in Domain Generalization [9.577254317971933]
We argue that the domain variations also contain useful information, i.e., classification-aware information, for downstream tasks.
We propose a novel paradigm called Domain Disentanglement Network (DDN) to disentangle the domain expert features from the source domain images.
We also propose a new contrastive learning method to guide the domain expert features to form a more balanced and separable feature space (see the sketch after this list).
arXiv Detail & Related papers (2023-02-05T09:48:57Z)
- Unsupervised Domain Adaptation for Extra Features in the Target Domain Using Optimal Transport [3.6042575355093907]
Most domain adaptation methods assume that the source and target domains have the same dimensionality.
In this paper, it is assumed that common features exist in both domains and that extra (new) features are observed in the target domain.
To leverage the homogeneity of the common features, the adaptation between the source and target domains is formulated as an optimal transport problem (see the sketch after this list).
arXiv Detail & Related papers (2022-09-10T04:35:58Z)
- Discovering Domain Disentanglement for Generalized Multi-source Domain Adaptation [48.02978226737235]
A typical multi-source domain adaptation (MSDA) approach aims to transfer knowledge learned from a set of labeled source domains to an unlabeled target domain.
We propose a variational domain disentanglement (VDD) framework, which decomposes the domain representations and semantic features for each instance by encouraging dimension-wise independence.
arXiv Detail & Related papers (2022-07-11T04:33:08Z)
- FRIDA -- Generative Feature Replay for Incremental Domain Adaptation [34.00059350161178]
We propose a novel framework called Feature-based Incremental Domain Adaptation (FRIDA).
For domain alignment, we propose a simple extension of the popular domain adversarial neural network (DANN), called DANN-IB (see the gradient-reversal sketch after this list).
Experimental results on the Office-Home, Office-CalTech, and DomainNet datasets confirm that FRIDA achieves a better stability-plasticity trade-off than existing methods.
arXiv Detail & Related papers (2021-12-28T22:24:32Z)
- Domain Consistency Regularization for Unsupervised Multi-source Domain Adaptive Classification [57.92800886719651]
Deep learning-based multi-source unsupervised domain adaptation (MUDA) has been actively studied in recent years.
Domain shift in MUDA exists not only between the source and target domains but also among the multiple source domains.
We propose an end-to-end trainable network that exploits domain consistency regularization for unsupervised multi-source domain adaptive classification (see the sketch after this list).
arXiv Detail & Related papers (2021-06-16T07:29:27Z)
- Open Domain Generalization with Domain-Augmented Meta-Learning [83.59952915761141]
We study a novel and practical problem of Open Domain Generalization (OpenDG).
We propose a Domain-Augmented Meta-Learning framework to learn open-domain generalizable representations.
Experimental results on various multi-domain datasets demonstrate that the proposed Domain-Augmented Meta-Learning (DAML) outperforms prior methods for unseen domain recognition.
arXiv Detail & Related papers (2021-04-08T09:12:24Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation (see the pretext-task sketch after this list).
Our method significantly boosts target accuracy in the new target domain when only a few source labels are available.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
- Mind the Gap: Enlarging the Domain Gap in Open Set Domain Adaptation [65.38975706997088]
Open set domain adaptation (OSDA) assumes the presence of unknown classes in the target domain.
We show that existing state-of-the-art methods suffer a considerable performance drop in the presence of larger domain gaps.
We propose a novel framework to specifically address the larger domain gaps.
arXiv Detail & Related papers (2020-03-08T14:20:24Z)
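Below are short, self-contained sketches for some of the mechanisms named in the summaries above. Each is an illustrative rendering of the general technique, not the corresponding paper's released code, and all variable and function names are hypothetical.

For the contrastive objective mentioned in the DDN summary, a generic supervised contrastive loss of the kind that encourages a balanced, separable feature space might look like this:

```python
# Generic supervised contrastive loss (illustrative, not the DDN code):
# same-class features are pulled together, all others pushed apart.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features, labels, temperature=0.1):
    """features: (batch, dim) tensor; labels: (batch,) int tensor."""
    f = F.normalize(features, dim=1)
    sim = f @ f.t() / temperature                    # pairwise similarities
    n = f.size(0)
    eye = torch.eye(n, dtype=torch.bool, device=f.device)
    pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
    logits = sim.masked_fill(eye, -1e9)              # exclude self-pairs
    log_prob = F.log_softmax(logits, dim=1)
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)    # avoid divide-by-zero
    loss = -(log_prob * pos_mask).sum(dim=1) / pos_counts
    return loss.mean()
```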
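For the optimal-transport formulation over the common features, here is a sketch using the POT library; the paper's full method additionally handles the extra target-only features, which this sketch omits:

```python
# Sketch with the POT library (pip install pot); names are illustrative.
# Aligns only the features shared by both domains via entropic OT.
import numpy as np
import ot

def align_common_features(Xs_common, Xt_common, reg=0.1):
    """Xs_common: (ns, d) source features; Xt_common: (nt, d) target features."""
    ns, nt = len(Xs_common), len(Xt_common)
    a = np.full(ns, 1.0 / ns)            # uniform source weights
    b = np.full(nt, 1.0 / nt)            # uniform target weights
    M = ot.dist(Xs_common, Xt_common)    # squared Euclidean cost matrix
    M /= M.max()                         # rescale for numerical stability
    G = ot.sinkhorn(a, b, M, reg)        # (ns, nt) entropic transport plan
    # Barycentric mapping: each source point expressed in the target domain.
    Xs_mapped = ns * (G @ Xt_common)     # each row of G sums to ~1/ns
    return G, Xs_mapped
```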
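FRIDA's DANN-IB builds on the standard DANN gradient-reversal mechanism (Ganin & Lempitsky, 2015); the sketch below shows that underlying trick, not FRIDA or its information-bottleneck extension:

```python
# Standard DANN gradient-reversal layer (Ganin & Lempitsky, 2015) --
# the published mechanism that FRIDA's DANN-IB extends.
import torch

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; negated, scaled gradient backward."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage: features feed the label classifier directly, and the domain
# discriminator through grad_reverse(features); minimising the domain
# loss then drives the feature extractor to confuse the domains.
```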
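One plausible, generic form of domain consistency regularization across per-source classifiers (not necessarily the paper's exact loss) is to penalize disagreement with the consensus prediction on target samples:

```python
# Generic consistency regularizer (illustrative, not the paper's exact
# loss): per-source classifiers should agree on the same target batch.
import torch
import torch.nn.functional as F

def domain_consistency_loss(classifier_logits):
    """classifier_logits: list of (batch, num_classes) tensors, one per
    source-specific classifier, all computed on the same target batch."""
    probs = [F.softmax(z, dim=1) for z in classifier_logits]
    consensus = torch.stack(probs).mean(dim=0)
    # Penalise each classifier's deviation from the consensus prediction.
    return sum(F.mse_loss(p, consensus) for p in probs) / len(probs)
```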
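Cross-domain self-supervised methods typically attach a pretext task trained on both domains; a standard example is rotation prediction (Gidaris et al., 2018), sketched here under the assumption of NCHW image batches. The paper's actual pretext task may differ:

```python
# Standard rotation-prediction pretext task (Gidaris et al., 2018),
# assuming NCHW image batches; the paper's pretext task may differ.
import torch
import torch.nn.functional as F

def rotation_batch(images):
    """Rotate each image by 0/90/180/270 degrees; return images + labels."""
    rotated = [torch.rot90(images, k, dims=(2, 3)) for k in range(4)]
    labels = torch.arange(4, device=images.device).repeat_interleave(len(images))
    return torch.cat(rotated), labels

def pretext_loss(encoder, rotation_head, images):
    """Self-supervised loss usable on source and target images alike."""
    x, y = rotation_batch(images)
    return F.cross_entropy(rotation_head(encoder(x)), y)
```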