Self-Paced Learning for Open-Set Domain Adaptation
- URL: http://arxiv.org/abs/2303.05933v3
- Date: Tue, 21 Mar 2023 11:52:47 GMT
- Title: Self-Paced Learning for Open-Set Domain Adaptation
- Authors: Xinghong Liu, Yi Zhou, Tao Zhou, Jie Qin, Shengcai Liao
- Abstract summary: Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
- Score: 50.620824701934
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Domain adaptation tackles the challenge of generalizing knowledge acquired
from a source domain to a target domain with different data distributions.
Traditional domain adaptation methods presume that the classes in the source
and target domains are identical, which is not always the case in real-world
scenarios. Open-set domain adaptation (OSDA) addresses this limitation by
allowing previously unseen classes in the target domain. Open-set domain
adaptation aims to not only recognize target samples belonging to common
classes shared by source and target domains but also perceive unknown class
samples. We propose a novel framework based on self-paced learning to
distinguish common and unknown class samples precisely, referred to as SPLOS
(self-paced learning for open-set). To utilize unlabeled target samples for
self-paced learning, we generate pseudo labels and design a cross-domain mixup
method tailored for OSDA scenarios. This strategy minimizes the noise from
pseudo labels and ensures our model progressively learns common class features
of the target domain, beginning with simpler examples and advancing to more
complex ones. Furthermore, unlike existing OSDA methods that require manually
tuning a threshold hyperparameter to separate common and unknown classes, our
approach self-tunes a suitable threshold, eliminating the need for empirical
tuning during testing. Comprehensive experiments illustrate that our method
consistently achieves superior performance on different benchmarks compared
with various state-of-the-art methods.
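The two mechanisms the abstract describes — mixing pseudo-labeled target samples with labeled source samples to dampen pseudo-label noise, and a self-paced curriculum that starts from easy (high-confidence) samples — can be sketched roughly as below. This is a minimal illustration under stated assumptions, not the paper's actual SPLOS implementation; the names `cross_domain_mixup` and `select_easy_samples`, the Beta-distributed mixing coefficient, and the linear confidence schedule are all hypothetical choices.

```python
import numpy as np

def cross_domain_mixup(x_src, y_src, x_tgt, y_tgt_pseudo, alpha=0.2, rng=None):
    """Mix a labeled source batch with pseudo-labeled target samples.

    y_src and y_tgt_pseudo are one-hot (or soft) label arrays; interpolating
    them softens pseudo-label noise, since every mixed target sample retains
    a fraction of a trusted source label.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)  # mixing coefficient drawn from Beta(alpha, alpha)
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    y_mix = lam * y_src + (1.0 - lam) * y_tgt_pseudo
    return x_mix, y_mix

def select_easy_samples(confidences, epoch, start=0.95, decay=0.05, floor=0.5):
    """Self-paced selection: keep only target samples whose pseudo-label
    confidence exceeds a threshold that relaxes as training proceeds."""
    threshold = max(floor, start - decay * epoch)
    return confidences >= threshold
```

In a training loop, one would first filter the target batch with `select_easy_samples`, then mix the survivors into the source batch, so that harder target samples only enter training at later epochs.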
Related papers
- Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation [22.474866164542302]
UDA approaches commonly assume that source and target domains share the same label space.
This paper considers the more challenging Source-Free Open-set Domain Adaptation (SF-OSDA) setting.
We propose a novel approach for SF-OSDA that exploits the granularity of target-private categories by segregating their samples into multiple unknown classes.
arXiv Detail & Related papers (2024-04-16T13:52:00Z) - Learning Class and Domain Augmentations for Single-Source Open-Domain Generalization [15.338029608652777]
Single-source open-domain generalization (SS-ODG) addresses the challenge of training with supervision on a single labeled source domain while testing on unlabeled novel target domains.
We propose a novel framework called SODG-Net that simultaneously synthesizes novel domains and generates pseudo-open samples.
Our approach enhances generalization by diversifying the styles of known class samples using a novel metric criterion.
arXiv Detail & Related papers (2023-11-05T08:53:07Z) - Cross-domain Contrastive Learning for Unsupervised Domain Adaptation [108.63914324182984]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a fully-labeled source domain to a different unlabeled target domain.
We build upon contrastive self-supervised learning to align features so as to reduce the domain discrepancy between training and testing sets.
arXiv Detail & Related papers (2021-06-10T06:32:30Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
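The multi-sample contrastive alignment idea described above can be illustrated with an InfoNCE-style loss over cross-domain similarity logits: each source feature is pulled toward the target features marked as similar and pushed away from the rest. This is a generic sketch of that family of losses, not ILA-DA's exact criterion; the function name and the temperature value are assumptions.

```python
import numpy as np

def multi_sample_contrastive_loss(feat_src, feat_tgt, pos_mask, temperature=0.1):
    """InfoNCE-style cross-domain contrastive loss.

    feat_src: (n, d) L2-normalized source features
    feat_tgt: (m, d) L2-normalized target features
    pos_mask: (n, m) boolean, True where a target sample is deemed similar
              to the corresponding source sample
    """
    sims = feat_src @ feat_tgt.T / temperature            # (n, m) similarity logits
    # log-softmax over each row of target candidates
    log_prob = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    # average the log-probability over each row's positive targets
    pos_counts = pos_mask.sum(axis=1)
    loss_per_row = -(log_prob * pos_mask).sum(axis=1) / np.maximum(pos_counts, 1)
    return loss_per_row.mean()
```

Minimizing this loss drives similar source/target pairs together in feature space, which is the alignment effect such methods rely on.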
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z) - Cluster, Split, Fuse, and Update: Meta-Learning for Open Compound Domain Adaptive Semantic Segmentation [102.42638795864178]
We propose a principled meta-learning based approach to OCDA for semantic segmentation.
We cluster target domain into multiple sub-target domains by image styles, extracted in an unsupervised manner.
A meta-learner is thereafter deployed to learn to fuse sub-target domain-specific predictions, conditioned upon the style code.
We learn to update the model online with the model-agnostic meta-learning (MAML) algorithm, further improving generalization.
arXiv Detail & Related papers (2020-12-15T13:21:54Z) - Against Adversarial Learning: Naturally Distinguish Known and Unknown in Open Set Domain Adaptation [17.819949636876018]
Open set domain adaptation refers to the scenario in which the target domain contains categories that do not exist in the source domain.
We propose an "against adversarial learning" method that can distinguish unknown target data and known data naturally.
Experimental results show that the proposed method significantly improves performance compared with several state-of-the-art methods.
arXiv Detail & Related papers (2020-11-04T10:30:43Z) - Adversarial Network with Multiple Classifiers for Open Set Domain Adaptation [9.251407403582501]
This paper focuses on the open set domain adaptation setting in which the target domain has both a private ('unknown classes') label space and a shared ('known classes') label space.
Prevalent distribution-matching domain adaptation methods are inadequate in such a setting.
We propose a novel adversarial domain adaptation model with multiple auxiliary classifiers.
arXiv Detail & Related papers (2020-07-01T11:23:07Z) - Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.