Unveiling Class-Labeling Structure for Universal Domain Adaptation
- URL: http://arxiv.org/abs/2010.04873v1
- Date: Sat, 10 Oct 2020 02:13:02 GMT
- Title: Unveiling Class-Labeling Structure for Universal Domain Adaptation
- Authors: Yueming Yin, Zhen Yang (Senior Member, IEEE), Xiaofu Wu, and Haifeng Hu
- Abstract summary: We employ a probabilistic approach for locating the common label set, where each source class may come from the common label set with a probability.
We propose a simple universal adaptation network (S-UAN) by incorporating the probabilistic structure for the common label set.
Experiments indicate that S-UAN works well in different UDA settings and outperforms the state-of-the-art methods by large margins.
- Score: 12.411096265140479
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As a more practical setting for unsupervised domain adaptation, Universal
Domain Adaptation (UDA) is recently introduced, where the target label set is
unknown. One of the big challenges in UDA is how to determine the common label
set shared by source and target domains, as there is simply no labeling
available in the target domain. In this paper, we employ a probabilistic
approach for locating the common label set, where each source class may come
from the common label set with a probability. In particular, we propose a novel
approach for evaluating the probability of each source class from the common
label set, where this probability is computed by the prediction margin
accumulated over the whole target domain. Then, we propose a simple universal
adaptation network (S-UAN) by incorporating the probabilistic structure for the
common label set. Finally, we analyse the generalization bound focusing on the
common label set and explore the properties on the target risk for UDA.
Extensive experiments indicate that S-UAN works well in different UDA settings
and outperforms the state-of-the-art methods by large margins.
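The abstract describes estimating, for each source class, the probability of belonging to the common label set by accumulating prediction margins over the target domain. A minimal sketch of that margin-accumulation idea is below; it assumes a classifier that outputs softmax probabilities on target samples, and the function name, normalization choice, and margin definition (top-1 minus runner-up) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def common_label_probability(probs):
    """Estimate per-source-class membership in the common label set by
    accumulating prediction margins over target-domain samples.

    probs: (n_target, n_classes) array of softmax outputs on target data.
    Returns an (n_classes,) vector scaled to [0, 1].
    """
    # Prediction margin per target sample: top-1 probability minus runner-up.
    sorted_p = np.sort(probs, axis=1)
    margins = sorted_p[:, -1] - sorted_p[:, -2]   # (n_target,)
    preds = probs.argmax(axis=1)                  # predicted class per sample

    # Accumulate margin mass for each source class over the whole target set.
    n_classes = probs.shape[1]
    acc = np.zeros(n_classes)
    for c in range(n_classes):
        acc[c] = margins[preds == c].sum()

    # Scale to [0, 1]; classes never confidently predicted stay near zero.
    total = acc.max()
    return acc / total if total > 0 else acc
```

Classes that attract many high-margin target predictions score near 1 and are treated as likely members of the common label set, while classes the target data never confidently activates score near 0.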
Related papers
- Harnessing Hierarchical Label Distribution Variations in Test Agnostic Long-tail Recognition [114.96385572118042]
We argue that the variation in test label distributions can be broken down hierarchically into global and local levels.
We propose a new MoE strategy, $\mathsf{DirMixE}$, which assigns experts to different Dirichlet meta-distributions of the label distribution.
We show that our proposed objective benefits from enhanced generalization by virtue of the variance-based regularization.
arXiv Detail & Related papers (2024-05-13T14:24:56Z)
- Uncertainty-guided Open-Set Source-Free Unsupervised Domain Adaptation with Target-private Class Segregation [22.474866164542302]
UDA approaches commonly assume that source and target domains share the same label space.
This paper considers the more challenging Source-Free Open-set Domain Adaptation (SF-OSDA) setting.
We propose a novel approach for SF-OSDA that exploits the granularity of target-private categories by segregating their samples into multiple unknown classes.
arXiv Detail & Related papers (2024-04-16T13:52:00Z)
- Universal Semi-Supervised Domain Adaptation by Mitigating Common-Class Bias [16.4249819402209]
We introduce Universal Semi-Supervised Domain Adaptation (UniSSDA)
UniSSDA is at the intersection of Universal Domain Adaptation (UniDA) and Semi-Supervised Domain Adaptation (SSDA)
We propose a new prior-guided pseudo-label refinement strategy to reduce the reinforcement of common-class bias due to pseudo-labeling.
arXiv Detail & Related papers (2024-03-17T14:43:47Z)
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- Generalized Universal Domain Adaptation with Generative Flow Networks [76.1350941965148]
Generalized Universal Domain Adaptation aims to achieve precise prediction of all target labels including unknown categories.
GUDA bridges the gap between label distribution shift-based and label space mismatch-based variants.
We propose an active domain adaptation algorithm named GFlowDA, which selects diverse samples with probabilities proportional to a reward function.
arXiv Detail & Related papers (2023-05-08T05:34:15Z)
- Self-Paced Learning for Open-Set Domain Adaptation [50.620824701934]
Traditional domain adaptation methods presume that the classes in the source and target domains are identical.
Open-set domain adaptation (OSDA) addresses this limitation by allowing previously unseen classes in the target domain.
We propose a novel framework based on self-paced learning to distinguish common and unknown class samples.
arXiv Detail & Related papers (2023-03-10T14:11:09Z)
- Unified Optimal Transport Framework for Universal Domain Adaptation [27.860165056943796]
Universal Domain Adaptation (UniDA) aims to transfer knowledge from a source domain to a target domain without any constraints on label sets.
Most existing methods require manually specified or hand-tuned threshold values to detect common samples.
We propose to use Optimal Transport (OT) to handle these issues under a unified framework, namely UniOT.
arXiv Detail & Related papers (2022-10-31T05:07:09Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
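The ILA-DA blurb above mentions driving domain alignment with a multi-sample contrastive loss over similar and dissimilar sample pairs. A minimal InfoNCE-style sketch of such a loss is below; the function name, temperature value, and the assumption of L2-normalized feature vectors are illustrative, not taken from the ILA-DA paper.

```python
import numpy as np

def multi_sample_contrastive_loss(anchor, positives, negatives, tau=0.1):
    """InfoNCE-style contrastive loss for one anchor feature against
    multiple positive (similar) and negative (dissimilar) features.

    anchor:    (d,) L2-normalized feature vector.
    positives: (n_pos, d) L2-normalized similar features.
    negatives: (n_neg, d) L2-normalized dissimilar features.
    """
    # Cosine similarities scaled by temperature (vectors are unit-norm).
    pos = np.exp(anchor @ positives.T / tau)   # (n_pos,)
    neg = np.exp(anchor @ negatives.T / tau)   # (n_neg,)

    # Average the per-positive InfoNCE terms: each positive competes
    # against the full pool of negatives in the denominator.
    return -np.mean(np.log(pos / (pos + neg.sum())))
```

The loss is near zero when the anchor aligns with its positives and is far from its negatives, and grows as that ordering is violated, pulling cross-domain similar samples together during adaptation.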
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- A Sample Selection Approach for Universal Domain Adaptation [94.80212602202518]
We study the problem of unsupervised domain adaptation in the universal scenario.
Only some of the classes are shared between the source and target domains.
We present a scoring scheme that is effective in identifying the samples of the shared classes.
arXiv Detail & Related papers (2020-01-14T22:28:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.