Imbalanced Open Set Domain Adaptation via Moving-threshold Estimation
and Gradual Alignment
- URL: http://arxiv.org/abs/2303.04393v2
- Date: Thu, 9 Mar 2023 01:29:06 GMT
- Title: Imbalanced Open Set Domain Adaptation via Moving-threshold Estimation
and Gradual Alignment
- Authors: Jinghan Ru and Jun Tian and Zhekai Du and Chengwei Xiao and Jingjing
Li and Heng Tao Shen
- Abstract summary: Open Set Domain Adaptation (OSDA) aims to transfer knowledge from a well-labeled source domain to an unlabeled target domain.
The performance of OSDA methods degrades drastically under intra-domain class imbalance and inter-domain label shift.
We propose Open-set Moving-threshold Estimation and Gradual Alignment (OMEGA) to alleviate the negative effects caused by label shift.
- Score: 58.56087979262192
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multimedia applications are often associated with cross-domain knowledge
transfer, where Unsupervised Domain Adaptation (UDA) can be used to reduce the
domain shifts. Open Set Domain Adaptation (OSDA) aims to transfer knowledge
from a well-labeled source domain to an unlabeled target domain under the
assumption that the target domain contains unknown classes. Existing OSDA
methods consistently emphasize covariate shift while ignoring the potential
label shift problem. The performance of OSDA methods degrades drastically under
intra-domain class imbalance and inter-domain label shift. However, little
attention has been paid to this issue in the community. In this paper, we
explore Imbalanced Open Set Domain Adaptation (IOSDA), where covariate shift,
label shift, and category mismatch exist simultaneously. To alleviate the
negative effects caused by label shift in OSDA, we propose Open-set
Moving-threshold Estimation and Gradual Alignment (OMEGA), a novel
architecture that improves existing OSDA methods on class-imbalanced data.
Specifically, a novel unknown-aware target clustering scheme is proposed to
form tight clusters in the target domain to reduce the negative effects of
label shift and intra-domain class imbalance. Furthermore, moving-threshold
estimation is designed to generate specific thresholds for each target sample
rather than using one for all. Extensive experiments on IOSDA, OSDA and OPDA
benchmarks demonstrate that our method significantly outperforms existing
state-of-the-art methods. Code and data are available at
https://github.com/mendicant04/OMEGA.
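The abstract's key idea of per-sample rejection thresholds (rather than one global cutoff) can be pictured with a minimal sketch. Note this is a hypothetical illustration: the entropy-based adjustment, the `base` and `alpha` parameters, and the function names below are assumptions for exposition, not OMEGA's actual estimator.

```python
import numpy as np

def per_sample_thresholds(probs, base=0.5, alpha=0.3):
    """Illustrative per-sample thresholding: each target sample's
    unknown-rejection threshold is adapted from a global base using the
    sample's own prediction entropy (hypothetical scheme, not the paper's)."""
    # normalized entropy in [0, 1]; high entropy -> uncertain sample
    ent = -np.sum(probs * np.log(probs + 1e-12), axis=1) / np.log(probs.shape[1])
    # uncertain samples get a higher bar to clear before being accepted
    return base + alpha * ent

def classify_with_thresholds(probs, thresholds):
    """Accept a sample as its argmax known class only if its max
    probability clears its own threshold; otherwise mark it unknown (-1)."""
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    labels[conf < thresholds] = -1
    return labels

# toy run: 3 target samples over 4 known classes
probs = np.array([
    [0.90, 0.05, 0.03, 0.02],  # confident, low entropy -> kept as class 0
    [0.40, 0.30, 0.20, 0.10],  # uncertain -> rejected as unknown (-1)
    [0.80, 0.10, 0.05, 0.05],  # confident enough -> kept as class 0
])
t = per_sample_thresholds(probs)
print(classify_with_thresholds(probs, t))  # labels [0, -1, 0]
```

The point of the sketch is only that the rejection decision varies per sample: a prediction with confidence 0.7 might be accepted for a peaked distribution but rejected for a flat one.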
Related papers
- Inter-Domain Mixup for Semi-Supervised Domain Adaptation [108.40945109477886]
Semi-supervised domain adaptation (SSDA) aims to bridge source and target domain distributions, with a small number of target labels available.
Existing SSDA work fails to make full use of label information from both source and target domains for feature alignment across domains.
This paper presents a novel SSDA approach, Inter-domain Mixup with Neighborhood Expansion (IDMNE), to tackle this issue.
arXiv Detail & Related papers (2024-01-21T10:20:46Z)
- centroIDA: Cross-Domain Class Discrepancy Minimization Based on Accumulative Class-Centroids for Imbalanced Domain Adaptation [17.97306640457707]
We propose a cross-domain class discrepancy minimization method based on accumulative class-centroids for IDA (centroIDA).
A series of experiments shows that our method outperforms other SOTA methods on the IDA problem, especially as the degree of label shift increases.
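The "accumulative class-centroids" in the centroIDA summary can be pictured as class prototypes refined across mini-batches. The exponential-moving-average update below is a hypothetical illustration of that accumulation idea; the momentum value and update rule are assumptions, not the paper's exact formulation.

```python
import numpy as np

def update_centroids(centroids, feats, labels, counts, momentum=0.9):
    """Illustrative accumulative centroid update: each class prototype is
    refined with an exponential moving average over successive mini-batches
    instead of being recomputed from a single batch."""
    for c in np.unique(labels):
        batch_mean = feats[labels == c].mean(axis=0)
        if counts[c] == 0:
            centroids[c] = batch_mean  # first time this class appears
        else:
            centroids[c] = momentum * centroids[c] + (1 - momentum) * batch_mean
        counts[c] += 1
    return centroids, counts
```

Once source and target centroids are accumulated this way, a class-discrepancy objective can compare them pairwise per class, which is more robust to imbalanced batches than matching whole-batch feature statistics.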
arXiv Detail & Related papers (2023-08-21T10:35:32Z)
- Upcycling Models under Domain and Category Shift [95.22147885947732]
We introduce an innovative global and local clustering learning technique (GLC).
We design a novel, adaptive one-vs-all global clustering algorithm to distinguish between different target classes.
Remarkably, in the most challenging open-partial-set DA scenario, GLC outperforms UMAD by 14.8% on the VisDA benchmark.
arXiv Detail & Related papers (2023-03-13T13:44:04Z)
- Domain Adaptation under Open Set Label Shift [39.424134505152544]
We introduce the problem of domain adaptation under Open Set Label Shift (OSLS)
OSLS subsumes domain adaptation under label shift and Positive-Unlabeled (PU) learning.
We propose practical methods for both tasks that leverage black-box predictors.
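One standard way to leverage a black-box predictor under label shift is confusion-matrix inversion, in the spirit of Black Box Shift Estimation (BBSE). The sketch below illustrates that general idea under the stated label-shift assumption; it is not the OSLS paper's exact estimator, and the function name is ours.

```python
import numpy as np

def estimate_target_label_marginal(src_true, src_pred, tgt_pred, k):
    """BBSE-style sketch: recover the target label marginal from a fixed
    ("black-box") predictor's outputs, assuming label shift, i.e. p(x|y)
    is unchanged across domains."""
    # column-stochastic confusion matrix on source: C[i, j] = p_s(pred=i | true=j)
    C = np.zeros((k, k))
    for true_y, pred_y in zip(src_true, src_pred):
        C[pred_y, true_y] += 1
    C /= C.sum(axis=0, keepdims=True)
    # predicted-label distribution on the unlabeled target set
    mu = np.bincount(tgt_pred, minlength=k) / len(tgt_pred)
    # under label shift, mu = C @ q, so solve for the target marginal q
    q = np.linalg.solve(C, mu)
    q = np.clip(q, 0.0, None)
    return q / q.sum()
```

The appeal of this family of methods is that only the predictor's outputs are needed, never its internals, which matches the summary's "black-box predictors" phrasing.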
arXiv Detail & Related papers (2022-07-26T17:09:48Z)
- Unsupervised Domain Adaptation with Progressive Adaptation of Subspaces [26.080102941802107]
Unsupervised Domain Adaptation (UDA) aims to classify unlabeled target domain by transferring knowledge from labeled source domain with domain shift.
We propose a novel UDA method named Progressive Adaptation of Subspaces approach (PAS) in which we utilize such an intuition to gradually obtain reliable pseudo labels.
Our thorough evaluation demonstrates that PAS is not only effective for common UDA, but also outperforms the state of the art in the more challenging Partial Domain Adaptation (PDA) setting.
arXiv Detail & Related papers (2020-09-01T15:40:50Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- Domain Adaptive Semantic Segmentation Using Weak Labels [115.16029641181669]
We propose a novel framework for domain adaptation in semantic segmentation with image-level weak labels in the target domain.
We develop a weak-label classification module to enforce the network to attend to certain categories.
In experiments, we show considerable improvements over the existing state of the art in UDA and present a new benchmark in the WDA setting.
arXiv Detail & Related papers (2020-07-30T01:33:57Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA$^3$US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that BA$^3$US surpasses the state of the art for partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)
- Partially-Shared Variational Auto-encoders for Unsupervised Domain Adaptation with Target Shift [11.873435088539459]
This paper proposes a novel approach for unsupervised domain adaptation (UDA) with target shift.
The proposed method, partially shared variational autoencoders (PS-VAEs), uses pair-wise feature alignment instead of feature distribution matching.
PS-VAEs inter-convert domain of each sample by a CycleGAN-based architecture while preserving its label-related content.
arXiv Detail & Related papers (2020-01-22T06:41:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.