Domain Adaptation under Open Set Label Shift
- URL: http://arxiv.org/abs/2207.13048v1
- Date: Tue, 26 Jul 2022 17:09:48 GMT
- Title: Domain Adaptation under Open Set Label Shift
- Authors: Saurabh Garg, Sivaraman Balakrishnan, Zachary C. Lipton
- Abstract summary: We introduce the problem of domain adaptation under Open Set Label Shift (OSLS).
OSLS subsumes domain adaptation under label shift and Positive-Unlabeled (PU) learning.
We propose practical methods for both tasks that leverage black-box predictors.
- Score: 39.424134505152544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce the problem of domain adaptation under Open Set Label Shift
(OSLS) where the label distribution can change arbitrarily and a new class may
arrive during deployment, but the class-conditional distributions p(x|y) are
domain-invariant. OSLS subsumes domain adaptation under label shift and
Positive-Unlabeled (PU) learning. The learner's goals here are two-fold: (a)
estimate the target label distribution, including the novel class; and (b)
learn a target classifier. First, we establish necessary and sufficient
conditions for identifying these quantities. Second, motivated by advances in
label shift and PU learning, we propose practical methods for both tasks that
leverage black-box predictors. Unlike typical Open Set Domain Adaptation (OSDA)
problems, which tend to be ill-posed and amenable only to heuristics, OSLS
offers a well-posed problem amenable to more principled machinery. Experiments
across numerous semi-synthetic benchmarks on vision, language, and medical
datasets demonstrate that our methods consistently outperform OSDA baselines,
achieving 10--25% improvements in target domain accuracy. Finally, we analyze
the proposed methods, establishing finite-sample convergence to the true label
marginal and convergence to the optimal classifier for linear models in a Gaussian
setup. Code is available at https://github.com/acmi-lab/Open-Set-Label-Shift.
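The abstract's mention of "practical methods ... that leverage black-box predictors" echoes Black Box Shift Estimation (BBSE), earlier work by the same authors. The following sketch illustrates that style of confusion-matrix moment matching over the shared classes only; it is not the paper's exact OSLS procedure (the novel class additionally requires PU-style estimation, which this sketch omits), and all function and variable names are illustrative.

```python
import numpy as np

def bbse_target_marginal(src_preds, src_labels, tgt_preds, n_classes):
    """BBSE-style estimate of the target label marginal p_t(y) from a
    black-box classifier's hard predictions (shared classes only)."""
    # Joint confusion on held-out source data: C[i, j] = p_s(f(x)=i, y=j)
    C = np.zeros((n_classes, n_classes))
    for p, y in zip(src_preds, src_labels):
        C[p, y] += 1.0
    C /= len(src_preds)

    # Prediction marginal on unlabeled target data: mu[i] = p_t(f(x)=i)
    mu = np.bincount(tgt_preds, minlength=n_classes) / len(tgt_preds)

    # Importance weights w[j] = p_t(y=j) / p_s(y=j) solve C @ w = mu
    w = np.linalg.solve(C, mu)
    w = np.clip(w, 0.0, None)  # weights must be non-negative

    # Reweight the source marginal and renormalize after clipping
    p_s = np.bincount(src_labels, minlength=n_classes) / len(src_labels)
    p_t = w * p_s
    return p_t / p_t.sum()
```

With a perfect predictor the confusion matrix is diagonal and the estimate reduces to the target prediction marginal, which is a useful sanity check.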
Related papers
- Imbalanced Open Set Domain Adaptation via Moving-threshold Estimation and Gradual Alignment [58.56087979262192]
Open Set Domain Adaptation (OSDA) aims to transfer knowledge from a well-labeled source domain to an unlabeled target domain.
The performance of OSDA methods degrades drastically under intra-domain class imbalance and inter-domain label shift.
We propose Open-set Moving-threshold Estimation and Gradual Alignment (OMEGA) to alleviate the negative effects raised by label shift.
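The summary does not spell out how OMEGA's moving thresholds are estimated; as a minimal illustration of the general idea of threshold-based unknown rejection in open-set adaptation, the snippet below routes low-confidence target samples to a reserved unknown class. The `threshold` value and the `UNKNOWN` id are assumed placeholders, not values from the paper.

```python
import numpy as np

UNKNOWN = -1  # label id reserved for the novel/unknown class

def reject_unknowns(probs, threshold=0.6):
    """Assign each target sample its argmax class, or UNKNOWN when the
    classifier's max softmax probability falls below `threshold`."""
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    preds[conf < threshold] = UNKNOWN
    return preds
```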
arXiv Detail & Related papers (2023-03-08T05:55:02Z)
- RLSbench: Domain Adaptation Under Relaxed Label Shift [39.845383643588356]
We introduce RLSbench, a large-scale benchmark for relaxed label shift.
We assess 13 popular domain adaptation methods, demonstrating more widespread failures under label proportion shifts than were previously known.
We develop an effective two-step meta-algorithm that is compatible with most domain adaptation methods.
arXiv Detail & Related papers (2023-02-06T18:57:14Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Domain Adaptation with Auxiliary Target Domain-Oriented Classifier [115.39091109079622]
Domain adaptation aims to transfer knowledge from a label-rich but heterogeneous domain to a label-scarce domain.
One of the most popular SSL techniques is pseudo-labeling, which assigns a pseudo label to each unlabeled sample.
We propose a new pseudo-labeling framework called Auxiliary Target Domain-Oriented Classifier (ATDOC).
ATDOC alleviates the bias by introducing an auxiliary classifier for target data only, to improve the quality of pseudo labels.
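The summary does not detail ATDOC's auxiliary classifier; one of its variants is centroid-based, and the sketch below shows a generic nearest-centroid pseudo-labeling step in that spirit. This is an illustrative assumption, not the paper's exact method, and all names are hypothetical.

```python
import numpy as np

def nearest_centroid_pseudo_labels(tgt_feats, centroids):
    """Pseudo-label each target feature by its nearest class centroid
    under cosine similarity, avoiding direct reliance on the (possibly
    source-biased) main classifier."""
    # L2-normalize features and centroids so dot products are cosines
    f = tgt_feats / np.linalg.norm(tgt_feats, axis=1, keepdims=True)
    c = centroids / np.linalg.norm(centroids, axis=1, keepdims=True)
    sims = f @ c.T  # (n_samples, n_classes) cosine similarities
    return sims.argmax(axis=1)
```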
arXiv Detail & Related papers (2020-07-08T15:01:35Z)
- Domain Adaptation with Conditional Distribution Matching and Generalized Label Shift [20.533804144992207]
Adversarial learning has demonstrated good performance in the unsupervised domain adaptation setting.
We propose a new assumption, generalized label shift ($GLS$), to improve robustness against mismatched label distributions.
Our algorithms outperform the base versions, with vast improvements for large label distribution mismatches.
arXiv Detail & Related papers (2020-03-10T00:35:23Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case of class labels in the target domain being only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method BA$3$US with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that our BA$3$US surpasses state-of-the-art methods for partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)
- Partially-Shared Variational Auto-encoders for Unsupervised Domain Adaptation with Target Shift [11.873435088539459]
This paper proposes a novel approach for unsupervised domain adaptation (UDA) with target shift.
The proposed method, partially shared variational autoencoders (PS-VAEs), uses pair-wise feature alignment instead of feature distribution matching.
PS-VAEs inter-convert domain of each sample by a CycleGAN-based architecture while preserving its label-related content.
arXiv Detail & Related papers (2020-01-22T06:41:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.