Mining Label Distribution Drift in Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2006.09565v3
- Date: Mon, 9 Oct 2023 07:10:20 GMT
- Title: Mining Label Distribution Drift in Unsupervised Domain Adaptation
- Authors: Peizhao Li, Zhengming Ding, Hongfu Liu
- Abstract summary: We propose Label distribution Matching Domain Adversarial Network (LMDAN) to handle data distribution shift and label distribution drift jointly.
Experiments show that LMDAN delivers superior performance under considerable label distribution drift.
- Score: 78.2452946757045
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Unsupervised domain adaptation aims to transfer task-related knowledge
from a labeled source domain to an unlabeled target domain. Although tremendous
efforts have been made to minimize domain divergence, most existing methods
only partially address it by aligning feature representations across domains.
Beyond the discrepancy in data distributions, the gap between the source and
target label distributions, known as label distribution drift, is another
crucial factor in domain divergence, yet it remains insufficiently explored.
From this perspective, we first reveal how label distribution drift brings
negative influence. Next, we propose Label distribution Matching Domain
Adversarial Network (LMDAN) to handle data distribution shift and label
distribution drift jointly. In LMDAN, label distribution drift is addressed by
a source sample weighting strategy, which selects samples that contribute to
positive adaptation and avoids the adverse effects of mismatched samples.
Experiments show that LMDAN delivers superior performance under considerable
label distribution drift.
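The abstract does not spell out the source sample weighting strategy; a minimal numpy sketch of one common way to realize it is class-ratio importance weighting, where each source sample is weighted so that the weighted source label distribution matches an estimate of the target one. The function name, the use of pseudo-labels to estimate target class frequencies, and the ratio form are assumptions for illustration, not LMDAN's exact formulation:

```python
import numpy as np

def class_reweighting(src_labels, tgt_pseudo_labels, num_classes, eps=1e-8):
    """Per-sample source weights w_i = p_tgt(y_i) / p_src(y_i), so that the
    weighted source label distribution matches the estimated target one.
    Target class probabilities come from pseudo-labels, which in practice
    are noisy and would be refined as training proceeds."""
    p_src = np.bincount(src_labels, minlength=num_classes) / len(src_labels)
    p_tgt = np.bincount(tgt_pseudo_labels, minlength=num_classes) / len(tgt_pseudo_labels)
    ratio = p_tgt / (p_src + eps)  # classes over-represented in the source get down-weighted
    return ratio[src_labels]
```

In an adversarial setup, such weights would multiply the per-sample classification and domain-discriminator losses, suppressing source samples from classes that are rare or absent in the target domain.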
Related papers
- Prototypical Partial Optimal Transport for Universal Domain Adaptation [48.07871397146472]
Universal domain adaptation (UniDA) aims to transfer knowledge from a labeled source domain to an unlabeled target domain.
The existence of domain and category shift makes the task challenging and requires us to distinguish "known" samples and "unknown" samples.
A novel approach, dubbed mini-batch Prototypical Partial Optimal Transport (m-PPOT), is proposed to conduct partial distribution alignment for UniDA.
arXiv Detail & Related papers (2024-08-02T08:08:56Z)
- GeT: Generative Target Structure Debiasing for Domain Adaptation [67.17025068995835]
Domain adaptation (DA) aims to transfer knowledge from a fully labeled source to a scarcely labeled or totally unlabeled target under domain shift.
Recently, semi-supervised learning-based (SSL) techniques that leverage pseudo labeling have been increasingly used in DA.
In this paper, we propose GeT, which learns an unbiased target embedding distribution with high-quality pseudo-labels.
arXiv Detail & Related papers (2023-08-20T08:52:43Z)
- Semi-Supervised Domain Adaptation by Similarity based Pseudo-label Injection [0.735996217853436]
One of the primary challenges in Semi-supervised Domain Adaptation (SSDA) is the skewed ratio between the number of labeled source and target samples.
Recent works in SSDA show that aligning only the labeled target samples with the source samples potentially leads to incomplete domain alignment of the target domain to the source domain.
In our approach, to align the two domains, we leverage contrastive losses to learn a semantically meaningful and a domain agnostic feature space.
arXiv Detail & Related papers (2022-09-05T10:28:08Z)
- Learning Unbiased Transferability for Domain Adaptation by Uncertainty Modeling [107.24387363079629]
Domain adaptation aims to transfer knowledge from a labeled source domain to an unlabeled or a less labeled but related target domain.
Due to the imbalance in annotated data between the source and target domains, existing methods align only the target distribution to the source domain.
We propose a non-intrusive Unbiased Transferability Estimation Plug-in (UTEP) by modeling the uncertainty of a discriminator in adversarial-based DA methods to optimize unbiased transfer.
arXiv Detail & Related papers (2022-06-02T21:58:54Z)
- CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation [1.2691047660244335]
Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain invariant predictive models.
We propose Contrastive Learning framework for semi-supervised Domain Adaptation (CLDA) that attempts to bridge the intra-domain gap.
CLDA achieves state-of-the-art results on all the above datasets.
arXiv Detail & Related papers (2021-06-30T20:23:19Z)
- Unsupervised domain adaptation via double classifiers based on high confidence pseudo label [8.132250810529873]
Unsupervised domain adaptation (UDA) aims to solve the problem of knowledge transfer from labeled source domain to unlabeled target domain.
Many domain adaptation (DA) methods use class centroids to align the local distributions of different domains, that is, to align corresponding classes.
This work rethinks what is the alignment between different domains, and studies how to achieve the real alignment between different domains.
arXiv Detail & Related papers (2021-05-11T00:51:31Z)
- Discriminative Cross-Domain Feature Learning for Partial Domain Adaptation [70.45936509510528]
Partial domain adaptation aims to adapt knowledge from a larger and more diverse source domain to a smaller target domain with fewer classes.
Recent practice on domain adaptation manages to extract effective features by incorporating the pseudo labels for the target domain.
It is essential to align target data with only a small set of source data.
arXiv Detail & Related papers (2020-08-26T03:18:53Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- Domain Adaptation and Image Classification via Deep Conditional Adaptation Network [26.09932710494144]
Unsupervised domain adaptation aims to generalize the supervised model trained on a source domain to an unlabeled target domain.
Marginal distribution alignment of feature spaces is widely used to reduce the domain discrepancy between the source and target domains.
We propose a novel unsupervised domain adaptation method, Deep Conditional Adaptation Network (DCAN), based on conditional distribution alignment of feature spaces.
arXiv Detail & Related papers (2020-06-14T02:56:01Z)
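Several of the entries above, DCAN in particular, align class-conditional rather than marginal feature distributions. A minimal numpy sketch of class-conditional alignment via a linear-kernel MMD, with pseudo-labels standing in for the unknown target labels; the function names, the linear-MMD choice, and the uniform per-class averaging are illustrative assumptions, not any of these papers' exact implementations:

```python
import numpy as np

def linear_mmd(x, y):
    """Squared distance between empirical feature means (a linear-kernel MMD estimate)."""
    return float(np.sum((x.mean(axis=0) - y.mean(axis=0)) ** 2))

def conditional_mmd(src_feats, src_labels, tgt_feats, tgt_pseudo, num_classes):
    """Average per-class MMD between source features (grouped by true labels)
    and target features (grouped by pseudo-labels); classes missing from
    either domain are skipped."""
    terms = []
    for c in range(num_classes):
        xs = src_feats[src_labels == c]
        xt = tgt_feats[tgt_pseudo == c]
        if len(xs) and len(xt):
            terms.append(linear_mmd(xs, xt))
    return float(np.mean(terms)) if terms else 0.0
```

Minimizing this quantity pulls each source class toward its (pseudo-labeled) target counterpart, which is what distinguishes conditional alignment from the marginal alignment criticized in the DCAN summary.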
This list is automatically generated from the titles and abstracts of the papers in this site.