A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation
- URL: http://arxiv.org/abs/2003.02541v2
- Date: Thu, 16 Jul 2020 11:04:44 GMT
- Title: A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation
- Authors: Jian Liang, Yunbo Wang, Dapeng Hu, Ran He, and Jiashi Feng
- Abstract summary: This work addresses the unsupervised domain adaptation problem, focusing on the case where the class labels in the target domain are only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method BA$^3$US with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that our BA$^3$US surpasses state-of-the-art methods on partial domain adaptation tasks.
- Score: 142.31610972922067
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work addresses the unsupervised domain adaptation problem, especially in
the case of class labels in the target domain being only a subset of those in
the source domain. Such a partial transfer setting is realistic but challenging,
and existing methods often suffer from two key problems: negative transfer and
uncertainty propagation. In this paper, we build on domain adversarial learning
and propose a novel domain adaptation method BA$^3$US with two new techniques
termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty
Suppression (AUS), respectively. On one hand, negative transfer results in
misclassification of target samples to the classes only present in the source
domain. To address this issue, BAA pursues the balance between label
distributions across domains in a fairly simple manner. Specifically, it
randomly leverages a few source samples to augment the smaller target domain
during domain alignment so that classes in different domains are symmetric. On
the other hand, a source sample is deemed uncertain if an incorrect class
receives a relatively high prediction score; such uncertainty easily propagates
to nearby unlabeled target data during alignment and severely degrades
adaptation performance. Thus we present
AUS that emphasizes uncertain samples and exploits an adaptive weighted
complement entropy objective to encourage incorrect classes to have uniform and
low prediction scores. Experimental results on multiple benchmarks demonstrate
that our BA$^3$US surpasses state-of-the-art methods on partial domain adaptation tasks.
Code is available at \url{https://github.com/tim-learn/BA3US}.
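The two ideas in the abstract can be illustrated with a short numpy sketch. This is a simplified rendering, not the paper's released implementation: the function names, the augmentation ratio `rho`, and the per-sample `weights` argument (standing in for the adaptive weighting in AUS) are illustrative assumptions. The first function mimics BAA by appending a few random source samples to the target mini-batch before alignment; the second is the (weighted) complement entropy objective, whose minimization pushes the incorrect classes toward uniform, low prediction scores.

```python
import numpy as np

def balanced_augment(target_batch, source_pool, rho, rng):
    """BAA sketch: append round(rho * |target_batch|) random source
    samples to the target mini-batch before adversarial alignment,
    so the two sides of the domain discriminator see more symmetric
    label distributions. `rho` is an illustrative mixing ratio."""
    k = int(round(rho * len(target_batch)))
    idx = rng.choice(len(source_pool), size=k, replace=False)
    return np.concatenate([target_batch, source_pool[idx]], axis=0)

def complement_entropy(probs, labels, weights=None):
    """AUS sketch: negative entropy of the renormalized complement
    (incorrect-class) distribution, averaged with optional per-sample
    weights. Minimizing it flattens and suppresses incorrect-class
    scores. probs: (N, C) softmax outputs; labels: (N,) true indices."""
    n, _ = probs.shape
    eps = 1e-12
    p_true = probs[np.arange(n), labels]            # p_y per sample
    comp = probs.copy()
    comp[np.arange(n), labels] = 0.0                # drop the true class
    comp = comp / np.maximum(1.0 - p_true, eps)[:, None]  # p_j / (1 - p_y)
    per_sample = np.sum(comp * np.log(comp + eps), axis=1)
    if weights is None:
        weights = np.ones(n)
    return float(np.sum(weights * per_sample) / np.sum(weights))
```

As a sanity check, a sample whose incorrect classes already share probability evenly yields a lower (more negative) complement entropy than one whose incorrect mass is concentrated on a single wrong class, so the loss indeed rewards uniform, low scores on incorrect classes.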
Related papers
- Bi-discriminator Domain Adversarial Neural Networks with Class-Level Gradient Alignment [87.8301166955305]
We propose a novel bi-discriminator domain adversarial neural network with class-level gradient alignment.
BACG resorts to gradient signals and second-order probability estimation for better alignment of domain distributions.
In addition, inspired by contrastive learning, we develop a memory bank-based variant, i.e. Fast-BACG, which can greatly shorten the training process.
arXiv Detail & Related papers (2023-10-21T09:53:17Z)
- centroIDA: Cross-Domain Class Discrepancy Minimization Based on Accumulative Class-Centroids for Imbalanced Domain Adaptation [17.97306640457707]
We propose a cross-domain class discrepancy minimization method based on accumulative class-centroids for IDA (centroIDA).
A series of experiments shows that our method outperforms other SOTA methods on the IDA problem, especially as the degree of label shift increases.
arXiv Detail & Related papers (2023-08-21T10:35:32Z)
- Heterogeneous Domain Adaptation with Positive and Unlabeled Data [7.48285579561564]
This paper addresses a new challenging setting called positive and unlabeled heterogeneous unsupervised domain adaptation (PU-HUDA).
A naive combination of existing HUDA and PU learning methods is ineffective in PU-HUDA due to the gap in label distribution between the source and target domains.
We propose a novel method, predictive adversarial domain adaptation (PADA), which can predict likely positive examples from the unlabeled target data.
arXiv Detail & Related papers (2023-04-17T02:50:18Z)
- Imbalanced Open Set Domain Adaptation via Moving-threshold Estimation and Gradual Alignment [58.56087979262192]
Open Set Domain Adaptation (OSDA) aims to transfer knowledge from a well-labeled source domain to an unlabeled target domain.
The performance of OSDA methods degrades drastically under intra-domain class imbalance and inter-domain label shift.
We propose Open-set Moving-threshold Estimation and Gradual Alignment (OMEGA) to alleviate the negative effects caused by label shift.
arXiv Detail & Related papers (2023-03-08T05:55:02Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Certainty Volume Prediction for Unsupervised Domain Adaptation [35.984559137218504]
Unsupervised domain adaptation (UDA) deals with the problem of classifying unlabeled target domain data.
We propose a novel uncertainty-aware domain adaptation setup that models uncertainty as a multivariate Gaussian distribution in feature space.
We evaluate our proposed pipeline on challenging UDA datasets and achieve state-of-the-art results.
arXiv Detail & Related papers (2021-11-03T11:22:55Z)
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency under transformation of target data.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- Hard Class Rectification for Domain Adaptation [36.58361356407803]
Domain adaptation (DA) aims to transfer knowledge from a label-rich domain (source domain) to a label-scarce domain (target domain).
We propose a novel framework, called Hard Class Rectification Pseudo-labeling (HCRPL), to alleviate the hard class problem.
The proposed method is evaluated in both unsupervised domain adaptation (UDA) and semi-supervised domain adaptation (SSDA).
arXiv Detail & Related papers (2020-08-08T06:21:58Z)
- Partially-Shared Variational Auto-encoders for Unsupervised Domain Adaptation with Target Shift [11.873435088539459]
This paper proposes a novel approach for unsupervised domain adaptation (UDA) with target shift.
The proposed method, partially shared variational autoencoders (PS-VAEs), uses pair-wise feature alignment instead of feature distribution matching.
PS-VAEs inter-convert domain of each sample by a CycleGAN-based architecture while preserving its label-related content.
arXiv Detail & Related papers (2020-01-22T06:41:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.