Partially-Shared Variational Auto-encoders for Unsupervised Domain
Adaptation with Target Shift
- URL: http://arxiv.org/abs/2001.07895v3
- Date: Sat, 25 Jan 2020 05:41:25 GMT
- Title: Partially-Shared Variational Auto-encoders for Unsupervised Domain
Adaptation with Target Shift
- Authors: Ryuhei Takahashi, Atsushi Hashimoto, Motoharu Sonogashira, Masaaki
Iiyama
- Abstract summary: This paper proposes a novel approach for unsupervised domain adaptation (UDA) with target shift.
The proposed method, partially shared variational autoencoders (PS-VAEs), uses pair-wise feature alignment instead of feature distribution matching.
PS-VAEs inter-convert the domain of each sample via a CycleGAN-based architecture while preserving its label-related content.
- Score: 11.873435088539459
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes a novel approach for unsupervised domain adaptation (UDA)
with target shift. Target shift is a mismatch in label distributions between
the source and target domains; typically it appears as class imbalance in the
target domain. In practice, this is an important problem in UDA: because the
labels of the target-domain dataset are unknown, we cannot tell whether its
label distribution matches that of the source-domain dataset. Many traditional
approaches achieve UDA via distribution matching, by minimizing the maximum
mean discrepancy (MMD) or by adversarial training; however, these approaches
implicitly assume that the two distributions coincide and therefore fail under
target shift. Some recent UDA approaches focus on class boundaries, and some
of them are robust to target shift, but they are applicable only to
classification, not to regression.
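For reference, the snippet below is a minimal, illustrative sketch (not taken
from the paper) of the kernel-based distribution matching such methods perform:
a biased Gaussian-kernel squared-MMD estimate between source and target feature
batches. Minimizing it forces the marginal feature distributions to coincide,
which is precisely the assumption that breaks when the target label
distribution is shifted.

import torch

def gaussian_kernel(x, y, sigma=1.0):
    # Pairwise squared Euclidean distances between rows of x and y.
    d2 = torch.cdist(x, y, p=2) ** 2
    return torch.exp(-d2 / (2 * sigma ** 2))

def mmd2(source_feats, target_feats, sigma=1.0):
    # Biased estimate of the squared MMD with a Gaussian kernel.
    k_ss = gaussian_kernel(source_feats, source_feats, sigma).mean()
    k_tt = gaussian_kernel(target_feats, target_feats, sigma).mean()
    k_st = gaussian_kernel(source_feats, target_feats, sigma).mean()
    return k_ss + k_tt - 2 * k_st

# Example: features from a shared encoder, shape (batch, feat_dim).
src = torch.randn(64, 128)
tgt = torch.randn(64, 128) + 0.5   # pretend the target features are shifted
loss = mmd2(src, tgt)              # added to the task loss by MMD-based UDA methods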
To overcome target shift in UDA, the proposed method, partially-shared
variational auto-encoders (PS-VAEs), uses pair-wise feature alignment instead
of feature-distribution matching. PS-VAEs inter-convert the domain of each
sample via a CycleGAN-based architecture while preserving its label-related
content. To evaluate the performance of PS-VAEs, we carried out two
experiments: UDA with class-imbalanced digit datasets (classification), and
UDA from synthesized data to real observations in human pose estimation
(regression). The proposed method was robust against class imbalance in the
classification task and outperformed the other methods by a large margin in
the regression task.
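To make the "partially shared" idea concrete, here is a rough, hypothetical
sketch assuming a simple fully-connected setup: each domain has its own private
encoder stem, while the deeper encoder layers, the latent space, and the label
predictor are shared. The layer sizes and module names are placeholders, not
the authors' architecture; the decoders and the CycleGAN-style domain
conversion and cycle-consistency losses used for pair-wise alignment are
omitted.

import torch
import torch.nn as nn

class PartiallySharedEncoder(nn.Module):
    """Toy encoder with domain-private stems and a shared VAE trunk."""

    def __init__(self, in_dim=784, hidden=256, z_dim=64, n_classes=10):
        super().__init__()
        # Domain-private (non-shared) input stems.
        self.stem_src = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.stem_tgt = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # Shared trunk producing the variational posterior parameters.
        self.shared = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU())
        self.mu = nn.Linear(hidden, z_dim)
        self.logvar = nn.Linear(hidden, z_dim)
        # Shared head, so label-related content lives in the common latent space.
        self.classifier = nn.Linear(z_dim, n_classes)

    def forward(self, x, domain):
        h = self.stem_src(x) if domain == "source" else self.stem_tgt(x)
        h = self.shared(h)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization trick
        return z, mu, logvar, self.classifier(z)

# Usage: the labeled source batch trains the classifier; both domains contribute
# the usual VAE KL term; domain-conversion / cycle losses would be added on top.
enc = PartiallySharedEncoder()
z, mu, logvar, logits = enc(torch.randn(32, 784), domain="source")
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())

In such an arrangement, alignment can act on individual converted samples
rather than on matched feature distributions, which is the property the paper
exploits to remain insensitive to a shifted target label distribution.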
Related papers
- centroIDA: Cross-Domain Class Discrepancy Minimization Based on Accumulative Class-Centroids for Imbalanced Domain Adaptation [17.97306640457707]
We propose a cross-domain class discrepancy minimization method based on accumulative class-centroids for IDA (centroIDA).
A series of experiments shows that our method outperforms other SOTA methods on the IDA problem, especially as the degree of label shift increases.
arXiv Detail & Related papers (2023-08-21T10:35:32Z)
- Imbalanced Open Set Domain Adaptation via Moving-threshold Estimation and Gradual Alignment [58.56087979262192]
Open Set Domain Adaptation (OSDA) aims to transfer knowledge from a well-labeled source domain to an unlabeled target domain.
The performance of OSDA methods degrades drastically under intra-domain class imbalance and inter-domain label shift.
We propose Open-set Moving-threshold Estimation and Gradual Alignment (OMEGA) to alleviate the negative effects caused by label shift.
arXiv Detail & Related papers (2023-03-08T05:55:02Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distribution is different.
Recently, Source-Free Domain Adaptation (SFDA), which tackles the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE that addresses the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- CLDA: Contrastive Learning for Semi-Supervised Domain Adaptation [1.2691047660244335]
Unsupervised Domain Adaptation (UDA) aims to align the labeled source distribution with the unlabeled target distribution to obtain domain-invariant predictive models.
We propose CLDA, a Contrastive Learning framework for semi-supervised Domain Adaptation that attempts to bridge the intra-domain gap.
CLDA achieves state-of-the-art results on all the above datasets.
arXiv Detail & Related papers (2021-06-30T20:23:19Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance-affinity-based criterion for source-to-target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency of target data under transformation.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)
- Learning Target Domain Specific Classifier for Partial Domain Adaptation [85.71584004185031]
Unsupervised domain adaptation (UDA) aims at reducing the distribution discrepancy when transferring knowledge from a labeled source domain to an unlabeled target domain.
This paper focuses on a more realistic UDA scenario, where the target label space is subsumed by the source label space.
arXiv Detail & Related papers (2020-08-25T02:28:24Z)
- A Balanced and Uncertainty-aware Approach for Partial Domain Adaptation [142.31610972922067]
This work addresses the unsupervised domain adaptation problem, especially in the case where the class labels in the target domain are only a subset of those in the source domain.
We build on domain adversarial learning and propose a novel domain adaptation method, BA$^3$US, with two new techniques termed Balanced Adversarial Alignment (BAA) and Adaptive Uncertainty Suppression (AUS).
Experimental results on multiple benchmarks demonstrate that BA$^3$US surpasses the state of the art for partial domain adaptation tasks.
arXiv Detail & Related papers (2020-03-05T11:37:06Z)