Relaxed Conditional Image Transfer for Semi-supervised Domain Adaptation
- URL: http://arxiv.org/abs/2101.01400v1
- Date: Tue, 5 Jan 2021 08:15:48 GMT
- Title: Relaxed Conditional Image Transfer for Semi-supervised Domain Adaptation
- Authors: Qijun Luo, Zhili Liu, Lanqing Hong, Chongxuan Li, Kuo Yang, Liyuan
Wang, Fengwei Zhou, Guilin Li, Zhenguo Li, Jun Zhu
- Abstract summary: Semi-supervised domain adaptation (SSDA) aims to learn models in a partially labeled target domain with the assistance of the fully labeled source domain.
We propose a conditional GAN framework to transfer images without changing the semantics in SSDA.
- Score: 46.928052360814945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Semi-supervised domain adaptation (SSDA), which aims to learn models in a
partially labeled target domain with the assistance of the fully labeled source
domain, has attracted increasing attention in recent years. To explicitly leverage
the labeled data in both domains, we naturally introduce a conditional GAN
framework to transfer images without changing the semantics in SSDA. However,
we identify a label-domination problem in such an approach. In fact, the
generator tends to overlook the input source image and only memorizes
prototypes of each class, which results in unsatisfactory adaptation
performance. To this end, we propose a simple yet effective Relaxed conditional
GAN (Relaxed cGAN) framework. Specifically, we feed the image without its label
to our generator. In this way, the generator has to infer the semantic
information of input data. We formally prove that its equilibrium is desirable
and empirically validate its practical convergence and effectiveness in image
transfer. Additionally, we propose several techniques to make use of unlabeled
data in the target domain, enhancing the model in SSDA settings. We validate
our method on the widely adopted datasets: Digits, DomainNet, and Office-Home. We
achieve state-of-the-art performance on DomainNet, Office-Home and most digit
benchmarks in low-resource and high-resource settings.
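Read operationally, the relaxation amounts to removing the class label from the generator's input while keeping the discriminator class-conditional, so the generator has to infer the semantics from the source image itself rather than recalling a per-class prototype. The PyTorch sketch below illustrates only that interface; the layer choices, projection-style conditioning, and constants are assumptions for illustration, not the authors' implementation (in a standard cGAN the generator would receive the label as well).

```python
# Minimal sketch of the relaxed-conditioning idea; architectures and
# constants are illustrative assumptions, not the paper's networks.
import torch
import torch.nn as nn

NUM_CLASSES, IMG_CHANNELS, FEAT_DIM = 10, 3, 64  # assumed digits-style setup

class RelaxedGenerator(nn.Module):
    """Sees only the source image (no label), so it must infer the input's
    semantics instead of memorizing one prototype per class."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(IMG_CHANNELS, FEAT_DIM, 3, padding=1), nn.ReLU(),
            nn.Conv2d(FEAT_DIM, FEAT_DIM, 3, padding=1), nn.ReLU(),
            nn.Conv2d(FEAT_DIM, IMG_CHANNELS, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x_src):  # note: no label argument
        return self.net(x_src)

class ConditionalDiscriminator(nn.Module):
    """Remains label-conditioned (projection-style), so the adversarial game
    still matches the class-conditional target distribution."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(IMG_CHANNELS, FEAT_DIM, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(FEAT_DIM, FEAT_DIM, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.linear = nn.Linear(FEAT_DIM, 1)
        self.embed = nn.Embedding(NUM_CLASSES, FEAT_DIM)

    def forward(self, x, y):
        h = self.backbone(x)
        return self.linear(h) + (self.embed(y) * h).sum(dim=1, keepdim=True)
```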
Related papers
- AdaptDiff: Cross-Modality Domain Adaptation via Weak Conditional Semantic Diffusion for Retinal Vessel Segmentation [10.958821619282748]
We present an unsupervised domain adaptation (UDA) method named AdaptDiff.
It enables a retinal vessel segmentation network trained on fundus photography (FP) to produce satisfactory results on unseen modalities.
Our results demonstrate a significant improvement in segmentation performance across all unseen datasets.
arXiv Detail & Related papers (2024-10-06T23:04:29Z)
- Enhancing Visual Domain Adaptation with Source Preparation [5.287588907230967]
Domain Adaptation techniques fail to consider the characteristics of the source domain itself.
We propose Source Preparation (SP), a method to mitigate source domain biases.
We show that SP enhances UDA across a range of visual domains, with improvements up to 40.64% in mIoU over baseline.
arXiv Detail & Related papers (2023-06-16T18:56:44Z)
- Continual Unsupervised Domain Adaptation for Semantic Segmentation using a Class-Specific Transfer [9.46677024179954]
Segmentation models do not generalize to unseen domains.
We propose a light-weight style transfer framework that incorporates two class-conditional AdaIN layers.
We extensively validate our approach on a synthetic sequence and further propose a challenging sequence consisting of real domains.
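As a rough, hypothetical illustration of what a class-conditional AdaIN layer could look like, the sketch below instance-normalizes content features and re-scales/shifts them with per-class affine parameters; the learned-embedding parameterization and the image-level label are assumptions and may differ from the paper's formulation.

```python
# Hypothetical class-conditional AdaIN layer (PyTorch); the per-class
# learned affine parameters are an assumption, not the paper's exact design.
import torch
import torch.nn as nn

class ClassConditionalAdaIN(nn.Module):
    def __init__(self, num_classes, num_channels):
        super().__init__()
        self.norm = nn.InstanceNorm2d(num_channels, affine=False)
        self.gamma = nn.Embedding(num_classes, num_channels)  # per-class scale
        self.beta = nn.Embedding(num_classes, num_channels)   # per-class shift
        nn.init.ones_(self.gamma.weight)
        nn.init.zeros_(self.beta.weight)

    def forward(self, x, y):  # x: (N, C, H, W) features, y: (N,) class labels
        h = self.norm(x)
        g = self.gamma(y)[:, :, None, None]
        b = self.beta(y)[:, :, None, None]
        return g * h + b
```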
arXiv Detail & Related papers (2022-08-12T21:30:49Z)
- CA-UDA: Class-Aware Unsupervised Domain Adaptation with Optimal Assignment and Pseudo-Label Refinement [84.10513481953583]
Unsupervised domain adaptation (UDA) focuses on the selection of good pseudo-labels as surrogates for the missing labels in the target data.
Source-domain bias that deteriorates the pseudo-labels can still exist, since a network shared between the source and target domains is typically used for pseudo-label selection.
We propose CA-UDA to improve the quality of the pseudo-labels and UDA results with optimal assignment, a pseudo-label refinement strategy and class-aware domain alignment.
arXiv Detail & Related papers (2022-05-26T18:45:04Z)
- CDTrans: Cross-domain Transformer for Unsupervised Domain Adaptation [44.06904757181245]
Unsupervised domain adaptation (UDA) aims to transfer knowledge learned from a labeled source domain to a different unlabeled target domain.
One fundamental problem in category-level UDA is the production of pseudo labels for samples in the target domain.
We design a two-way center-aware labeling algorithm to produce pseudo labels for target samples.
Along with the pseudo labels, a weight-sharing triple-branch transformer framework is proposed to apply self-attention and cross-attention for source/target feature learning and source-target domain alignment.
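A highly simplified sketch of the weight-sharing idea follows: a single attention module serves all three branches (source self-attention, target self-attention, and source-target cross-attention). The token shapes and single-layer design are assumptions; the actual CDTrans architecture is considerably richer.

```python
# Sketch of a weight-sharing triple-branch attention step; a simplification
# for illustration, not the CDTrans implementation.
import torch
import torch.nn as nn

class SharedAttentionBranches(nn.Module):
    def __init__(self, dim=256, heads=4):
        super().__init__()
        # one attention module reused by all three branches (weight sharing)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, tok_src, tok_tgt):  # token sequences, shape (N, L, dim)
        src_out, _ = self.attn(tok_src, tok_src, tok_src)    # source self-attention
        tgt_out, _ = self.attn(tok_tgt, tok_tgt, tok_tgt)    # target self-attention
        cross_out, _ = self.attn(tok_tgt, tok_src, tok_src)  # target queries source
        return src_out, tgt_out, cross_out
```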
arXiv Detail & Related papers (2021-09-13T17:59:07Z)
- Semi-Supervised Domain Adaptation with Prototypical Alignment and Consistency Learning [86.6929930921905]
This paper studies how much having a few labeled target samples can further help address domain shift.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
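One plausible reading of the prototype computation is sketched below: average the features of the labeled target samples (the landmarks) per class, then align features to the prototypes with a cosine-similarity cross-entropy. The loss form, normalization, and temperature are assumptions, not the paper's exact objective.

```python
# Sketch of per-class prototype computation and alignment; the loss form
# and temperature are assumptions, not the paper's exact objective.
import torch
import torch.nn.functional as F

def class_prototypes(landmark_feats, landmark_labels, num_classes):
    """Average the labeled-target (landmark) features per class; classes
    without landmarks keep a zero prototype."""
    protos = landmark_feats.new_zeros(num_classes, landmark_feats.size(1))
    for c in range(num_classes):
        mask = landmark_labels == c
        if mask.any():
            protos[c] = landmark_feats[mask].mean(dim=0)
    return F.normalize(protos, dim=1)

def prototype_alignment_loss(feats, labels, protos, tau=0.05):
    """Cross-entropy over cosine similarities to the class prototypes."""
    logits = F.normalize(feats, dim=1) @ protos.t() / tau
    return F.cross_entropy(logits, labels)
```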
arXiv Detail & Related papers (2021-04-19T08:46:08Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
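Generically, a multi-sample contrastive loss over similar/dissimilar cross-domain pairs can be written as an InfoNCE-style objective driven by a similarity mask, as sketched below; the mask construction, feature normalization, and temperature are assumptions rather than the paper's exact criterion.

```python
# Generic InfoNCE-style multi-sample contrastive loss over a cross-domain
# similarity mask; an assumed formulation, not ILA-DA's exact criterion.
import torch
import torch.nn.functional as F

def multi_sample_contrastive_loss(feat_src, feat_tgt, sim_mask, tau=0.1):
    """Pull each target feature toward source features marked similar in
    sim_mask (bool, shape (N_tgt, N_src)) and push it from the rest."""
    z_s = F.normalize(feat_src, dim=1)
    z_t = F.normalize(feat_tgt, dim=1)
    logits = (z_t @ z_s.t()) / tau                      # pairwise similarities
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    sim = sim_mask.float()
    pos_per_row = sim.sum(dim=1).clamp(min=1.0)
    return -((log_prob * sim).sum(dim=1) / pos_per_row).mean()
```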
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Cross-domain Self-supervised Learning for Domain Adaptation with Few Source Labels [78.95901454696158]
We propose a novel Cross-Domain Self-supervised learning approach for domain adaptation.
Our method significantly boosts target accuracy in the new target domain with few source labels.
arXiv Detail & Related papers (2020-03-18T15:11:07Z)
- Deep Domain-Adversarial Image Generation for Domain Generalisation [115.21519842245752]
Machine learning models typically suffer from the domain shift problem when trained on a source dataset and evaluated on a target dataset of different distribution.
To overcome this problem, domain generalisation (DG) methods aim to leverage data from multiple source domains so that a trained model can generalise to unseen domains.
We propose a novel DG approach based on Deep Domain-Adversarial Image Generation (DDAIG).
arXiv Detail & Related papers (2020-03-12T23:17:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.