Semi-Supervised Domain Adaptation with Prototypical Alignment and
Consistency Learning
- URL: http://arxiv.org/abs/2104.09136v1
- Date: Mon, 19 Apr 2021 08:46:08 GMT
- Title: Semi-Supervised Domain Adaptation with Prototypical Alignment and
Consistency Learning
- Authors: Kai Li, Chang Liu, Handong Zhao, Yulun Zhang, Yun Fu
- Abstract summary: This paper studies how much it can help address domain shifts if we further have a few target samples labeled.
To explore the full potential of landmarks, we incorporate a prototypical alignment (PA) module which calculates a target prototype for each class from the landmarks.
Specifically, we severely perturb the labeled images, making PA non-trivial to achieve and thus promoting model generalizability.
- Score: 86.6929930921905
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain adaptation enhances generalizability of a model across domains with
domain shifts. Most research effort has been spent on Unsupervised Domain
Adaptation (UDA), which trains a model jointly with labeled source data and
unlabeled target data. This paper studies how much it can help address domain
shifts if we further have a few target samples (e.g., one sample per class)
labeled. This is the so-called semi-supervised domain adaptation (SSDA) problem,
and the few labeled target samples are termed ``landmarks''. To explore the
full potential of landmarks, we incorporate a prototypical alignment (PA)
module which calculates a target prototype for each class from the landmarks;
source samples are then aligned with the target prototype from the same class.
To further alleviate label scarcity, we propose a data-augmentation-based
solution. Specifically, we severely perturb the labeled images, making PA
non-trivial to achieve and thus promoting model generalizability. Moreover, we
apply consistency learning to unlabeled target images by perturbing each image
with both light and strong transformations. Then, the strongly
perturbed image receives ``supervised-like'' training using the pseudo label
inferred from the lightly perturbed one. Experiments show that the proposed
method, though simple, achieves significant performance gains over
state-of-the-art methods, and can serve as a plug-and-play component for
various existing UDA methods, improving their adaptation performance when
landmarks are provided. Our code is available at
\url{https://github.com/kailigo/pacl}.
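To make the two components in the abstract concrete, below is a minimal PyTorch-style sketch of prototypical alignment and weak/strong consistency learning. It is an illustration under assumed conventions, not the authors' implementation: the function names, the temperature tau, the confidence threshold, and the prototype-softmax / FixMatch-style loss forms are all assumptions; see the linked repository for the actual code.

import torch
import torch.nn.functional as F

def class_prototypes(landmark_feats, landmark_labels, num_classes):
    # One target prototype per class: mean of the (few) landmark features,
    # L2-normalized so similarity to prototypes is cosine similarity.
    # Assumes at least one landmark per class, as in the SSDA setting.
    protos = torch.stack([
        landmark_feats[landmark_labels == c].mean(dim=0)
        for c in range(num_classes)
    ])
    return F.normalize(protos, dim=1)

def prototypical_alignment_loss(source_feats, source_labels, protos, tau=0.05):
    # Pull each source feature toward the target prototype of its own class
    # via a prototype-based softmax cross-entropy (assumed loss form).
    source_feats = F.normalize(source_feats, dim=1)
    logits = source_feats @ protos.t() / tau
    return F.cross_entropy(logits, source_labels)

def consistency_loss(model, weak_imgs, strong_imgs, threshold=0.95):
    # The pseudo label from the lightly perturbed view supervises the strongly
    # perturbed view, masked by prediction confidence (FixMatch-style).
    with torch.no_grad():
        probs = F.softmax(model(weak_imgs), dim=1)
        conf, pseudo = probs.max(dim=1)
        mask = (conf >= threshold).float()
    per_sample = F.cross_entropy(model(strong_imgs), pseudo, reduction="none")
    return (per_sample * mask).mean()

In training, these two terms would be added to the usual supervised cross-entropy on source and landmark samples, and the severe perturbation of labeled images mentioned above would be applied before computing features for the alignment loss.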
Related papers
- AdaptDiff: Cross-Modality Domain Adaptation via Weak Conditional Semantic Diffusion for Retinal Vessel Segmentation [10.958821619282748]
We present an unsupervised domain adaptation (UDA) method named AdaptDiff.
It enables a retinal vessel segmentation network trained on fundus photography (FP) to produce satisfactory results on unseen modalities.
Our results demonstrate a significant improvement in segmentation performance across all unseen datasets.
arXiv Detail & Related papers (2024-10-06T23:04:29Z)
- Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data are different from testing data.
We propose a novel adversarial information network (AIN) to address it.
arXiv Detail & Related papers (2023-05-23T02:14:11Z)
- Semi-supervised Domain Adaptation via Prototype-based Multi-level Learning [4.232614032390374]
In semi-supervised domain adaptation (SSDA), a few labeled target samples of each class help the model to transfer knowledge representation from the fully labeled source domain to the target domain.
We propose a Prototype-based Multi-level Learning (ProML) framework to better tap the potential of labeled target samples.
arXiv Detail & Related papers (2023-05-04T10:09:30Z)
- Robust Target Training for Multi-Source Domain Adaptation [110.77704026569499]
We propose a novel Bi-level Optimization based Robust Target Training (BORT$^2$) method for MSDA.
Our proposed method achieves state-of-the-art performance on three MSDA benchmarks, including the large-scale DomainNet dataset.
arXiv Detail & Related papers (2022-10-04T15:20:01Z)
- Style Mixing and Patchwise Prototypical Matching for One-Shot Unsupervised Domain Adaptive Semantic Segmentation [21.01132797297286]
In one-shot unsupervised domain adaptation, segmentors only see one unlabeled target image during training.
We propose a new OSUDA method that can effectively relieve such computational burden.
Our method achieves new state-of-the-art performance on two commonly used benchmarks for domain adaptive semantic segmentation.
arXiv Detail & Related papers (2021-12-09T02:47:46Z)
- Your Classifier can Secretly Suffice Multi-Source Domain Adaptation [72.47706604261992]
Multi-Source Domain Adaptation (MSDA) deals with the transfer of task knowledge from multiple labeled source domains to an unlabeled target domain.
We present a different perspective to MSDA wherein deep models are observed to implicitly align the domains under label supervision.
arXiv Detail & Related papers (2021-03-20T12:44:13Z)
- Surprisingly Simple Semi-Supervised Domain Adaptation with Pretraining and Consistency [93.89773386634717]
Visual domain adaptation involves learning to classify images from a target visual domain using labels available in a different source domain.
We show that in the presence of a few target labels, simple techniques like self-supervision (via rotation prediction) and consistency regularization can be effective without any adversarial alignment to learn a good target classifier.
Our Pretraining and Consistency (PAC) approach can achieve state-of-the-art accuracy on this semi-supervised domain adaptation task, surpassing multiple adversarial domain alignment methods across multiple datasets.
arXiv Detail & Related papers (2021-01-29T18:40:17Z)
- Synthetic-to-Real Domain Adaptation for Lane Detection [5.811502603310248]
We explore learning from abundant, randomly generated synthetic data, together with unlabeled or partially labeled target domain data.
This poses the challenge of adapting models learned on the unrealistic synthetic domain to real images.
We develop a novel autoencoder-based approach that uses synthetic labels unaligned with particular images for adapting to target domain data.
arXiv Detail & Related papers (2020-07-08T10:54:21Z)
- Alleviating Semantic-level Shift: A Semi-supervised Domain Adaptation Method for Semantic Segmentation [97.8552697905657]
A key challenge of this task is how to alleviate the data distribution discrepancy between the source and target domains.
We propose Alleviating Semantic-level Shift (ASS), which can successfully promote the distribution consistency from both global and local views.
We apply our ASS to two domain adaptation tasks, from GTA5 to Cityscapes and from Synthia to Cityscapes.
arXiv Detail & Related papers (2020-04-02T03:25:05Z)
This list is automatically generated from the titles and abstracts of the papers on this site.