Generation, augmentation, and alignment: A pseudo-source domain based
method for source-free domain adaptation
- URL: http://arxiv.org/abs/2109.04015v1
- Date: Thu, 9 Sep 2021 03:21:58 GMT
- Title: Generation, augmentation, and alignment: A pseudo-source domain based
method for source-free domain adaptation
- Authors: Yuntao Du, Haiyang Yang, Mingcai Chen, Juan Jiang, Hongtao Luo,
Chongjun Wang
- Abstract summary: Conventional unsupervised domain adaptation methods need to access both labeled source samples and unlabeled target samples simultaneously to train the model.
In this paper, inspired by this observation, we propose a novel method based on the pseudo-source domain.
The results on three real-world datasets verify the effectiveness of the proposed method.
- Score: 2.774526723254576
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Conventional unsupervised domain adaptation (UDA) methods need to
access both labeled source samples and unlabeled target samples simultaneously
to train the model. In some scenarios, however, the source samples are not
available to the target domain due to data privacy and safety concerns. To
overcome this challenge, source-free domain adaptation (SFDA) has recently
attracted the attention of researchers; in this setting, only a trained source
model and unlabeled target samples are given. Existing SFDA methods either
adopt a pseudo-label-based strategy or generate more samples. However, these
methods do not explicitly reduce the distribution shift across domains, which
is the key to good adaptation. Although no source samples are available, we
find that some target samples are very similar to the source domain and can be
used to approximate it. This approximated domain is denoted as the
pseudo-source domain. Inspired by this observation, we propose a novel method
based on the pseudo-source domain. The proposed method first generates and
augments the pseudo-source domain, and then performs distribution alignment
with four novel losses built on a pseudo-label-based strategy. Among them, a
domain adversarial loss is introduced between the pseudo-source domain and the
remaining target domain to reduce the distribution shift. The results on three
real-world datasets verify the effectiveness of the proposed method.
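To make the core idea concrete, below is a minimal, hypothetical PyTorch-style sketch of the two steps the abstract describes: building a pseudo-source set from target samples that the frozen source model classifies with high confidence, and applying a domain adversarial loss between that set and the remaining target samples. The names (`source_model`, `feature_extractor`, `discriminator`) and the confidence threshold are illustrative assumptions, not the authors' exact architecture; the full method also augments the pseudo-source domain and adds three further losses that are not shown here.

```python
# Illustrative sketch only; assumes a PyTorch setup and hypothetical modules.
import torch
import torch.nn.functional as F

@torch.no_grad()
def build_pseudo_source(source_model, target_loader, tau=0.95):
    """Pick target samples the frozen source model classifies with confidence
    >= tau; these approximate the (inaccessible) source distribution and get
    pseudo-labels. The rest form the remaining target domain."""
    pseudo_x, pseudo_y, rest_x = [], [], []
    for x, _ in target_loader:  # assumed to yield unlabeled target batches
        probs = F.softmax(source_model(x), dim=1)
        conf, pred = probs.max(dim=1)
        keep = conf >= tau
        pseudo_x.append(x[keep]); pseudo_y.append(pred[keep])
        rest_x.append(x[~keep])
    return (torch.cat(pseudo_x), torch.cat(pseudo_y)), torch.cat(rest_x)

class GradReverse(torch.autograd.Function):
    """Gradient reversal layer used for the domain adversarial term."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)
    @staticmethod
    def backward(ctx, grad):
        return -ctx.lam * grad, None

def domain_adversarial_loss(feature_extractor, discriminator,
                            x_pseudo_src, x_rest, lam=1.0):
    """Binary domain loss between pseudo-source and remaining target samples.
    Reversed gradients push the feature extractor toward features the
    discriminator cannot tell apart, reducing the distribution shift."""
    f = feature_extractor(torch.cat([x_pseudo_src, x_rest]))
    logits = discriminator(GradReverse.apply(f, lam)).squeeze(1)  # (N, 1) -> (N,)
    domain = torch.cat([torch.ones(len(x_pseudo_src)),
                        torch.zeros(len(x_rest))]).to(logits.device)
    return F.binary_cross_entropy_with_logits(logits, domain)
```

The gradient reversal trick trains the discriminator to separate the two groups while the feature extractor learns to fool it, which is the standard way a domain adversarial loss reduces the shift between two distributions.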
Related papers
- Noisy Universal Domain Adaptation via Divergence Optimization for Visual
Recognition [30.31153237003218]
A novel scenario named Noisy UniDA is proposed to transfer knowledge from a labeled source domain to an unlabeled target domain.
A multi-head convolutional neural network framework is proposed to address all of the challenges of Noisy UniDA at once.
arXiv Detail & Related papers (2023-04-20T14:18:38Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive
Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where each group is treated with tailored learning objectives.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch (see the MMD sketch after this list).
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- MiddleGAN: Generate Domain Agnostic Samples for Unsupervised Domain
Adaptation [35.00283311401667]
We propose to let the classifier that performs the final classification task on the target domain implicitly learn domain-invariant features.
This is achieved by feeding the classifier, during training, generated fake samples that are similar to samples from both the source and target domains.
We propose a novel variation of generative adversarial networks (GAN), called the MiddleGAN, that generates fake samples that are similar to samples from both the source and target domains.
arXiv Detail & Related papers (2022-11-06T15:09:36Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address the SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Low-confidence Samples Matter for Domain Adaptation [47.552605279925736]
Domain adaptation (DA) aims to transfer knowledge from a label-rich source domain to a related but label-scarce target domain.
We propose a novel contrastive learning method by processing low-confidence samples.
We evaluate the proposed method in both unsupervised and semi-supervised DA settings.
arXiv Detail & Related papers (2022-02-06T15:45:45Z)
- Source-Free Domain Adaptive Fundus Image Segmentation with Denoised
Pseudo-Labeling [56.98020855107174]
Domain adaptation typically requires access to source domain data in order to utilize their distribution information for domain alignment with the target data.
In many real-world scenarios, the source data may not be accessible during model adaptation in the target domain due to privacy issues.
We present a novel denoised pseudo-labeling method for this problem, which effectively makes use of the source model and unlabeled target data.
arXiv Detail & Related papers (2021-09-19T06:38:21Z)
- Inferring Latent Domains for Unsupervised Deep Domain Adaptation [54.963823285456925]
Unsupervised Domain Adaptation (UDA) refers to the problem of learning a model in a target domain where labeled data are not available.
This paper introduces a novel deep architecture which addresses the problem of UDA by automatically discovering latent domains in visual datasets.
We evaluate our approach on publicly available benchmarks, showing that it outperforms state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-03-25T14:33:33Z)
- Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency under transformation of target data.
Our model first discovers confident predictions and performs classification with pseudo-labels.
As a result, unlabeled data can be classified into discriminative classes that coincide with either the source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)
- Unsupervised Domain Adaptation in the Absence of Source Data [0.7366405857677227]
We propose an unsupervised method for adapting a source classifier to a target domain that varies from the source domain along natural axes.
We validate our method in scenarios where the distribution shift involves brightness, contrast, and rotation and show that it outperforms fine-tuning baselines in scenarios with limited labeled data.
arXiv Detail & Related papers (2020-07-20T16:22:14Z)
- Sparsely-Labeled Source Assisted Domain Adaptation [64.75698236688729]
This paper proposes a novel Sparsely-Labeled Source Assisted Domain Adaptation (SLSA-DA) algorithm.
Due to the label scarcity problem, the projected clustering is conducted on both the source and target domains.
arXiv Detail & Related papers (2020-05-08T15:37:35Z)
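For reference, the "Divide and Contrast" entry above aligns source-like and target-specific samples with a memory bank-based MMD loss. The following is a minimal, memory bank-free sketch of a plain RBF-kernel MMD between two feature batches; it illustrates the general alignment term only and is not that paper's exact implementation.

```python
# Biased estimator of squared MMD with a Gaussian (RBF) kernel; illustrative only.
import torch

def mmd_rbf(f_src_like: torch.Tensor, f_tgt_specific: torch.Tensor,
            sigma: float = 1.0) -> torch.Tensor:
    """Squared MMD: E[k(s, s')] + E[k(t, t')] - 2 E[k(s, t)]."""
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)            # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))  # Gaussian kernel values
    return (kernel(f_src_like, f_src_like).mean()
            + kernel(f_tgt_specific, f_tgt_specific).mean()
            - 2 * kernel(f_src_like, f_tgt_specific).mean())
```

Minimizing this term on features from the two groups pulls their distributions together in the kernel-induced feature space.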
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.