Generative Model Based Noise Robust Training for Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2303.05734v1
- Date: Fri, 10 Mar 2023 06:43:55 GMT
- Title: Generative Model Based Noise Robust Training for Unsupervised Domain
Adaptation
- Authors: Zhongying Deng, Da Li, Junjun He, Yi-Zhe Song, Tao Xiang
- Abstract summary: This paper proposes a Generative model-based Noise-Robust Training method (GeNRT), which eliminates domain shift while mitigating label noise.
Experiments on Office-Home, PACS, and Digit-Five show that our GeNRT achieves comparable performance to state-of-the-art methods.
- Score: 108.11783463263328
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Target domain pseudo-labelling has shown effectiveness in unsupervised domain
adaptation (UDA). However, pseudo-labels of unlabeled target domain data are
inevitably noisy due to the distribution shift between source and target
domains. This paper proposes a Generative model-based Noise-Robust Training
method (GeNRT), which eliminates domain shift while mitigating label noise.
GeNRT incorporates a Distribution-based Class-wise Feature Augmentation (D-CFA)
and a Generative-Discriminative classifier Consistency (GDC), both based on the
class-wise target distributions modelled by generative models. D-CFA minimizes
the domain gap by augmenting the source data with distribution-sampled target
features, and trains a noise-robust discriminative classifier by using target
domain knowledge from the generative models. GDC regards all the class-wise
generative models as generative classifiers and enforces a consistency
regularization between the generative and discriminative classifiers. It
exploits an ensemble of target knowledge from all the generative models to
train a noise-robust discriminative classifier and eventually gets
theoretically linked to the Ben-David domain adaptation theorem for reducing
the domain gap. Extensive experiments on Office-Home, PACS, and Digit-Five show
that our GeNRT achieves comparable performance to state-of-the-art methods
under single-source and multi-source UDA settings.
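The two components described above can be illustrated with a minimal NumPy sketch. This is an illustration only, assuming diagonal Gaussians as the class-wise generative models; the function names (`fit_class_gaussians`, `dcfa_augment`, `generative_posterior`) are hypothetical and do not come from the paper's code.

```python
import numpy as np

rng = np.random.default_rng(0)

def fit_class_gaussians(feats, labels, n_classes):
    """Fit a diagonal Gaussian to the (pseudo-labelled) target features of
    each class -- a stand-in for GeNRT's class-wise generative models."""
    stats = []
    for c in range(n_classes):
        x = feats[labels == c]
        stats.append((x.mean(axis=0), x.var(axis=0) + 1e-6))
    return stats

def dcfa_augment(src_feats, src_labels, stats):
    """D-CFA (sketch): augment each source feature with a sample drawn from
    the target-domain Gaussian of the same class, injecting target
    knowledge into discriminative-classifier training."""
    aug = np.empty_like(src_feats, dtype=float)
    for i, y in enumerate(src_labels):
        mu, var = stats[y]
        aug[i] = rng.normal(mu, np.sqrt(var))
    return aug

def generative_posterior(feats, stats):
    """GDC (sketch): treat the class-wise Gaussians as one generative
    classifier by normalising per-class log-likelihoods into a posterior;
    a consistency loss would then pull the discriminative classifier's
    softmax towards this posterior."""
    logps = np.stack([
        -0.5 * (((feats - mu) ** 2) / var + np.log(2 * np.pi * var)).sum(axis=1)
        for mu, var in stats
    ], axis=1)
    logps -= logps.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(logps)
    return p / p.sum(axis=1, keepdims=True)
```

In this reading, D-CFA narrows the domain gap at the feature level while GDC's generative-vs-discriminative agreement acts as the noise-robust consistency regulariser.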
Related papers
- ProtoGMM: Multi-prototype Gaussian-Mixture-based Domain Adaptation Model for Semantic Segmentation [0.8213829427624407]
Domain adaptive semantic segmentation aims to generate accurate and dense predictions for an unlabeled target domain.
We propose the ProtoGMM model, which incorporates the GMM into contrastive losses to perform guided contrastive learning.
To achieve increased intra-class semantic similarity, decreased inter-class similarity, and domain alignment between the source and target domains, we employ multi-prototype contrastive learning.
arXiv Detail & Related papers (2024-06-27T14:50:50Z)
- CNG-SFDA: Clean-and-Noisy Region Guided Online-Offline Source-Free Domain Adaptation [6.222371087167951]
Source-Free Domain Adaptation (SFDA) aims to adapt a model trained on the source domain to the target domain.
Handling false labels in the target domain is crucial because they negatively impact model performance.
We conduct extensive experiments on multiple datasets in online/offline SFDA settings; the results demonstrate that our method, CNG-SFDA, achieves state-of-the-art performance in most cases.
arXiv Detail & Related papers (2024-01-26T01:29:37Z)
- Divide and Contrast: Source-free Domain Adaptation via Adaptive Contrastive Learning [122.62311703151215]
Divide and Contrast (DaC) aims to connect the good ends of both worlds while bypassing their limitations.
DaC divides the target data into source-like and target-specific samples, where either group of samples is treated with tailored goals.
We further align the source-like domain with the target-specific samples using a memory bank-based Maximum Mean Discrepancy (MMD) loss to reduce the distribution mismatch.
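The distribution-matching step above can be sketched with a plain RBF-kernel MMD. This is a generic squared-MMD estimator, not DaC's implementation; the memory-bank machinery is omitted and `sigma` is an assumed bandwidth.

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Squared Maximum Mean Discrepancy between two feature sets with an
    RBF kernel: MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)].
    A loss of this kind penalises distribution mismatch between the
    source-like and target-specific feature sets."""
    def k(a, b):
        # Pairwise squared Euclidean distances, then the RBF kernel.
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()
```

By construction the estimate is zero when both sets are identical and grows as the two feature distributions drift apart.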
arXiv Detail & Related papers (2022-11-12T09:21:49Z)
- Polycentric Clustering and Structural Regularization for Source-free Unsupervised Domain Adaptation [20.952542421577487]
Source-Free Domain Adaptation (SFDA) aims to solve the domain adaptation problem by transferring the knowledge learned from a pre-trained source model to an unseen target domain.
Most existing methods assign pseudo-labels to the target data by generating feature prototypes.
In this paper, a novel framework named PCSR is proposed to tackle SFDA via a novel intra-class Polycentric Clustering and Structural Regularization strategy.
arXiv Detail & Related papers (2022-10-14T02:20:48Z)
- Adaptive Domain Generalization via Online Disagreement Minimization [17.215683606365445]
Domain Generalization aims to safely transfer a model to unseen target domains.
AdaODM adaptively modifies the source model at test time for different target domains.
Results show AdaODM stably improves the generalization capacity on unseen domains.
arXiv Detail & Related papers (2022-08-03T11:51:11Z)
- Deep face recognition with clustering based domain adaptation [57.29464116557734]
We propose a new clustering-based domain adaptation method designed for face recognition task in which the source and target domain do not share any classes.
Our method effectively learns discriminative target features by aligning the feature domains globally while, at the same time, distinguishing the target clusters locally.
arXiv Detail & Related papers (2022-05-27T12:29:11Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Towards Uncovering the Intrinsic Data Structures for Unsupervised Domain Adaptation using Structurally Regularized Deep Clustering [119.88565565454378]
Unsupervised domain adaptation (UDA) learns classification models that make predictions for unlabeled data in a target domain.
We propose a hybrid model of Structurally Regularized Deep Clustering, which integrates the regularized discriminative clustering of target data with a generative one.
Our proposed H-SRDC outperforms all the existing methods under both the inductive and transductive settings.
arXiv Detail & Related papers (2020-12-08T08:52:00Z)
- Towards Fair Cross-Domain Adaptation via Generative Learning [50.76694500782927]
Domain Adaptation (DA) aims to adapt a model trained on a well-labeled source domain to an unlabeled target domain with a different distribution.
We develop a novel Generative Few-shot Cross-domain Adaptation (GFCA) algorithm for fair cross-domain classification.
arXiv Detail & Related papers (2020-03-04T23:25:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.