Domain-Guided Conditional Diffusion Model for Unsupervised Domain
Adaptation
- URL: http://arxiv.org/abs/2309.14360v1
- Date: Sat, 23 Sep 2023 07:09:44 GMT
- Title: Domain-Guided Conditional Diffusion Model for Unsupervised Domain
Adaptation
- Authors: Yulong Zhang, Shuhao Chen, Weisen Jiang, Yu Zhang, Jiangang Lu, and
James T. Kwok
- Abstract summary: We propose the DomAin-guided Conditional Diffusion Model (DACDM) to generate high-fidelity and diverse samples for the target domain.
The generated samples help existing UDA methods transfer from the source domain to the target domain more easily, thus improving the transfer performance.
- Score: 23.668005880581248
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Limited transferability hinders the performance of deep learning models when
applied to new application scenarios. Recently, Unsupervised Domain Adaptation
(UDA) has achieved significant progress in addressing this issue via learning
domain-invariant features. However, the performance of existing UDA methods is
constrained by the large domain shift and limited target domain data. To
alleviate these issues, we propose the DomAin-guided Conditional Diffusion Model
(DACDM) to generate high-fidelity and diverse samples for the target domain. In
DACDM, class information is introduced so that the labels of the generated
samples can be controlled, and a domain classifier is further introduced to
guide the generated samples toward the target domain. The
generated samples help existing UDA methods transfer from the source domain to
the target domain more easily, thus improving the transfer performance.
Extensive experiments on various benchmarks demonstrate that DACDM brings a
large improvement to the performance of existing UDA methods.
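As a rough illustration of the idea described in the abstract (not the authors' implementation), the sketch below shows one reverse-diffusion sampling step in which a class-conditional noise predictor is combined with guidance from a domain classifier, so that generated samples are nudged toward the target domain. The names eps_model, domain_clf, guidance_scale, and target_domain are hypothetical, and using the schedule variance beta_t as the guidance scaling is an assumption borrowed from standard classifier guidance.

```python
import torch


def ddpm_step_with_domain_guidance(eps_model, domain_clf, x_t, t, y,
                                   alphas, alphas_bar, betas,
                                   guidance_scale=1.0, target_domain=1):
    """One ancestral sampling step x_t -> x_{t-1}, nudged toward the target domain.

    alphas, alphas_bar, betas are assumed to be 1-D tensors holding the DDPM
    noise schedule; t is an integer timestep index; y is the desired class label.
    """
    # Class-conditional noise prediction eps_theta(x_t, t, y) and the
    # standard DDPM posterior mean of p(x_{t-1} | x_t).
    with torch.no_grad():
        eps = eps_model(x_t, t, y)
        a_t, ab_t = alphas[t], alphas_bar[t]
        mean = (x_t - (1.0 - a_t) / torch.sqrt(1.0 - ab_t) * eps) / torch.sqrt(a_t)

    # Domain guidance: gradient of log p(domain = target | x_t) w.r.t. x_t,
    # analogous to classifier guidance but computed with a domain classifier.
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_prob = torch.log_softmax(domain_clf(x_in, t), dim=-1)[:, target_domain].sum()
        grad = torch.autograd.grad(log_prob, x_in)[0]

    # Shift the posterior mean along the domain-classifier gradient.
    mean = mean + guidance_scale * betas[t] * grad

    noise = torch.randn_like(x_t) if t > 0 else torch.zeros_like(x_t)
    return mean + torch.sqrt(betas[t]) * noise
```

In this sketch, the class label y controls which class the sample belongs to, while the domain-classifier gradient pushes the sample toward the target domain; the guidance_scale trades off sample fidelity against how strongly generation is steered to that domain.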
Related papers
- Revisiting the Domain Shift and Sample Uncertainty in Multi-source
Active Domain Transfer [69.82229895838577]
Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a new target domain by actively selecting a limited number of target data to annotate.
This setting neglects the more practical scenario where training data are collected from multiple sources.
This motivates us to target a new and challenging setting of knowledge transfer that extends ADA from a single source domain to multiple source domains.
arXiv Detail & Related papers (2023-11-21T13:12:21Z) - Unsupervised Domain Adaptation via Domain-Adaptive Diffusion [31.802163238282343]
Unsupervised Domain Adaptation (UDA) is quite challenging due to the large distribution discrepancy between the source domain and the target domain.
Inspired by diffusion models which have strong capability to gradually convert data distributions across a large gap, we consider to explore the diffusion technique to handle the challenging UDA task.
Our method outperforms the current state-of-the-art methods by a large margin on three widely used UDA datasets.
arXiv Detail & Related papers (2023-08-26T14:28:18Z) - AVATAR: Adversarial self-superVised domain Adaptation network for TARget
domain [11.764601181046496]
This paper presents an unsupervised domain adaptation (UDA) method for predicting unlabeled target domain data.
We propose the Adversarial self-superVised domain Adaptation network for the TARget domain (AVATAR) algorithm.
Our proposed model significantly outperforms state-of-the-art methods on three UDA benchmarks.
arXiv Detail & Related papers (2023-04-28T20:31:56Z) - Diffusion-based Target Sampler for Unsupervised Domain Adaptation [5.025971841729201]
Large domain shifts and sample scarcity in the target domain cause existing UDA methods to achieve suboptimal performance.
We propose a plug-and-play Diffusion-based Target Sampler (DTS) to generate high-fidelity and diverse pseudo target samples.
The generated samples can well simulate the data distribution of the target domain and help existing UDA methods transfer from the source domain to the target domain more easily.
arXiv Detail & Related papers (2023-03-17T02:07:43Z) - A Novel Mix-normalization Method for Generalizable Multi-source Person
Re-identification [49.548815417844786]
Person re-identification (Re-ID) has achieved great success in the supervised scenario.
It is difficult to directly transfer the supervised model to arbitrary unseen domains due to the model overfitting to the seen source domains.
We propose MixNorm, which consists of domain-aware mix-normalization (DMN) and domain-aware center regularization (DCR).
arXiv Detail & Related papers (2022-01-24T18:09:38Z) - Decompose to Adapt: Cross-domain Object Detection via Feature
Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
Our DDF method outperforms state-of-the-art methods on four benchmark UDA object detection tasks, demonstrating its effectiveness and wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z) - A New Bidirectional Unsupervised Domain Adaptation Segmentation
Framework [27.13101555533594]
Unsupervised domain adaptation (UDA) techniques are proposed to bridge the gap between different domains.
In this paper, we propose a bidirectional UDA framework based on disentangled representation learning for equally competent two-way UDA performance.
arXiv Detail & Related papers (2021-08-18T05:25:11Z) - ConDA: Continual Unsupervised Domain Adaptation [0.0]
Domain Adaptation (DA) techniques are important for overcoming the domain shift between the source domain used for training and the target domain where testing takes place.
Current DA methods assume that the entire target domain is available during adaptation, which may not hold in practice.
This paper considers a more realistic scenario, where target data become available in smaller batches and adaptation on the entire target domain is not feasible.
arXiv Detail & Related papers (2021-03-19T23:20:41Z) - Class-Incremental Domain Adaptation [56.72064953133832]
We introduce a practical Domain Adaptation (DA) paradigm called Class-Incremental Domain Adaptation (CIDA).
Existing DA methods tackle domain-shift but are unsuitable for learning novel target-domain classes.
Our approach yields superior performance as compared to both DA and CI methods in the CIDA paradigm.
arXiv Detail & Related papers (2020-08-04T07:55:03Z) - Dual Distribution Alignment Network for Generalizable Person
Re-Identification [174.36157174951603]
Domain generalization (DG) serves as a promising solution to handle person Re-Identification (Re-ID).
We present a Dual Distribution Alignment Network (DDAN) which handles this challenge by selectively aligning distributions of multiple source domains.
We evaluate our DDAN on a large-scale Domain Generalization Re-ID (DG Re-ID) benchmark.
arXiv Detail & Related papers (2020-07-27T00:08:07Z) - Multi-source Domain Adaptation in the Deep Learning Era: A Systematic
Survey [53.656086832255944]
Multi-source domain adaptation (MDA) is a powerful extension of unsupervised domain adaptation in which the labeled data may be collected from multiple sources.
MDA has attracted increasing attention in both academia and industry.
arXiv Detail & Related papers (2020-02-26T08:07:58Z)