Diffusion-based Target Sampler for Unsupervised Domain Adaptation
- URL: http://arxiv.org/abs/2303.12724v1
- Date: Fri, 17 Mar 2023 02:07:43 GMT
- Title: Diffusion-based Target Sampler for Unsupervised Domain Adaptation
- Authors: Yulong Zhang, Shuhao Chen, Yu Zhang, Jiangang Lu
- Abstract summary: Large domain shifts and sample scarcity in the target domain cause existing UDA methods to achieve suboptimal performance.
We propose a plug-and-play Diffusion-based Target Sampler (DTS) to generate high-fidelity and diverse pseudo target samples.
The generated samples closely simulate the data distribution of the target domain and help existing UDA methods transfer from the source domain to the target domain more easily.
- Score: 5.025971841729201
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Limited transferability hinders the performance of deep learning models when
applied to new application scenarios. Recently, unsupervised domain adaptation
(UDA) has achieved significant progress in addressing this issue via learning
domain-invariant features. However, large domain shifts and sample scarcity
in the target domain cause existing UDA methods to achieve suboptimal
performance. To alleviate these issues, we propose a plug-and-play
Diffusion-based Target Sampler (DTS) to generate high-fidelity and diverse pseudo target samples. By
introducing class-conditional information, the labels of the generated target
samples can be controlled. The generated samples closely simulate the data
distribution of the target domain and help existing UDA methods transfer from
the source domain to the target domain more easily, thus improving the transfer
performance. Extensive experiments on various benchmarks demonstrate that the
performance of existing UDA methods can be greatly improved through the
proposed DTS method.
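The class-conditional sampling idea in the abstract can be sketched as a standard DDPM-style reverse-diffusion loop that conditions each denoising step on the desired class label. The sketch below is a toy illustration under assumed details: `toy_denoiser` is a hypothetical placeholder for the trained class-conditional noise predictor that DTS would use, and the schedule values are generic, not taken from the paper.

```python
import numpy as np

def toy_denoiser(x, t, label, num_classes=3):
    # Hypothetical stand-in for a trained class-conditional noise predictor.
    # It predicts noise that pulls samples toward a class-dependent mean,
    # so the label controls which mode the sampler converges to.
    class_means = np.linspace(-1.0, 1.0, num_classes)
    return x - class_means[label]

def sample_pseudo_targets(label, n_samples=4, dim=8, steps=50, seed=0):
    """DDPM-style reverse process: start from Gaussian noise and iteratively
    denoise, conditioning every step on the desired class label."""
    rng = np.random.default_rng(seed)
    betas = np.linspace(1e-4, 0.02, steps)      # generic noise schedule
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    x = rng.standard_normal((n_samples, dim))   # x_T ~ N(0, I)
    for t in reversed(range(steps)):
        eps_hat = toy_denoiser(x, t, label)
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        x = (x - coef * eps_hat) / np.sqrt(alphas[t])
        if t > 0:  # add noise at every step except the last
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x  # pseudo target samples with a known (controlled) label

pseudo_batch = sample_pseudo_targets(label=1)
```

In the plug-and-play setting the paper describes, such labeled pseudo target samples would simply be appended to the target-domain pool consumed by an existing UDA method; no change to that method's training loop is implied.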
Related papers
- Domain-Guided Conditional Diffusion Model for Unsupervised Domain
Adaptation [23.668005880581248]
We propose the DomAin-guided Conditional Diffusion Model (DACDM) to generate high-fidelity and diverse samples for the target domain.
The generated samples help existing UDA methods transfer from the source domain to the target domain more easily, thus improving the transfer performance.
arXiv Detail & Related papers (2023-09-23T07:09:44Z) - Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying novel target-domain classes during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z) - Self-training through Classifier Disagreement for Cross-Domain Opinion
Target Extraction [62.41511766918932]
Opinion target extraction (OTE) or aspect extraction (AE) is a fundamental task in opinion mining.
Recent work focuses on cross-domain OTE, which is typically encountered in real-world scenarios.
We propose a new SSL approach that selects unlabelled target samples on which the outputs of a domain-specific teacher network and a student network disagree.
arXiv Detail & Related papers (2023-02-28T16:31:17Z) - MADAv2: Advanced Multi-Anchor Based Active Domain Adaptation
Segmentation [98.09845149258972]
We introduce active sample selection to assist domain adaptation regarding the semantic segmentation task.
With only a little workload to manually annotate these samples, the distortion of the target-domain distribution can be effectively alleviated.
A powerful semi-supervised domain adaptation strategy is proposed to alleviate the long-tail distribution problem.
arXiv Detail & Related papers (2023-01-18T07:55:22Z) - Decompose to Adapt: Cross-domain Object Detection via Feature
Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method is demonstrated to be effective with wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with wide ranges of application potentials.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - Multi-Anchor Active Domain Adaptation for Semantic Segmentation [25.93409207335442]
Unsupervised domain adaptation has proven to be an effective approach for alleviating the intensive workload of manual annotation.
We propose to introduce a novel multi-anchor based active learning strategy to assist domain adaptation regarding the semantic segmentation task.
arXiv Detail & Related papers (2021-08-18T07:33:13Z) - Instance Level Affinity-Based Transfer for Unsupervised Domain
Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets.
arXiv Detail & Related papers (2021-04-03T01:33:14Z) - Dynamic Domain Adaptation for Efficient Inference [12.713628738434881]
Domain adaptation (DA) enables knowledge transfer from a labeled source domain to an unlabeled target domain.
Most prior DA approaches leverage complicated and powerful deep neural networks to improve the adaptation capacity.
We propose a dynamic domain adaptation (DDA) framework, which can simultaneously achieve efficient target inference in low-resource scenarios.
arXiv Detail & Related papers (2021-03-26T08:53:16Z) - Stochastic Adversarial Gradient Embedding for Active Domain Adaptation [4.514832807541817]
Unsupervised Domain Adaptation (UDA) aims to bridge the gap between a source domain, where labelled data are available, and a target domain represented only by unlabelled data.
This paper addresses this problem by using active learning to annotate a small budget of target data.
We introduce Stochastic Adversarial Gradient Embedding (SAGE), a framework that makes a triple contribution to active domain adaptation (ADA).
arXiv Detail & Related papers (2020-12-03T11:28:32Z)
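Several entries above (e.g. ILA-DA) drive domain alignment with a multi-sample contrastive loss over similar and dissimilar source/target pairs. The following is a generic InfoNCE-style sketch of that objective, not ILA-DA's exact formulation; the function name and temperature value are illustrative assumptions.

```python
import numpy as np

def multi_sample_contrastive_loss(anchor, positives, negatives, tau=0.1):
    """Generic multi-sample contrastive objective: pull the anchor feature
    toward several positive (similar) samples and push it away from several
    negative (dissimilar) samples, as used for domain alignment."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    pos = sum(np.exp(cos(anchor, p) / tau) for p in positives)
    neg = sum(np.exp(cos(anchor, n) / tau) for n in negatives)
    # Loss is small when positives dominate the softmax mass.
    return -np.log(pos / (pos + neg))
```

In a UDA setting, the anchor would be a target feature, the positives source features believed to share its class, and the negatives features from other classes, so minimizing the loss aligns the two domains class by class.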
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.