Transcending Domains through Text-to-Image Diffusion: A Source-Free
Approach to Domain Adaptation
- URL: http://arxiv.org/abs/2310.01701v4
- Date: Tue, 6 Feb 2024 21:50:42 GMT
- Title: Transcending Domains through Text-to-Image Diffusion: A Source-Free
Approach to Domain Adaptation
- Authors: Shivang Chopra, Suraj Kothawade, Houda Aynaou, Aman Chadha
- Abstract summary: Domain Adaptation (DA) is a method for enhancing a model's performance on a target domain with inadequate annotated data.
We propose a novel framework for SFDA that generates source data using a text-to-image diffusion model trained on the target domain samples.
- Score: 6.649910168731417
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Domain Adaptation (DA) is a method for enhancing a model's performance on a
target domain with inadequate annotated data by applying the information the
model has acquired from a related source domain with sufficient labeled data.
The escalating enforcement of data-privacy regulations such as HIPAA, COPPA,
and FERPA has sparked a heightened interest in adapting models to novel
domains while circumventing the need for direct access to the source data, a
problem known as Source-Free Domain Adaptation (SFDA). In this paper, we
propose a novel framework for SFDA that generates source data using a
text-to-image diffusion model trained on the target domain samples. Our method
starts by training a text-to-image diffusion model on the labeled target domain
samples, which is then fine-tuned using the pre-trained source model to
generate samples close to the source data. Finally, we use Domain Adaptation
techniques to align the artificially generated source data with the target
domain data, resulting in significant performance improvements of the model on
the target domain. Through extensive comparison against several baselines on
the standard Office-31, Office-Home, and VisDA benchmarks, we demonstrate the
effectiveness of our approach for the SFDA task.
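The abstract describes a three-stage pipeline: train a text-to-image diffusion model on the target samples, fine-tune it with the pre-trained source model so that it generates source-like data, and then apply standard domain adaptation to align the generated source data with the target data. The abstract does not name the alignment technique used in the last stage, so the sketch below shows only one common instantiation, a DANN-style adversarial alignment with gradient reversal; the module definitions, tensor shapes, and random placeholder batches are illustrative assumptions rather than the paper's implementation.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; reverses (and scales) gradients on backward."""
    @staticmethod
    def forward(ctx, x, lambd):
        ctx.lambd = lambd
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambd * grad_output, None

def grad_reverse(x, lambd=1.0):
    return GradReverse.apply(x, lambd)

# Illustrative stand-ins; in the paper's setting these would be initialized
# from the pre-trained source model.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
label_classifier = nn.Linear(256, 31)   # e.g. 31 classes as in Office-31
domain_discriminator = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2))

params = (list(feature_extractor.parameters())
          + list(label_classifier.parameters())
          + list(domain_discriminator.parameters()))
optimizer = torch.optim.SGD(params, lr=1e-3, momentum=0.9)
ce = nn.CrossEntropyLoss()

# Placeholder batches: gen_src_x stands in for diffusion-generated "source"
# images with their labels, tgt_x for unlabeled target images.
gen_src_x, gen_src_y = torch.randn(16, 3, 32, 32), torch.randint(0, 31, (16,))
tgt_x = torch.randn(16, 3, 32, 32)

f_src = feature_extractor(gen_src_x)
f_tgt = feature_extractor(tgt_x)

# Supervised task loss on the generated source data.
cls_loss = ce(label_classifier(f_src), gen_src_y)

# Adversarial domain loss: the discriminator learns to tell the two domains
# apart, while the reversed gradients push the features to become indistinguishable.
dom_logits = domain_discriminator(grad_reverse(torch.cat([f_src, f_tgt]), lambd=0.1))
dom_labels = torch.cat([torch.zeros(16, dtype=torch.long), torch.ones(16, dtype=torch.long)])
dom_loss = ce(dom_logits, dom_labels)

optimizer.zero_grad()
(cls_loss + dom_loss).backward()
optimizer.step()
```

In the paper's setting, the feature extractor and label classifier would come from the pre-trained source model, and gen_src_x would be sampled from the fine-tuned diffusion model rather than being a random tensor.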
Related papers
- Style Adaptation for Domain-adaptive Semantic Segmentation [2.1365683052370046]
Domain discrepancy leads to a significant decrease in the performance of general network models trained on the source domain data when applied to the target domain.
We introduce a straightforward approach to mitigate the domain discrepancy, which necessitates no additional parameter calculations and seamlessly integrates with self-training-based UDA methods.
Our proposed method attains a UDA performance of 76.93 mIoU on the GTA->Cityscapes benchmark, an improvement of +1.03 percentage points over the previous state-of-the-art result.
arXiv Detail & Related papers (2024-04-25T02:51:55Z)
- Open-Set Domain Adaptation with Visual-Language Foundation Models [51.49854335102149]
Unsupervised domain adaptation (UDA) has proven to be very effective in transferring knowledge from a source domain to a target domain with unlabeled data.
Open-set domain adaptation (ODA) has emerged as a potential solution for identifying target-private (unknown) classes during the training phase.
arXiv Detail & Related papers (2023-07-30T11:38:46Z)
- SF-FSDA: Source-Free Few-Shot Domain Adaptive Object Detection with Efficient Labeled Data Factory [94.11898696478683]
Domain adaptive object detection aims to leverage the knowledge learned from a labeled source domain to improve the performance on an unlabeled target domain.
We propose and investigate a more practical and challenging domain adaptive object detection problem under both source-free and few-shot conditions, named SF-FSDA.
arXiv Detail & Related papers (2023-06-07T12:34:55Z)
- Domain Alignment Meets Fully Test-Time Adaptation [24.546705919244936]
A foundational requirement of a deployed ML model is to generalize to data drawn from a testing distribution that differs from the training distribution.
In this paper, we focus on a challenging variant of this problem, where access to the original source data is restricted.
We propose a new approach, CATTAn, that bridges UDA and FTTA by relaxing the need to access the entire source data.
arXiv Detail & Related papers (2022-07-09T03:17:19Z)
- Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distributions are different.
Recently, Source-Free Domain Adaptation (SFDA), which tries to tackle the domain adaptation problem without using source data, has drawn much attention.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z)
- Instance Relation Graph Guided Source-Free Domain Adaptive Object Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model for the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z)
- Instance Level Affinity-Based Transfer for Unsupervised Domain Adaptation [74.71931918541748]
We propose an instance affinity based criterion for source to target transfer during adaptation, called ILA-DA.
We first propose a reliable and efficient method to extract similar and dissimilar samples across source and target, and utilize a multi-sample contrastive loss to drive the domain alignment process.
We verify the effectiveness of ILA-DA by observing consistent improvements in accuracy over popular domain adaptation approaches on a variety of benchmark datasets (a generic sketch of such a contrastive alignment loss appears after this list).
arXiv Detail & Related papers (2021-04-03T01:33:14Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available, and investigates how such a model can be effectively utilized without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
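The final entry above (Source Hypothesis Transfer, SHOT) asks how a trained source model can be adapted without any source data. A minimal sketch of its core information-maximization idea follows: the source classifier (the "hypothesis") is frozen while the feature extractor is updated with an entropy term and a diversity term on unlabeled target predictions. The network definitions and the random target batch are illustrative placeholders, and the full method additionally uses self-supervised pseudo-labeling.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative source model: feature extractor + classifier ("hypothesis").
# In the source-free setting only these trained weights are available.
feature_extractor = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 256), nn.ReLU())
classifier = nn.Linear(256, 31)

# Freeze the source hypothesis; adapt only the feature extractor.
for p in classifier.parameters():
    p.requires_grad_(False)
optimizer = torch.optim.SGD(feature_extractor.parameters(), lr=1e-3, momentum=0.9)

# Placeholder for a batch of unlabeled target images.
tgt_x = torch.randn(32, 3, 32, 32)

probs = F.softmax(classifier(feature_extractor(tgt_x)), dim=1)

# Entropy term: make each target prediction confident.
ent_loss = -(probs * torch.log(probs + 1e-6)).sum(dim=1).mean()

# Diversity term: keep the batch-level prediction distribution spread out,
# preventing trivial collapse onto a single class.
mean_probs = probs.mean(dim=0)
div_loss = (mean_probs * torch.log(mean_probs + 1e-6)).sum()

optimizer.zero_grad()
(ent_loss + div_loss).backward()
optimizer.step()
```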
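The ILA-DA entry further up mentions a multi-sample contrastive loss over similar and dissimilar samples drawn across source and target. The sketch below is a generic simplification of that idea (a supervised-contrastive-style loss driven by a similarity mask), not ILA-DA's exact affinity criterion; the feature dimensions, the pseudo-label-based mask, and the random inputs are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def multi_sample_contrastive_loss(src_feats, tgt_feats, pos_mask, temperature=0.1):
    """Generic cross-domain contrastive loss: each target feature is pulled
    towards the source features marked as similar in pos_mask and pushed away
    from the rest. pos_mask[i, j] = 1 means target i and source j are deemed
    similar (e.g. by pseudo-label agreement)."""
    src = F.normalize(src_feats, dim=1)
    tgt = F.normalize(tgt_feats, dim=1)
    logits = tgt @ src.t() / temperature             # (n_tgt, n_src) similarities
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)
    pos_count = pos_mask.sum(dim=1).clamp(min=1)     # avoid division by zero
    loss = -(pos_mask * log_prob).sum(dim=1) / pos_count
    return loss.mean()

# Illustrative usage with random features and a pseudo-label-based mask.
src_feats, tgt_feats = torch.randn(16, 256), torch.randn(16, 256)
src_labels = torch.randint(0, 31, (16,))
tgt_pseudo = torch.randint(0, 31, (16,))
pos_mask = (tgt_pseudo.unsqueeze(1) == src_labels.unsqueeze(0)).float()
print(multi_sample_contrastive_loss(src_feats, tgt_feats, pos_mask))
```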