Unified Source-Free Domain Adaptation
- URL: http://arxiv.org/abs/2403.07601v1
- Date: Tue, 12 Mar 2024 12:40:08 GMT
- Title: Unified Source-Free Domain Adaptation
- Authors: Song Tang, Wenxin Su, Mao Ye, Jianwei Zhang and Xiatian Zhu
- Abstract summary: In pursuit of transferring a source model to a target domain without access to the source training data, Source-Free Domain Adaptation (SFDA) has been extensively explored.
We propose a novel approach called Latent Causal Factors Discovery (LCFD)
In contrast to previous alternatives that emphasize learning the statistical description of reality, we formulate LCFD from a causality perspective.
- Score: 44.95240684589647
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the pursuit of transferring a source model to a target domain without
access to the source training data, Source-Free Domain Adaptation (SFDA) has
been extensively explored across various scenarios, including closed-set,
open-set, partial-set, and generalized settings. Existing methods each focus
on a specific scenario: they address only a subset of the challenges and
require prior knowledge of the target domain, significantly limiting their
practical utility and deployability. In light of these considerations, we
introduce a more practical yet challenging problem, termed unified SFDA, which
comprehensively incorporates all specific scenarios in a unified manner. To
tackle this unified SFDA problem, we propose a novel approach called Latent
Causal Factors Discovery (LCFD). In contrast to previous alternatives that
emphasize learning the statistical description of reality, we formulate LCFD
from a causality perspective. The objective is to uncover the causal
relationships between latent variables and model decisions, enhancing the
reliability and robustness of the learned model against domain shifts. To
integrate extensive world knowledge, we leverage a pre-trained vision-language
model such as CLIP. This aids in the formation and discovery of latent causal
factors in the absence of supervision in the variation of distribution and
semantics, coupled with a newly designed information bottleneck with
theoretical guarantees. Extensive experiments demonstrate that LCFD can achieve
new state-of-the-art results in distinct SFDA settings, as well as source-free
out-of-distribution generalization. Our code and data are available at
https://github.com/tntek/source-free-domain-adaptation.
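The abstract says LCFD leverages a pre-trained vision-language model such as CLIP in the absence of supervision. As an illustration of the general idea (not the authors' actual method), here is a minimal numpy sketch of zero-shot pseudo-labeling: each target image is assigned the class whose text-prompt embedding is most similar. Random features stand in for real CLIP image/text embeddings, and all names and shapes are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    # Normalize rows to unit length so dot products become cosine similarities.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def zero_shot_pseudo_labels(image_feats, text_feats, temperature=0.01):
    """Assign each image the class whose text embedding is most similar.

    image_feats: (N, D) image embeddings; text_feats: (C, D) class-prompt
    embeddings (in a real pipeline these would come from CLIP's encoders).
    Returns hard pseudo-labels and a softmax confidence per image.
    """
    img = l2_normalize(image_feats)
    txt = l2_normalize(text_feats)
    logits = img @ txt.T / temperature           # (N, C) scaled cosine similarities
    logits -= logits.max(axis=1, keepdims=True)  # stabilize the softmax
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    return probs.argmax(axis=1), probs.max(axis=1)

# Toy stand-ins for CLIP embeddings: 8 images, 3 classes, 16-dim features.
text_feats = rng.normal(size=(3, 16))
true_labels = rng.integers(0, 3, size=8)
# Build image features near their class's text embedding, plus noise.
image_feats = text_feats[true_labels] + 0.1 * rng.normal(size=(8, 16))

labels, conf = zero_shot_pseudo_labels(image_feats, text_feats)
```

Such pseudo-labels (and their confidences) can then serve as weak supervision on the unlabeled target domain when the source data is unavailable.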
Related papers
- CoSDA: Continual Source-Free Domain Adaptation [78.47274343972904]
Without access to the source data, source-free domain adaptation (SFDA) transfers knowledge from a source-domain trained model to target domains.
Recently, SFDA has gained popularity due to the need to protect the data privacy of the source domain, but it suffers from catastrophic forgetting on the source domain due to the lack of data.
We propose a continual source-free domain adaptation approach named CoSDA, which employs a dual-speed optimized teacher-student model pair and is equipped with consistency learning capability.
arXiv Detail & Related papers (2023-04-13T15:53:23Z)
- A Comprehensive Survey on Source-free Domain Adaptation [69.17622123344327]
The research of Source-Free Domain Adaptation (SFDA) has drawn growing attention in recent years.
We provide a comprehensive survey of recent advances in SFDA and organize them into a unified categorization scheme.
We compare the results of more than 30 representative SFDA methods on three popular classification benchmarks.
arXiv Detail & Related papers (2023-02-23T06:32:09Z)
- Jacobian Norm for Unsupervised Source-Free Domain Adaptation [37.958884762225814]
Unsupervised source-free domain adaptation (USFDA) aims to transfer knowledge from a well-trained source model to a related but unlabeled target domain.
Existing USFDAs turn to transfer knowledge by aligning the target feature to the latent distribution hidden in the source model.
We propose a new perspective to boost their performance.
arXiv Detail & Related papers (2022-04-07T14:30:48Z)
- Learning Invariant Representation with Consistency and Diversity for Semi-supervised Source Hypothesis Transfer [46.68586555288172]
We propose a novel task named Semi-supervised Source Hypothesis Transfer (SSHT), which performs domain adaptation based on source trained model, to generalize well in target domain with a few supervisions.
We propose Consistency and Diversity Learning (CDL), a simple but effective framework for SSHT by facilitating prediction consistency between two randomly augmented unlabeled data.
Experimental results show that our method outperforms existing SSDA methods and unsupervised model adaptation methods on DomainNet, Office-Home and Office-31 datasets.
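The CDL summary above describes facilitating prediction consistency between two randomly augmented views of the same unlabeled image. A minimal numpy sketch of one common form of such a consistency term (an assumption for illustration, not necessarily CDL's exact loss):

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with max-subtraction for numerical stability.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def consistency_loss(logits_a, logits_b):
    """Mean squared error between the predicted class distributions of two
    augmented views, encouraging the model to predict the same label for
    both views of each unlabeled image."""
    pa, pb = softmax(logits_a), softmax(logits_b)
    return np.mean((pa - pb) ** 2)

rng = np.random.default_rng(1)
logits = rng.normal(size=(4, 5))
noise = 0.05 * rng.normal(size=(4, 5))

loss_same = consistency_loss(logits, logits)          # identical views: 0
loss_perturbed = consistency_loss(logits, logits + noise)  # perturbed view: > 0
```

Identical views incur zero loss, while disagreement between views is penalized, which is the driving signal when no target labels exist.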
arXiv Detail & Related papers (2021-07-07T04:14:24Z)
- Source-Free Domain Adaptation for Semantic Segmentation [11.722728148523366]
Unsupervised Domain Adaptation (UDA) can tackle the challenge that convolutional neural network-based approaches for semantic segmentation heavily rely on the pixel-level annotated data.
We propose a source-free domain adaptation framework for semantic segmentation, namely SFDA, in which only a well-trained source model and an unlabeled target domain dataset are available for adaptation.
arXiv Detail & Related papers (2021-03-30T14:14:29Z)
- Universal Source-Free Domain Adaptation [57.37520645827318]
We propose a novel two-stage learning process for domain adaptation.
In the Procurement stage, we aim to equip the model for future source-free deployment, assuming no prior knowledge of the upcoming category-gap and domain-shift.
In the Deployment stage, the goal is to design a unified adaptation algorithm capable of operating across a wide range of category-gaps.
arXiv Detail & Related papers (2020-04-09T07:26:20Z)
- Towards Inheritable Models for Open-Set Domain Adaptation [56.930641754944915]
We introduce a practical Domain Adaptation paradigm where a source-trained model is used to facilitate adaptation in the absence of the source dataset in future.
We present an objective way to quantify inheritability to enable the selection of the most suitable source model for a given target domain, even in the absence of the source data.
arXiv Detail & Related papers (2020-04-09T07:16:30Z)
- Do We Really Need to Access the Source Data? Source Hypothesis Transfer for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised Domain Adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require to access the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and how we can effectively utilize such a model without source data to solve UDA problems.
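A widely used objective in this source-data-free setting is information maximization: make each prediction confident while keeping the predicted classes diverse across the target set. The sketch below illustrates that objective in numpy; it is a generic illustration under that assumption, not this paper's verbatim formulation.

```python
import numpy as np

def softmax(z):
    # Row-wise softmax with max-subtraction for numerical stability.
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def information_maximization_loss(logits, eps=1e-12):
    """Mean per-sample prediction entropy minus entropy of the mean prediction.

    Minimizing this pushes each prediction toward a confident (low-entropy)
    one-hot, while the diversity term keeps overall class usage spread out,
    discouraging collapse of every sample onto a single class.
    """
    p = softmax(logits)
    cond_entropy = -np.mean(np.sum(p * np.log(p + eps), axis=1))
    marginal = p.mean(axis=0)
    marg_entropy = -np.sum(marginal * np.log(marginal + eps))
    return cond_entropy - marg_entropy

rng = np.random.default_rng(2)
# Near one-hot logits spread over 3 classes vs. near-uniform logits.
confident = 10.0 * np.eye(3)[rng.integers(0, 3, size=30)]
uncertain = rng.normal(scale=0.01, size=(30, 3))

loss_conf = information_maximization_loss(confident)
loss_unc = information_maximization_loss(uncertain)
```

Confident, class-diverse predictions yield a lower (more negative) loss than near-uniform ones, which is the adaptation signal when only the trained source model is available.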
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.