Source-free Domain Adaptation Requires Penalized Diversity
- URL: http://arxiv.org/abs/2304.02798v2
- Date: Wed, 12 Apr 2023 15:50:35 GMT
- Title: Source-free Domain Adaptation Requires Penalized Diversity
- Authors: Laya Rafiee Sevyeri, Ivaxi Sheth, Farhood Farahnak, Alexandre See,
Samira Ebrahimi Kahou, Thomas Fevens, Mohammad Havaei
- Abstract summary: Source-free domain adaptation (SFDA) was introduced to address knowledge transfer between different domains in the absence of source data.
In unsupervised SFDA, the diversity is limited to learning a single hypothesis on the source or learning multiple hypotheses with a shared feature extractor.
We propose a novel unsupervised SFDA algorithm that promotes representational diversity through the use of separate feature extractors.
- Score: 60.04618512479438
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While neural networks are capable of achieving human-like performance in many
tasks such as image classification, the impressive performance of each model is
limited to its own dataset. Source-free domain adaptation (SFDA) was introduced
to address knowledge transfer between different domains in the absence of
source data, thus increasing data privacy. Diversity in representation space
can be vital to a model's adaptability in varied and difficult domains. In
unsupervised SFDA, the diversity is limited to learning a single hypothesis on
the source or learning multiple hypotheses with a shared feature extractor.
Motivated by the improved predictive performance of ensembles, we propose a
novel unsupervised SFDA algorithm that promotes representational diversity
through the use of separate feature extractors with Distinct Backbone
Architectures (DBA). Although diversity in feature space is increased, the
unconstrained mutual information (MI) maximization may potentially introduce
amplification of weak hypotheses. Thus we introduce the Weak Hypothesis
Penalization (WHP) regularizer as a mitigation strategy. Our work proposes
Penalized Diversity (PD) where the synergy of DBA and WHP is applied to
unsupervised source-free domain adaptation for covariate shift. In addition, PD
is augmented with a weighted MI maximization objective for label distribution
shift. Empirical results on natural, synthetic, and medical domains demonstrate
the effectiveness of PD under different distributional shifts.
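The abstract's core ingredient, mutual-information (MI) maximization over target predictions combined with a down-weighting of weak hypotheses, can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: the `penalized_diversity_objective` weighting scheme and all function names here are assumptions for exposition.

```python
import numpy as np

def entropy(p, axis=-1, eps=1e-12):
    """Shannon entropy along the class axis."""
    return -np.sum(p * np.log(p + eps), axis=axis)

def info_max_objective(probs):
    """Information-maximization objective common in SFDA: maximize the
    entropy of the marginal prediction (class diversity) minus the mean
    per-sample entropy (confidence). probs: (batch, classes) softmax
    outputs of one hypothesis."""
    marginal = probs.mean(axis=0)
    return entropy(marginal) - entropy(probs, axis=-1).mean()

def penalized_diversity_objective(hyp_probs, weights):
    """Illustrative combination of per-hypothesis MI objectives with
    weights that penalize weak hypotheses (the actual WHP regularizer
    in the paper may differ). hyp_probs: list of (batch, classes)
    arrays, one per backbone; weights: per-hypothesis reliability."""
    scores = np.array([info_max_objective(p) for p in hyp_probs])
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()  # normalize so the weights form a convex combination
    return float(np.sum(w * scores))
```

Under this sketch, a confident-and-diverse hypothesis receives a high MI score, a uniform (uninformative) one scores near zero, and the weighting keeps a weak hypothesis from being amplified by the unconstrained MI term.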
Related papers
- Unveiling the Superior Paradigm: A Comparative Study of Source-Free Domain Adaptation and Unsupervised Domain Adaptation [52.36436121884317]
We show that Source-Free Domain Adaptation (SFDA) generally outperforms Unsupervised Domain Adaptation (UDA) in real-world scenarios.
SFDA offers advantages in time efficiency, storage requirements, targeted learning objectives, reduced risk of negative transfer, and increased robustness against overfitting.
We propose a novel weight estimation method that effectively integrates available source data into multi-SFDA approaches.
arXiv Detail & Related papers (2024-11-24T13:49:29Z) - Unified Source-Free Domain Adaptation [44.95240684589647]
In pursuit of transferring a source model to a target domain without access to the source training data, Source-Free Domain Adaptation (SFDA) has been extensively explored.
We propose a novel approach called Latent Causal Factors Discovery (LCFD)
In contrast to previous alternatives that emphasize learning the statistical description of reality, we formulate LCFD from a causality perspective.
arXiv Detail & Related papers (2024-03-12T12:40:08Z) - Subject-Based Domain Adaptation for Facial Expression Recognition [51.10374151948157]
Adapting a deep learning model to a specific target individual is a challenging task in facial expression recognition (FER).
This paper introduces a new MSDA method for subject-based domain adaptation in FER.
It efficiently leverages information from multiple source subjects to adapt a deep FER model to a single target individual.
arXiv Detail & Related papers (2023-12-09T18:40:37Z) - On the Connection between Invariant Learning and Adversarial Training
for Out-of-Distribution Generalization [14.233038052654484]
Deep learning models rely on spurious features and can fail catastrophically when generalizing to out-of-distribution (OOD) data.
Recent work shows that Invariant Risk Minimization (IRM) is only effective for a certain type of distribution shift while it fails for other cases.
We propose Domainwise Adversarial Training (DAT), an AT-inspired method for alleviating distribution shift by domain-specific perturbations.
arXiv Detail & Related papers (2022-12-18T13:13:44Z) - Identifiable Latent Causal Content for Domain Adaptation under Latent Covariate Shift [82.14087963690561]
Multi-source domain adaptation (MSDA) addresses the challenge of learning a label prediction function for an unlabeled target domain.
We present an intricate causal generative model by introducing latent noises across domains, along with a latent content variable and a latent style variable.
The proposed approach showcases exceptional performance and efficacy on both simulated and real-world datasets.
arXiv Detail & Related papers (2022-08-30T11:25:15Z) - Consistency and Diversity induced Human Motion Segmentation [231.36289425663702]
We propose a novel Consistency and Diversity induced human Motion (CDMS) algorithm.
Our model factorizes the source and target data into distinct multi-layer feature spaces.
A multi-mutual learning strategy is carried out to reduce the domain gap between the source and target data.
arXiv Detail & Related papers (2022-02-10T06:23:56Z) - Learning Invariant Representation with Consistency and Diversity for
Semi-supervised Source Hypothesis Transfer [46.68586555288172]
We propose a novel task named Semi-supervised Source Hypothesis Transfer (SSHT), which performs domain adaptation based on a source-trained model to generalize well in the target domain with only a few labeled target samples.
We propose Consistency and Diversity Learning (CDL), a simple but effective framework for SSHT that facilitates prediction consistency between two randomly augmented views of unlabeled data.
Experimental results show that our method outperforms existing SSDA methods and unsupervised model adaptation methods on DomainNet, Office-Home and Office-31 datasets.
arXiv Detail & Related papers (2021-07-07T04:14:24Z) - Dynamic Domain Adaptation for Efficient Inference [12.713628738434881]
Domain adaptation (DA) enables knowledge transfer from a labeled source domain to an unlabeled target domain.
Most prior DA approaches leverage complicated and powerful deep neural networks to improve the adaptation capacity.
We propose a dynamic domain adaptation (DDA) framework that simultaneously achieves adaptation and efficient target inference in low-resource scenarios.
arXiv Detail & Related papers (2021-03-26T08:53:16Z) - Adaptively-Accumulated Knowledge Transfer for Partial Domain Adaptation [66.74638960925854]
Partial domain adaptation (PDA) deals with a realistic and challenging problem in which the source domain label space subsumes the target domain label space.
We propose an Adaptively-Accumulated Knowledge Transfer framework (A$^2$KT) to align the relevant categories across two domains.
arXiv Detail & Related papers (2020-08-27T00:53:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.