On Balancing Bias and Variance in Unsupervised Multi-Source-Free Domain
Adaptation
- URL: http://arxiv.org/abs/2202.00796v3
- Date: Wed, 31 May 2023 15:46:27 GMT
- Title: On Balancing Bias and Variance in Unsupervised Multi-Source-Free Domain
Adaptation
- Authors: Maohao Shen, Yuheng Bu, Gregory Wornell
- Abstract summary: Methods for multi-source-free domain adaptation (MSFDA) typically train a target model using pseudo-labeled data produced by the source models.
We develop an information-theoretic bound on the generalization error of the resulting target model.
We then provide insights on how to balance this trade-off from three perspectives, including domain aggregation, selective pseudo-labeling, and joint feature alignment.
- Score: 6.2200089460762085
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to privacy, storage, and other constraints, there is a growing need for
unsupervised domain adaptation techniques in machine learning that do not
require access to the data used to train a collection of source models.
Existing methods for multi-source-free domain adaptation (MSFDA) typically
train a target model using pseudo-labeled data produced by the source models,
which focus on improving the pseudo-labeling techniques or proposing new
training objectives. Instead, we aim to analyze the fundamental limits of
MSFDA. In particular, we develop an information-theoretic bound on the
generalization error of the resulting target model, which illustrates an
inherent bias-variance trade-off. We then provide insights on how to balance
this trade-off from three perspectives, including domain aggregation, selective
pseudo-labeling, and joint feature alignment, which leads to the design of
novel algorithms. Experiments on multiple datasets validate our theoretical
analysis and demonstrate the state-of-the-art performance of the proposed
algorithm, especially on some of the most challenging datasets, including
Office-Home and DomainNet.
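The MSFDA setup the abstract describes, where a target model is trained on pseudo-labels produced by several source models, can be sketched as follows. This is an illustrative outline only: the aggregation weights, the averaging rule, and the confidence threshold are assumptions for exposition, not the paper's actual algorithm.

```python
import numpy as np

def aggregate_pseudo_labels(source_probs, weights):
    """Domain aggregation: combine per-source class probabilities
    into one pseudo-label distribution via a weighted average."""
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()  # normalize aggregation weights
    # source_probs has shape (num_sources, num_samples, num_classes)
    return np.tensordot(weights, source_probs, axes=1)

def select_confident(probs, threshold=0.8):
    """Selective pseudo-labeling: keep only samples whose aggregated
    confidence exceeds a threshold, trading lower label noise (bias)
    against fewer training samples (variance)."""
    conf = probs.max(axis=1)
    labels = probs.argmax(axis=1)
    mask = conf >= threshold
    return labels[mask], mask
```

For example, with two source models and equal weights, a sample predicted (0.9, 0.1) by one source and (0.8, 0.2) by the other aggregates to (0.85, 0.15) and passes a 0.8 threshold, while more ambiguous samples are filtered out.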
Related papers
- Non-stationary Domain Generalization: Theory and Algorithm [11.781050299571692]
In this paper, we study domain generalization in a non-stationary environment.
We first examine the impact of environmental non-stationarity on model performance.
Then, we propose a novel algorithm based on adaptive invariant representation learning.
arXiv Detail & Related papers (2024-05-10T21:32:43Z) - Subject-Based Domain Adaptation for Facial Expression Recognition [51.10374151948157]
Adapting a deep learning model to a specific target individual is a challenging facial expression recognition task.
This paper introduces a new MSDA method for subject-based domain adaptation in FER.
It efficiently leverages information from multiple source subjects to adapt a deep FER model to a single target individual.
arXiv Detail & Related papers (2023-12-09T18:40:37Z) - Consistency Regularization for Generalizable Source-free Domain
Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - A Novel Mix-normalization Method for Generalizable Multi-source Person
Re-identification [49.548815417844786]
Person re-identification (Re-ID) has achieved great success in the supervised scenario.
It is difficult to directly transfer the supervised model to arbitrary unseen domains due to the model overfitting to the seen source domains.
We propose MixNorm, which consists of domain-aware mix-normalization (DMN) and domain-aware center regularization (DCR).
arXiv Detail & Related papers (2022-01-24T18:09:38Z) - T-SVDNet: Exploring High-Order Prototypical Correlations for
Multi-Source Domain Adaptation [41.356774580308986]
We propose a novel approach named T-SVDNet to address the task of Multi-source Domain Adaptation.
High-order correlations among multiple domains and categories are fully explored so as to better bridge the domain gap.
To avoid negative transfer brought by noisy source data, we propose a novel uncertainty-aware weighting strategy.
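One generic way to realize the kind of uncertainty-aware source weighting this blurb mentions is to down-weight sources whose predictions are high-entropy. The sketch below is a stand-in for illustration; T-SVDNet's actual weighting strategy differs in its details.

```python
import numpy as np

def entropy_weights(probs, eps=1e-12):
    """Weight each source model inversely to the mean entropy of its
    predictions, so noisy (uncertain) sources contribute less and the
    risk of negative transfer is reduced."""
    # probs has shape (num_sources, num_samples, num_classes)
    ent = -(probs * np.log(probs + eps)).sum(axis=-1).mean(axis=-1)
    w = np.exp(-ent)  # lower entropy -> higher weight
    return w / w.sum()
```

A confident source (near one-hot outputs) thus receives a larger normalized weight than a source producing near-uniform predictions.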
arXiv Detail & Related papers (2021-07-30T06:33:05Z) - A Review of Single-Source Deep Unsupervised Visual Domain Adaptation [81.07994783143533]
Large-scale labeled training datasets have enabled deep neural networks to excel across a wide range of benchmark vision tasks.
In many applications, it is prohibitively expensive and time-consuming to obtain large quantities of labeled data.
To cope with limited labeled training data, many have attempted to directly apply models trained on a large-scale labeled source domain to another sparsely labeled or unlabeled target domain.
arXiv Detail & Related papers (2020-09-01T00:06:50Z) - Towards Inheritable Models for Open-Set Domain Adaptation [56.930641754944915]
We introduce a practical Domain Adaptation paradigm where a source-trained model is used to facilitate adaptation in the absence of the source dataset in future.
We present an objective way to quantify inheritability to enable the selection of the most suitable source model for a given target domain, even in the absence of the source data.
arXiv Detail & Related papers (2020-04-09T07:16:30Z) - Diversity-Based Generalization for Unsupervised Text Classification
under Domain Shift [16.522910268114504]
We propose a novel method for domain adaptation of single-task text classification problems based on a simple but effective idea of diversity-based generalization.
Our results demonstrate that machine learning architectures that ensure sufficient diversity can generalize better.
arXiv Detail & Related papers (2020-02-25T15:11:02Z) - Do We Really Need to Access the Source Data? Source Hypothesis Transfer
for Unsupervised Domain Adaptation [102.67010690592011]
Unsupervised domain adaptation (UDA) aims to leverage the knowledge learned from a labeled source dataset to solve similar tasks in a new unlabeled domain.
Prior UDA methods typically require access to the source data when learning to adapt the model.
This work tackles a practical setting where only a trained source model is available and how we can effectively utilize such a model without source data to solve UDA problems.
arXiv Detail & Related papers (2020-02-20T03:13:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.