Adaptive Test-Time Personalization for Federated Learning
- URL: http://arxiv.org/abs/2310.18816v1
- Date: Sat, 28 Oct 2023 20:42:47 GMT
- Title: Adaptive Test-Time Personalization for Federated Learning
- Authors: Wenxuan Bao, Tianxin Wei, Haohan Wang, Jingrui He
- Abstract summary: We introduce a novel setting called test-time personalized federated learning (TTPFL).
In TTPFL, clients locally adapt a global model in an unsupervised way without relying on any labeled data during test-time.
We propose a novel algorithm called ATP to adaptively learn the adaptation rates for each module in the model from distribution shifts among source domains.
- Score: 51.25437606915392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized federated learning algorithms have shown promising results in
adapting models to various distribution shifts. However, most of these methods
require labeled data on testing clients for personalization, which is usually
unavailable in real-world scenarios. In this paper, we introduce a novel
setting called test-time personalized federated learning (TTPFL), where clients
locally adapt a global model in an unsupervised way without relying on any
labeled data during test-time. While traditional test-time adaptation (TTA)
methods can be used in this scenario, most of them inherently assume that
training data come from a single domain, whereas in FL they come from
multiple clients (source domains)
with different distributions. Overlooking these domain interrelationships can
result in suboptimal generalization. Moreover, most TTA algorithms are designed
for a specific kind of distribution shift and lack the flexibility to handle
multiple kinds of distribution shifts in FL. In this paper, we find that this
lack of flexibility partially results from their pre-defining which modules to
adapt in the model. To tackle this challenge, we propose a novel algorithm
called ATP that adaptively learns the adaptation rates for each module in the
model from distribution shifts among source domains. Theoretical analysis
proves the strong generalization of ATP. Extensive experiments demonstrate its
superiority in handling various distribution shifts including label shift,
image corruptions, and domain shift, outperforming existing TTA methods across
multiple datasets and model architectures. Our code is available at
https://github.com/baowenxuan/ATP .
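As a rough illustration of the idea described above (learning a per-module adaptation rate instead of pre-defining which modules to adapt), here is a minimal sketch in PyTorch. The entropy objective, the module granularity, and all names are illustrative assumptions, not the authors' exact method; see the linked repository for the real implementation.

```python
# Sketch: unsupervised test-time adaptation where each module has its own
# learned adaptation rate. Modules whose rate is ~0 are effectively frozen,
# so *which* modules adapt is learned rather than pre-defined.
import torch
import torch.nn as nn
import torch.nn.functional as F

def entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    """Mean prediction entropy, a common unsupervised TTA objective."""
    log_probs = F.log_softmax(logits, dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1).mean()

def adapt_one_step(model: nn.Module, rates: dict, x: torch.Tensor) -> None:
    """One unsupervised adaptation step on an unlabeled test batch `x`.

    `rates` maps top-level module names to per-module step sizes; in ATP
    these would be learned from distribution shifts among source clients.
    """
    entropy_loss(model(x)).backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is not None:
                p -= rates.get(name.split(".")[0], 0.0) * p.grad
                p.grad = None

class TinyNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Linear(16, 32)
        self.head = nn.Linear(32, 10)

    def forward(self, x):
        return self.head(torch.relu(self.features(x)))

model = TinyNet()
rates = {"features": 1e-3, "head": 1e-4}  # illustrative values, not learned here
adapt_one_step(model, rates, torch.randn(8, 16))  # unlabeled test batch
```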
Related papers
- Test-Time Alignment via Hypothesis Reweighting [56.71167047381817]
Large pretrained models often struggle with underspecified tasks.
We propose a novel framework to address the challenge of aligning models to test-time user intent.
arXiv Detail & Related papers (2024-12-11T23:02:26Z)
- NuwaTS: a Foundation Model Mending Every Incomplete Time Series [24.768755438620666]
We present NuwaTS, a novel framework that repurposes pre-trained language models for general time series imputation.
NuwaTS can be applied to impute missing data across any domain.
We show that NuwaTS generalizes to other time series tasks, such as forecasting.
arXiv Detail & Related papers (2024-05-24T07:59:02Z)
- Dual-Personalizing Adapter for Federated Foundation Models [35.863585349109385]
We propose a Federated Dual-Personalizing Adapter architecture to handle test-time distribution shifts and personalization simultaneously.
The effectiveness of the proposed method has been evaluated on benchmark datasets across different NLP tasks.
arXiv Detail & Related papers (2024-03-28T08:19:33Z) - COMET: Contrastive Mean Teacher for Online Source-Free Universal Domain Adaptation [3.5139431332194198]
In real-world applications, there is often a domain shift from training to test data.
We introduce a Contrastive Mean Teacher (COMET) tailored to this novel scenario.
COMET yields state-of-the-art performance and proves to be consistent and robust across a variety of different scenarios.
arXiv Detail & Related papers (2024-01-31T10:47:25Z) - Consistency Regularization for Generalizable Source-free Domain
Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z) - A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [117.72709110877939]
Test-time adaptation (TTA) has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
We categorize TTA into several distinct groups based on the form of test data, namely, test-time domain adaptation, test-time batch adaptation, and online test-time adaptation.
arXiv Detail & Related papers (2023-03-27T16:32:21Z) - Federated Adaptive Prompt Tuning for Multi-Domain Collaborative Learning [44.604485649167216]
Federated learning (FL) enables multiple clients to collaboratively train a global model without disclosing their data.
We propose a federated adaptive prompt tuning algorithm, FedAPT, for multi-domain collaborative image classification.
arXiv Detail & Related papers (2022-11-15T03:10:05Z) - CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shifts by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner (a minimal sketch of such a loss follows this list).
arXiv Detail & Related papers (2022-06-01T03:02:07Z) - Test-Time Robust Personalization for Federated Learning [5.553167334488855]
Federated Learning (FL) is a machine learning paradigm where many clients collaboratively learn a shared global model with decentralized training data.
Personalized FL additionally adapts the global model to different clients, achieving promising results on consistent local training and test distributions.
We propose Federated Test-time Head Ensemble plus tuning (FedTHE+), which personalizes FL models with robustness to various test-time distribution shifts.
arXiv Detail & Related papers (2022-05-22T20:08:14Z) - Learning to Generalize across Domains on Single Test Samples [126.9447368941314]
We learn to generalize across domains on single test samples.
We formulate the adaptation to the single test sample as a variational Bayesian inference problem.
Our model achieves at least comparable -- and often better -- performance than state-of-the-art methods on multiple benchmarks for domain generalization.
arXiv Detail & Related papers (2022-02-16T13:21:04Z)
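For a concrete flavor of the class-aware alignment idea in the CAFA entry above, here is a minimal, hypothetical loss sketch in PyTorch. The pseudo-labeling, the confidence threshold, and the squared-distance form are assumptions for illustration; the paper's actual loss differs in its details.

```python
# Illustrative class-aware feature alignment: pull confident test features
# toward the source-class centroid of their pseudo-label. The centroids,
# threshold, and distance metric are assumptions, not the paper's exact loss.
import torch
import torch.nn.functional as F

def class_aware_alignment_loss(features: torch.Tensor,
                               logits: torch.Tensor,
                               source_means: torch.Tensor,
                               threshold: float = 0.9) -> torch.Tensor:
    """features: (B, D) test features; source_means: (C, D) class centroids."""
    conf, pseudo = F.softmax(logits, dim=1).max(dim=1)  # pseudo-label + confidence
    mask = conf > threshold                             # trust only confident samples
    if not mask.any():
        return features.new_zeros(())                   # nothing confident: zero loss
    return F.mse_loss(features[mask], source_means[pseudo[mask]])
```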
This list is automatically generated from the titles and abstracts of the papers on this site.