Adaptive Test-Time Personalization for Federated Learning
- URL: http://arxiv.org/abs/2310.18816v1
- Date: Sat, 28 Oct 2023 20:42:47 GMT
- Title: Adaptive Test-Time Personalization for Federated Learning
- Authors: Wenxuan Bao, Tianxin Wei, Haohan Wang, Jingrui He
- Abstract summary: We introduce a novel setting called test-time personalized federated learning (TTPFL).
In TTPFL, clients locally adapt a global model in an unsupervised way without relying on any labeled data during test-time.
We propose a novel algorithm called ATP to adaptively learn the adaptation rates for each module in the model from distribution shifts among source domains.
- Score: 51.25437606915392
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Personalized federated learning algorithms have shown promising results in
adapting models to various distribution shifts. However, most of these methods
require labeled data on testing clients for personalization, which is usually
unavailable in real-world scenarios. In this paper, we introduce a novel
setting called test-time personalized federated learning (TTPFL), where clients
locally adapt a global model in an unsupervised way without relying on any
labeled data during test-time. While traditional test-time adaptation (TTA) methods can be used in this scenario, most of them inherently assume that training data come from a single domain, whereas in FL the data come from multiple clients (source domains) with different distributions. Overlooking these domain interrelationships can
result in suboptimal generalization. Moreover, most TTA algorithms are designed
for a specific kind of distribution shift and lack the flexibility to handle
multiple kinds of distribution shifts in FL. In this paper, we find that this
lack of flexibility partially results from pre-defining which modules of the model to adapt. To tackle this challenge, we propose a novel algorithm called ATP that adaptively learns the adaptation rate for each module in the
model from distribution shifts among source domains. Theoretical analysis
proves the strong generalization of ATP. Extensive experiments demonstrate its
superiority in handling various distribution shifts including label shift,
image corruptions, and domain shift, outperforming existing TTA methods across
multiple datasets and model architectures. Our code is available at
https://github.com/baowenxuan/ATP .
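The key mechanism in ATP is that each module of the network gets its own adaptation rate, learned from the shifts among source clients, so test-time updates move shift-sensitive modules a lot while leaving robust ones close to the global model. Below is a minimal PyTorch sketch of the test-time side only: the entropy-minimization objective is an assumption (the abstract does not state the unsupervised loss), the per-module rates are taken as given rather than meta-learned, and names such as `adapt_rates` and `test_time_personalize` are illustrative, not from the ATP codebase.

```python
import torch
import torch.nn.functional as F


def entropy_loss(logits: torch.Tensor) -> torch.Tensor:
    # Mean prediction entropy over the batch: a standard label-free
    # objective for test-time adaptation (an assumption here).
    log_probs = F.log_softmax(logits, dim=1)
    return -(log_probs.exp() * log_probs).sum(dim=1).mean()


def test_time_personalize(model: torch.nn.Module, unlabeled_loader,
                          adapt_rates: dict, steps: int = 1):
    # One optimizer parameter group per top-level module, so each module
    # moves at its own learned rate; a rate of zero freezes that module,
    # recovering methods that pre-define which modules to adapt.
    groups = [{"params": list(m.parameters()), "lr": adapt_rates[name]}
              for name, m in model.named_children()]
    optimizer = torch.optim.SGD(groups)
    model.train()
    for _ in range(steps):
        for x in unlabeled_loader:  # no labels used anywhere
            optimizer.zero_grad()
            entropy_loss(model(x)).backward()
            optimizer.step()
    return model
```

Pre-defining which modules to adapt, as in BatchNorm-only TTA methods, corresponds to hard-coding these rates to zero or one; ATP's point is to learn them from how the source domains shift relative to each other.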
Related papers
- Personalized Federated Learning via Feature Distribution Adaptation [3.410799378893257]
Federated learning (FL) is a distributed learning framework that leverages commonalities across client datasets to train a global model.
Personalized federated learning (PFL) addresses the mismatch between a single global model and heterogeneous client data by learning individual models tailored to each client.
We propose an algorithm, pFedFDA, that efficiently generates personalized models by adapting global generative classifiers to their local feature distributions.
arXiv Detail & Related papers (2024-11-01T03:03:52Z)
- NuwaTS: a Foundation Model Mending Every Incomplete Time Series [24.768755438620666]
We present NuwaTS, a novel framework that repurposes pre-trained language models for general time series imputation.
NuwaTS can be applied to impute missing data across any domain.
We show that NuwaTS generalizes to other time series tasks, such as forecasting.
arXiv Detail & Related papers (2024-05-24T07:59:02Z)
- COMET: Contrastive Mean Teacher for Online Source-Free Universal Domain Adaptation [3.5139431332194198]
In real-world applications, there is often a domain shift from training to test data.
We introduce a Contrastive Mean Teacher (COMET) tailored to this novel scenario.
COMET yields state-of-the-art performance and proves to be consistent and robust across a variety of different scenarios.
arXiv Detail & Related papers (2024-01-31T10:47:25Z)
- Consistency Regularization for Generalizable Source-free Domain Adaptation [62.654883736925456]
Source-free domain adaptation (SFDA) aims to adapt a well-trained source model to an unlabelled target domain without accessing the source dataset.
Existing SFDA methods only assess their adapted models on the target training set, neglecting the data from unseen but identically distributed testing sets.
We propose a consistency regularization framework to develop a more generalizable SFDA method.
arXiv Detail & Related papers (2023-08-03T07:45:53Z)
- A Comprehensive Survey on Test-Time Adaptation under Distribution Shifts [143.14128737978342]
Test-time adaptation, an emerging paradigm, has the potential to adapt a pre-trained model to unlabeled data during testing, before making predictions.
Recent progress in this paradigm highlights the significant benefits of utilizing unlabeled data for training self-adapted models prior to inference.
arXiv Detail & Related papers (2023-03-27T16:32:21Z)
- Federated Adaptive Prompt Tuning for Multi-Domain Collaborative Learning [44.604485649167216]
Federated learning (FL) enables multiple clients to collaboratively train a global model without disclosing their data.
We propose a federated adaptive prompt tuning algorithm, FedAPT, for multi-domain collaborative image classification.
arXiv Detail & Related papers (2022-11-15T03:10:05Z)
- CAFA: Class-Aware Feature Alignment for Test-Time Adaptation [50.26963784271912]
Test-time adaptation (TTA) aims to address distribution shift by adapting a model to unlabeled data at test time.
We propose a simple yet effective feature alignment loss, termed Class-Aware Feature Alignment (CAFA), which encourages a model to learn target representations in a class-discriminative manner.
arXiv Detail & Related papers (2022-06-01T03:02:07Z)
- Test-Time Robust Personalization for Federated Learning [5.553167334488855]
Federated Learning (FL) is a machine learning paradigm where many clients collaboratively learn a shared global model with decentralized training data.
Personalized FL additionally adapts the global model to different clients, achieving promising results when local training and test distributions are consistent.
We propose Federated Test-time Head Ensemble plus tuning (FedTHE+), which personalizes FL models with robustness to various test-time distribution shifts.
arXiv Detail & Related papers (2022-05-22T20:08:14Z)
- Learning to Generalize across Domains on Single Test Samples [126.9447368941314]
We learn to generalize across domains on single test samples.
We formulate the adaptation to the single test sample as a variational Bayesian inference problem.
Our model achieves at least comparable -- and often better -- performance than state-of-the-art methods on multiple benchmarks for domain generalization.
arXiv Detail & Related papers (2022-02-16T13:21:04Z)
- Adaptive Risk Minimization: Learning to Adapt to Domain Shift [109.87561509436016]
A fundamental assumption of most machine learning algorithms is that the training and test data are drawn from the same underlying distribution.
In this work, we consider the problem setting of domain generalization, where the training data are structured into domains and there may be multiple test time shifts.
We introduce the framework of adaptive risk minimization (ARM), in which models are directly optimized for effective adaptation to shift by learning to adapt on the training domains.
arXiv Detail & Related papers (2020-07-06T17:59:30Z)
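For contrast with ATP's gradient-based test-time updates, below is a minimal sketch of the ARM idea in its context-network form (one of the variants in that paper): a context module summarizes an unlabeled batch from a domain, and the predictor conditions on that summary, so the pair learns during training to adapt from unlabeled data alone. The toy two-layer architecture, dimensions, and function names are assumptions for illustration, not the paper's models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ContextNet(nn.Module):
    # Summarizes an unlabeled batch into a single context vector.
    def __init__(self, in_dim: int, ctx_dim: int):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, ctx_dim), nn.ReLU())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Averaging over the batch makes adaptation label-free: the
        # context reflects the batch's domain, not any one example.
        return self.net(x).mean(dim=0, keepdim=True)


class AdaptivePredictor(nn.Module):
    # Classifier that conditions on the batch-level context.
    def __init__(self, in_dim: int, ctx_dim: int, n_classes: int):
        super().__init__()
        self.head = nn.Linear(in_dim + ctx_dim, n_classes)

    def forward(self, x: torch.Tensor, ctx: torch.Tensor) -> torch.Tensor:
        return self.head(torch.cat([x, ctx.expand(x.size(0), -1)], dim=1))


def arm_training_step(ctx_net, predictor, optimizer, x, y) -> float:
    # x, y come from a single training domain, so the context the
    # predictor sees during training reflects that domain's shift.
    ctx = ctx_net(x)
    loss = F.cross_entropy(predictor(x, ctx), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

At test time the same two forward passes run on an unlabeled batch from the new domain; no labels or gradient steps are needed, which is what "learning to adapt" buys over hand-designed adaptation rules.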