Test-Time Robust Personalization for Federated Learning
- URL: http://arxiv.org/abs/2205.10920v4
- Date: Fri, 23 Jun 2023 14:50:16 GMT
- Title: Test-Time Robust Personalization for Federated Learning
- Authors: Liangze Jiang, Tao Lin
- Abstract summary: Federated Learning (FL) is a machine learning paradigm where many clients collaboratively learn a shared global model with decentralized training data.
Personalized FL additionally adapts the global model to different clients, achieving promising results on consistent local training and test distributions.
We propose Federated Test-time Head Ensemble plus tuning (FedTHE+), which personalizes FL models with robustness to various test-time distribution shifts.
- Score: 5.553167334488855
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a machine learning paradigm where many clients
collaboratively learn a shared global model with decentralized training data.
Personalized FL additionally adapts the global model to different clients,
achieving promising results on consistent local training and test
distributions. However, for real-world personalized FL applications, it is
crucial to go one step further: robustifying FL models under the evolving local
test set during deployment, where various distribution shifts can arise. In
this work, we identify the pitfalls of existing works under test-time
distribution shifts and propose Federated Test-time Head Ensemble plus
tuning (FedTHE+), which personalizes FL models with robustness to various
test-time distribution shifts. We illustrate the advancement of FedTHE+ (and
its computationally efficient variant FedTHE) over strong competitors, by
training various neural architectures (CNN, ResNet, and Transformer) on CIFAR10
and ImageNet with various test distributions. Along with this, we build a
benchmark for assessing the performance and robustness of personalized FL
methods during deployment. Code: https://github.com/LINs-lab/FedTHE.
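As a rough, hedged illustration of the head-ensemble idea: a frozen feature extractor feeds both a global head and a personalized head, and a per-sample mixing weight is tuned at test time without labels. The sketch below uses plain entropy minimization as the unsupervised objective; FedTHE's actual objective (and the additional unsupervised fine-tuning in FedTHE+) is richer, so treat the objective, names, and hyperparameters as illustrative assumptions.

```python
# Hedged sketch of a test-time two-head ensemble in the spirit of FedTHE.
import torch

def ensemble_predict(feat, global_head, personal_head, steps=10, lr=0.1):
    """Tune a per-sample ensemble weight at test time, then predict."""
    with torch.no_grad():                      # both heads stay frozen here
        p_g = global_head(feat).softmax(-1)    # global (generic) head
        p_p = personal_head(feat).softmax(-1)  # locally personalized head
    w = torch.zeros(feat.shape[0], 1, requires_grad=True)  # per-sample logit
    opt = torch.optim.SGD([w], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        a = torch.sigmoid(w)                   # mixing weight in (0, 1)
        probs = a * p_g + (1 - a) * p_p
        # Unsupervised surrogate objective: entropy of the ensembled prediction.
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(-1).mean()
        entropy.backward()
        opt.step()
    a = torch.sigmoid(w).detach()
    return a * p_g + (1 - a) * p_p

# Toy usage: 8 test samples with 32-dim features, 10 classes.
g_head, p_head = torch.nn.Linear(32, 10), torch.nn.Linear(32, 10)
print(ensemble_predict(torch.randn(8, 32), g_head, p_head).argmax(-1))
```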
Related papers
- Personalized Federated Learning via Feature Distribution Adaptation [3.410799378893257]
Federated learning (FL) is a distributed learning framework that leverages commonalities between distributed client datasets to train a global model.
However, a single global model may serve heterogeneous clients poorly; personalized federated learning (PFL) seeks to address this by learning individual models tailored to each client.
We propose an algorithm, pFedFDA, that efficiently generates personalized models by adapting global generative classifiers to their local feature distributions.
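A minimal sketch of the adaptation idea, assuming Gaussian class-conditional features: blend global class means with locally estimated ones. The interpolation coefficient `beta` and the identity-covariance (nearest-class-mean) decision rule are illustrative simplifications, not pFedFDA's actual estimators.

```python
# Hedged sketch of adapting a global generative (Gaussian) classifier to a
# client's local feature distribution, in the spirit of pFedFDA.
import numpy as np

def personalize_means(global_means, local_feats, local_labels, beta=0.5):
    """Blend global class means with locally estimated ones."""
    means = global_means.copy()
    for c in np.unique(local_labels):
        local_mean = local_feats[local_labels == c].mean(axis=0)
        means[c] = beta * global_means[c] + (1 - beta) * local_mean
    return means

def predict(feats, means):
    """Nearest-class-mean decision rule (identity covariance for brevity)."""
    d = ((feats[:, None, :] - means[None, :, :]) ** 2).sum(-1)
    return d.argmin(axis=1)

# Toy usage: 2 classes, 5-dim features.
rng = np.random.default_rng(0)
global_means = rng.normal(size=(2, 5))
X, y = rng.normal(size=(20, 5)), rng.integers(0, 2, size=20)
print(predict(X, personalize_means(global_means, X, y)))
```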
arXiv Detail & Related papers (2024-11-01T03:03:52Z)
- Multi-level Personalized Federated Learning on Heterogeneous and Long-Tailed Data [10.64629029156029]
We introduce an innovative personalized Federated Learning framework, Multi-level Personalized Federated Learning (MuPFL).
MuPFL integrates three pivotal modules: Biased Activation Value Dropout (BAVD), Adaptive Cluster-based Model Update (ACMU), and Prior Knowledge-assisted Fine-tuning (PKCF).
Experiments on diverse real-world datasets show that MuPFL consistently outperforms state-of-the-art baselines, even under extreme non-i.i.d. and long-tail conditions.
arXiv Detail & Related papers (2024-05-10T11:52:53Z)
- Adaptive Test-Time Personalization for Federated Learning [51.25437606915392]
We introduce a novel setting called test-time personalized federated learning (TTPFL).
In TTPFL, clients locally adapt a global model in an unsupervised way, without relying on any labeled data at test time.
We propose a novel algorithm called ATP to adaptively learn the adaptation rates for each module in the model from distribution shifts among source domains.
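A hedged sketch of module-wise adaptation rates: each module receives its own rate that scales an unsupervised test-time update (entropy minimization here). How ATP actually learns these rates from source-domain shifts is not shown; the rates below are placeholders.

```python
# Hedged sketch of per-module adaptation rates in the spirit of ATP.
import torch

model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 10)
)
# One learned rate per module; these values are arbitrary placeholders.
rates = {"0": 0.05, "2": 0.005}

def adapt_step(x):
    """One unsupervised test-time update, scaled per module."""
    logits = model(x)
    entropy = -(logits.softmax(-1) * logits.log_softmax(-1)).sum(-1).mean()
    entropy.backward()
    with torch.no_grad():
        for name, module in model.named_children():
            lr = rates.get(name, 0.0)          # unlisted modules stay frozen
            for p in module.parameters():
                if p.grad is not None:
                    p -= lr * p.grad
                    p.grad = None

adapt_step(torch.randn(8, 16))
print(model(torch.randn(8, 16)).argmax(-1))
```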
arXiv Detail & Related papers (2023-10-28T20:42:47Z)
- Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
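A minimal sketch of the visual-prompt mechanism, assuming an additive full-image prompt: each client learns a small perturbation applied to every input while the backbone is shared via the server. pFedPT's concrete prompt design may differ (e.g., padding-style prompts).

```python
# Hedged sketch of a client-side visual prompt in the spirit of pFedPT.
import torch

class PromptedModel(torch.nn.Module):
    def __init__(self, backbone, image_shape=(3, 32, 32)):
        super().__init__()
        self.backbone = backbone                                  # shared, aggregated by server
        self.prompt = torch.nn.Parameter(torch.zeros(image_shape))  # local, never aggregated

    def forward(self, x):
        return self.backbone(x + self.prompt)  # prompt-injected input

backbone = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 10))
client_model = PromptedModel(backbone)
print(client_model(torch.randn(4, 3, 32, 32)).shape)  # torch.Size([4, 10])
```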
arXiv Detail & Related papers (2023-03-15T15:02:15Z)
- Integrating Local Real Data with Global Gradient Prototypes for Classifier Re-Balancing in Federated Long-Tailed Learning [60.41501515192088]
Federated Learning (FL) has become a popular distributed learning paradigm that involves multiple clients training a global model collaboratively.
In the real world, data samples usually follow a long-tailed distribution, and FL on such decentralized, long-tailed data yields a poorly behaved global model.
In this work, we integrate local real data with global gradient prototypes to form locally balanced datasets.
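A heavily hedged sketch of one plausible reading of this mechanism: for classes a client holds data for, use real gradients; for missing tail classes, substitute server-side per-class gradient prototypes when re-balancing the classifier. The paper's exact construction of the local balanced datasets differs.

```python
# Hedged sketch of classifier re-balancing with gradient prototypes.
import numpy as np

def rebalance_step(W, local_grads, proto_grads, present, lr=0.1):
    """One classifier update mixing real and prototype gradients."""
    grad = np.where(present[:, None], local_grads, proto_grads)
    return W - lr * grad

num_classes, dim = 10, 32
W = np.zeros((num_classes, dim))
local_grads = np.random.randn(num_classes, dim)  # from local real data
proto_grads = np.random.randn(num_classes, dim)  # aggregated at the server
present = np.array([True] * 6 + [False] * 4)     # tail classes 6-9 missing locally
W = rebalance_step(W, local_grads, proto_grads, present)
print(W.shape)
```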
arXiv Detail & Related papers (2023-01-25T03:18:10Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL Games, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Multi-Model Federated Learning with Provable Guarantees [19.470024548995717]
Federated Learning (FL) is a variant of distributed learning where devices collaborate to learn a model without sharing their data with the central server or each other.
We refer to the process of training multiple independent models simultaneously in a federated setting, using a common pool of clients, as multi-model FL.
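A minimal sketch of the scheduling skeleton, assuming random client allocation: each round, the common client pool is partitioned across the models and each model is updated FedAvg-style from its shard. The paper's allocation policies with provable guarantees are more refined.

```python
# Hedged sketch of one round of multi-model FL over a shared client pool.
import random

def multi_model_round(models, clients, local_update, aggregate):
    """Partition clients across models, then update each model from its shard."""
    random.shuffle(clients)
    shards = [clients[i::len(models)] for i in range(len(models))]
    for m, shard in enumerate(shards):
        updates = [local_update(models[m], c) for c in shard]
        models[m] = aggregate(updates)
    return models

# Toy usage with scalar "models": each client pulls the model toward its value.
clients = [random.gauss(0, 1) for _ in range(12)]
local_update = lambda w, c: w + 0.5 * (c - w)
aggregate = lambda ws: sum(ws) / len(ws)
print(multi_model_round([0.0, 0.0, 0.0], clients, local_update, aggregate))
```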
arXiv Detail & Related papers (2022-07-09T19:47:52Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
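A hedged sketch of the mix step only: several narrow base models (their federated training is not shown) are combined at deployment, with each client averaging as many bases as its compute budget allows. The adversarial-robustness customization in the paper is omitted.

```python
# Hedged sketch of in-situ model-size customization in the spirit of Split-Mix.
import torch

bases = [torch.nn.Linear(16, 10) for _ in range(4)]  # narrow base models

def mixed_predict(x, budget):
    """Combine the first `budget` base models into one prediction."""
    logits = torch.stack([b(x) for b in bases[:budget]])
    return logits.mean(0)

x = torch.randn(2, 16)
print(mixed_predict(x, budget=1).argmax(-1))  # cheap, weaker
print(mixed_predict(x, budget=4).argmax(-1))  # costly, stronger
```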
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- Federated learning with hierarchical clustering of local updates to improve training on non-IID data [3.3517146652431378]
We show that learning a single joint model is often not optimal in the presence of certain types of non-IID data.
We present a modification to FL by introducing a hierarchical clustering step (FL+HC).
We show how FL+HC allows model training to converge in fewer communication rounds compared to FL without clustering.
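A minimal sketch of the clustering step, assuming flattened model deltas as the client representation: hierarchically cluster clients by update similarity, after which each cluster would train its own model. Distance metric, linkage, and threshold are the paper's tunable choices; the values here are arbitrary (requires scipy).

```python
# Hedged sketch of the hierarchical clustering step in FL+HC.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

def cluster_clients(client_updates, threshold):
    """client_updates: (num_clients, num_params) array of model deltas."""
    Z = linkage(client_updates, method="ward")        # agglomerative clustering
    return fcluster(Z, t=threshold, criterion="distance")

updates = np.random.randn(10, 50)          # toy per-client update vectors
print(cluster_clients(updates, threshold=8.0))  # cluster id per client
```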
arXiv Detail & Related papers (2020-04-24T15:16:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.