Personalizing or Not: Dynamically Personalized Federated Learning with
Incentives
- URL: http://arxiv.org/abs/2208.06192v1
- Date: Fri, 12 Aug 2022 09:51:20 GMT
- Title: Personalizing or Not: Dynamically Personalized Federated Learning with
Incentives
- Authors: Zichen Ma, Yu Lu, Wenye Li, Shuguang Cui
- Abstract summary: Personalized federated learning (FL) enables multiple clients to collaboratively learn personalized models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
- Score: 37.42347737911428
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized federated learning (FL) facilitates collaborations between
multiple clients to learn personalized models without sharing private data. The
mechanism mitigates the statistical heterogeneity commonly encountered in the
system, i.e., non-IID data over different clients. Existing personalized
algorithms generally assume all clients volunteer for personalization. However,
potential participants might still be reluctant to personalize models, since
the personalized models might not perform well; such clients would use the global model
instead. To avoid making unrealistic assumptions, we introduce the
personalization rate, measured as the fraction of clients willing to train
personalized models, into federated settings and propose DyPFL. This
dynamically personalized FL technique incentivizes clients to participate in
personalizing local models while allowing the adoption of the global model when
it performs better. We show that the algorithmic pipeline in DyPFL guarantees
good convergence performance, allowing it to outperform alternative
personalized methods in a broad range of conditions, including variation in
heterogeneity, number of clients, local epochs, and batch sizes.
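The decision rule at the heart of the abstract is easy to make concrete. Below is a minimal, hypothetical Python sketch of the dynamic choice DyPFL describes: each client compares its personalized model against the global model on held-out local data and keeps whichever scores higher, and the personalization rate is the fraction of clients that kept a personalized model. Function names, scores, and the evaluation protocol are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of the dynamic personalization choice in DyPFL:
# a client personalizes only when it actually helps on local data.
from typing import Callable, List

def choose_model(eval_fn: Callable[[str], float]) -> str:
    """Return 'personalized' or 'global', whichever scores higher locally."""
    return "personalized" if eval_fn("personalized") >= eval_fn("global") else "global"

def personalization_rate(choices: List[str]) -> float:
    """Fraction of clients that opted to keep a personalized model."""
    return sum(c == "personalized" for c in choices) / len(choices)

# Toy example: held-out accuracies for four clients (assumed numbers).
local_scores = [
    {"personalized": 0.91, "global": 0.88},  # personalization helps
    {"personalized": 0.70, "global": 0.83},  # global model is better
    {"personalized": 0.95, "global": 0.94},
    {"personalized": 0.60, "global": 0.75},
]
choices = [choose_model(lambda m, s=s: s[m]) for s in local_scores]
print(choices, personalization_rate(choices))  # rate is 0.5 here
```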
Related papers
- MAP: Model Aggregation and Personalization in Federated Learning with Incomplete Classes [49.22075916259368]
In some real-world applications, data samples are usually distributed on local devices.
In this paper, we focus on a special kind of non-IID setting where clients own incomplete classes.
Our proposed algorithm, named MAP, can simultaneously achieve the aggregation and personalization goals in FL.
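The abstract gives no algorithmic detail, so the snippet below only illustrates the incomplete-class setting itself: when averaging classifier weights, a client contributes to a class only if it owns that class. This masked-averaging rule is an assumption for illustration, not the MAP algorithm.

```python
# Illustrative sketch of aggregation under incomplete classes: a client's
# classifier row for class c is averaged only if the client owns class c.
# This is NOT the MAP algorithm, just a concrete rendering of the setting.
import numpy as np

num_classes, dim = 4, 3
# Client classifier weights, shape (num_classes, dim), plus owned-class sets.
clients = [
    (np.ones((num_classes, dim)) * 1.0, {0, 1}),     # owns classes 0 and 1
    (np.ones((num_classes, dim)) * 3.0, {1, 2, 3}),  # owns classes 1-3
]

agg = np.zeros((num_classes, dim))
counts = np.zeros(num_classes)
for weights, owned in clients:
    for c in owned:
        agg[c] += weights[c]
        counts[c] += 1
agg /= np.maximum(counts, 1)[:, None]  # avoid dividing rows no one owns
print(agg[:, 0])  # per-class averages: [1.0, 2.0, 3.0, 3.0]
```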
arXiv Detail & Related papers (2024-04-14T12:22:42Z)
- Efficient Model Personalization in Federated Learning via Client-Specific Prompt Generation [38.42808389088285]
Federated learning (FL) has emerged as a decentralized learning framework that trains models from multiple distributed clients without sharing their data, thereby preserving privacy.
We propose a novel personalized FL framework of client-specific Prompt Generation (pFedPG).
pFedPG learns to deploy a personalized prompt generator at the server for producing client-specific visual prompts that efficiently adapt frozen backbones to local data distributions.
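A minimal sketch of the stated architecture, with assumed shapes and names: the server holds a small generator that maps a learnable per-client descriptor to a visual prompt, and the prompt is added to inputs before a frozen backbone.

```python
# Minimal sketch of server-side client-specific prompt generation (pFedPG idea):
# a tiny generator maps a learnable per-client descriptor to a visual prompt,
# which is added to inputs of a frozen backbone. Shapes/names are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_desc, prompt_hw = 8, (4, 4)                    # descriptor dim, prompt size

W = rng.normal(size=(d_desc, prompt_hw[0] * prompt_hw[1]))  # generator (server)
descriptors = {cid: rng.normal(size=d_desc) for cid in ["client_a", "client_b"]}

def generate_prompt(cid: str) -> np.ndarray:
    """Server produces a client-specific prompt from its descriptor."""
    return (descriptors[cid] @ W).reshape(prompt_hw)

def frozen_backbone(x: np.ndarray) -> float:
    """Stand-in for a frozen feature extractor; never updated."""
    return float(x.mean())

x = rng.normal(size=prompt_hw)                   # a local input patch
out = frozen_backbone(x + generate_prompt("client_a"))  # prompt adapts the input
print(out)
```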
arXiv Detail & Related papers (2023-08-29T15:03:05Z)
- FedJETs: Efficient Just-In-Time Personalization with Federated Mixture of Experts [48.78037006856208]
FedJETs is a novel solution by using a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route the input to the most relevant expert(s).
Our approach can improve accuracy by up to 18% in state-of-the-art FL settings, while maintaining competitive zero-shot performance.
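The routing idea can be sketched in a few lines. In the snippet below, a softmax gate scores the experts per input and mixes the top-k; the gate parameters, shapes, and top-k mixing are assumptions standing in for FedJETs' actual gating function.

```python
# Sketch of MoE-style routing as described for FedJETs: a gating function
# scores experts per input and routes to the top-k. Names/shapes are assumed.
import numpy as np

rng = np.random.default_rng(1)
n_experts, dim, k = 4, 5, 2
gate_W = rng.normal(size=(dim, n_experts))       # gating parameters
experts = [rng.normal(size=(dim, dim)) for _ in range(n_experts)]

def route(x: np.ndarray) -> np.ndarray:
    """Softmax gate over experts, keep top-k, mix their outputs."""
    scores = x @ gate_W
    topk = np.argsort(scores)[-k:]               # indices of the k best experts
    w = np.exp(scores[topk] - scores[topk].max())
    w /= w.sum()
    return sum(wi * (x @ experts[i]) for wi, i in zip(w, topk))

print(route(rng.normal(size=dim)).shape)         # (5,)
```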
arXiv Detail & Related papers (2023-06-14T15:47:52Z)
- FedDWA: Personalized Federated Learning with Dynamic Weight Adjustment [20.72576355616359]
We propose a new PFL algorithm called FedDWA (Federated Learning with Dynamic Weight Adjustment) to address the problem.
FedDWA computes personalized aggregation weights based on collected models from clients.
We conduct extensive experiments using five real datasets and the results demonstrate that FedDWA can significantly reduce the communication traffic and achieve much higher model accuracy than the state-of-the-art approaches.
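The abstract says the aggregation weights are computed from the collected client models but not how, so the sketch below substitutes a simple similarity rule: each client's weights fall off with parameter distance to the other models. This softmax-over-distances rule is an assumption for illustration only, not FedDWA's actual weighting.

```python
# Hedged sketch of personalized aggregation weights computed from client
# models: weights decay with parameter distance between models.
import numpy as np

models = [np.array([1.0, 0.0]), np.array([1.1, 0.1]), np.array([5.0, 5.0])]

def personalized_weights(i: int, temp: float = 1.0) -> np.ndarray:
    """Softmax over negative distances from client i's model to all models."""
    d = np.array([np.linalg.norm(models[i] - m) for m in models])
    w = np.exp(-d / temp)
    return w / w.sum()

for i in range(len(models)):
    w = personalized_weights(i)
    personalized = sum(wi * m for wi, m in zip(w, models))
    print(i, np.round(w, 2), np.round(personalized, 2))  # per-client aggregate
```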
arXiv Detail & Related papers (2023-05-10T13:12:07Z)
- Efficient Personalized Federated Learning via Sparse Model-Adaptation [47.088124462925684]
Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data.
We propose pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models.
We show that pFedGate achieves superior global accuracy, individual accuracy and efficiency simultaneously over state-of-the-art methods.
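One way to picture sparse model-adaptation: a client-specific gate scores parameters and only the top-scoring fraction is kept, so each client trains and transmits a sparse sub-model. The thresholding scheme below is an assumed simplification, not pFedGate's actual gating layer.

```python
# Sketch of sparse model adaptation: a learned per-parameter gate keeps only
# the highest-scoring fraction of weights. Details are assumptions.
import numpy as np

rng = np.random.default_rng(2)
params = rng.normal(size=16)          # a flat view of one layer's weights
gate_scores = rng.normal(size=16)     # client-specific, learned in training
sparsity = 0.75                       # drop 75% of parameters

threshold = np.quantile(gate_scores, sparsity)
mask = (gate_scores >= threshold).astype(params.dtype)
sparse_params = params * mask         # the sub-model the client actually uses
print(int(mask.sum()), "of", mask.size, "weights kept")
```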
arXiv Detail & Related papers (2023-05-04T12:21:34Z)
- Visual Prompt Based Personalized Federated Learning [83.04104655903846]
We propose a novel PFL framework for image classification tasks, dubbed pFedPT, that leverages personalized visual prompts to implicitly represent local data distribution information of clients.
Experiments on the CIFAR10 and CIFAR100 datasets show that pFedPT outperforms several state-of-the-art (SOTA) PFL algorithms by a large margin in various settings.
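A toy rendering of a personalized visual prompt: each client keeps a small learnable pixel frame that is padded around its images before the shared model sees them. The frame-padding scheme and sizes are assumptions, not pFedPT's exact prompt design.

```python
# Sketch of a client-local visual prompt: a learnable pixel frame padded
# around each input image before the shared model processes it.
import numpy as np

img = np.zeros((28, 28))              # a local input image
pad = 2
prompt = np.random.default_rng(3).normal(size=(28 + 2 * pad, 28 + 2 * pad))

framed = prompt.copy()
framed[pad:-pad, pad:-pad] = img      # image in the center, prompt as a frame
print(framed.shape)                   # (32, 32): what the shared model receives
```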
arXiv Detail & Related papers (2023-03-15T15:02:15Z)
- Self-Aware Personalized Federated Learning [32.97492968378679]
We develop a self-aware personalized federated learning (FL) method inspired by Bayesian hierarchical models.
Our method uses uncertainty-driven local training steps and an uncertainty-driven aggregation rule instead of conventional local fine-tuning and sample-size-based aggregation.
With experimental studies on synthetic data, Amazon Alexa audio data, and public datasets such as MNIST, FEMNIST, CIFAR10, and Sent140, we show that our proposed method can achieve significantly improved personalization performance.
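The aggregation side of this idea can be illustrated with inverse-variance weighting: clients with lower estimated uncertainty receive more weight than a sample-count rule would give them. The inverse-variance rule below is an illustrative stand-in for the paper's uncertainty-driven rule, not its actual derivation.

```python
# Sketch of uncertainty-driven aggregation: instead of weighting client
# updates by sample count (as in FedAvg), weight them by inverse estimated
# uncertainty. The inverse-variance rule here is an illustrative assumption.
import numpy as np

updates = [np.array([1.0, 1.0]), np.array([3.0, 3.0])]
variances = np.array([0.1, 1.0])      # client-side uncertainty estimates

w = 1.0 / variances
w /= w.sum()                          # confident clients get more weight
aggregate = sum(wi * u for wi, u in zip(w, updates))
print(np.round(w, 3), aggregate)      # [0.909 0.091] [1.182 1.182]
```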
arXiv Detail & Related papers (2022-04-17T19:02:25Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to conventional federated learning in which each client federates with other relevant clients to obtain a stronger model tailored to client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
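A simplified sketch of federating with relevant clients only: client i weights each received model by how much it reduces i's local validation loss, clipping unhelpful models to zero. This is a first-order flavor of the idea with assumed details, not the paper's exact update rule.

```python
# Sketch of relevance-weighted federation: a peer's model earns weight in
# proportion to how much it lowers the client's own validation loss.
import numpy as np

def local_loss(theta: np.ndarray) -> float:
    """Client i's validation loss; here a toy quadratic around [1, 1]."""
    return float(np.sum((theta - np.array([1.0, 1.0])) ** 2))

theta_i = np.array([0.0, 0.0])
received = [np.array([0.9, 1.1]), np.array([4.0, 4.0])]  # peers' models

gains = np.array([max(local_loss(theta_i) - local_loss(t), 0.0) for t in received])
if gains.sum() > 0:
    w = gains / gains.sum()
    theta_i = sum(wi * t for wi, t in zip(w, received))
print(theta_i)  # moves toward the helpful peer, ignores the harmful one
```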
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
- Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
The experiments show that FML can achieve better performance than alternatives in typical federated learning settings.
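The generalized-plus-personalized training can be sketched as mutual distillation: each model's task loss is augmented with a KL term toward the other model's predictions. The toy losses and the coupling weight below are assumptions.

```python
# Sketch of the mutual-learning coupling in FML: a shared generalized model
# and a local personalized model exchange knowledge through a KL term added
# to each one's task loss. Logits and task losses below are toy stand-ins.
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

logits_global = np.array([2.0, 0.5, 0.1])   # generalized model on a batch
logits_local = np.array([1.0, 1.5, 0.2])    # personalized model, same batch
p, q = softmax(logits_global), softmax(logits_local)

alpha = 0.5                                  # strength of the mutual term
loss_local = 1.0 + alpha * kl(p, q)          # task loss (toy 1.0) + distillation
loss_global = 1.0 + alpha * kl(q, p)         # each model learns from the other
print(round(loss_local, 3), round(loss_global, 3))
```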
arXiv Detail & Related papers (2020-06-27T09:35:03Z)