Partially Personalized Federated Learning: Breaking the Curse of Data
Heterogeneity
- URL: http://arxiv.org/abs/2305.18285v1
- Date: Mon, 29 May 2023 17:54:50 GMT
- Title: Partially Personalized Federated Learning: Breaking the Curse of Data
Heterogeneity
- Authors: Konstantin Mishchenko, Rustem Islamov, Eduard Gorbunov, Samuel Horváth
- Abstract summary: We present a partially personalized formulation of Federated Learning (FL) that strikes a balance between the flexibility of personalization and cooperativeness of global training.
In our framework, we split the variables into global parameters, which are shared across all clients, and individual local parameters, which are kept private.
We prove that under the right split of parameters, it is possible to find global parameters that allow each client to fit their data perfectly, and refer to the obtained problem as overpersonalized.
- Score: 8.08257664697228
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a partially personalized formulation of Federated Learning (FL)
that strikes a balance between the flexibility of personalization and
cooperativeness of global training. In our framework, we split the variables
into global parameters, which are shared across all clients, and individual
local parameters, which are kept private. We prove that under the right split
of parameters, it is possible to find global parameters that allow each client
to fit their data perfectly, and refer to the obtained problem as
overpersonalized. For instance, the shared global parameters can be used to
learn good data representations, whereas the personalized layers are fine-tuned
for a specific client. Moreover, we present a simple algorithm for the
partially personalized formulation that offers significant benefits to all
clients. In particular, it breaks the curse of data heterogeneity in several
settings, such as training with local steps, asynchronous training, and
Byzantine-robust training.
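To make the parameter split concrete, here is a minimal, self-contained sketch of a partially personalized training loop: shared representation parameters are averaged by the server, while each client's head stays private. The linear model, the simultaneous gradient steps, and all names are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

# Illustrative sketch (not the paper's exact algorithm): variables are split
# into shared global parameters W and private per-client heads v_i. Only W
# is communicated and averaged; the heads never leave the clients.
rng = np.random.default_rng(0)
n_clients, dim, rep_dim, n_samples = 5, 10, 3, 40
lr, local_steps, rounds = 0.05, 5, 50

# Heterogeneous clients: a common representation, client-specific heads.
W_true = rng.normal(size=(rep_dim, dim))
data = []
for _ in range(n_clients):
    X = rng.normal(size=(n_samples, dim))
    v_true = rng.normal(size=rep_dim)
    data.append((X, X @ W_true.T @ v_true))

W = rng.normal(size=(rep_dim, dim)) * 0.1              # shared (global)
heads = [np.zeros(rep_dim) for _ in range(n_clients)]  # private (local)

for _ in range(rounds):
    W_updates = []
    for i, (X, y) in enumerate(data):
        Wi, v = W.copy(), heads[i]
        for _ in range(local_steps):            # local steps on client i
            r = X @ Wi.T @ v - y                # residuals of client i
            grad_v = Wi @ X.T @ r / n_samples
            grad_W = np.outer(v, r @ X) / n_samples
            v = v - lr * grad_v                 # update private head
            Wi = Wi - lr * grad_W               # update shared part locally
        heads[i] = v                            # head stays on the client
        W_updates.append(Wi)
    W = np.mean(W_updates, axis=0)              # server averages W only

losses = [float(np.mean((X @ W.T @ heads[i] - y) ** 2))
          for i, (X, y) in enumerate(data)]
print("per-client MSE:", np.round(losses, 4))
```

In the overpersonalized regime described in the abstract, such a split lets every client drive its own loss toward zero while still benefiting from the shared representation.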
Related papers
- Fine-Tuning Personalization in Federated Learning to Mitigate Adversarial Clients [8.773068878015856]
Federated learning (FL) is an appealing paradigm that allows a group of machines (a.k.a. clients) to learn collectively while keeping their data local.
We consider an FL setting where some clients can be adversarial, and we derive conditions under which full collaboration fails.
arXiv Detail & Related papers (2024-09-30T14:31:19Z) - Decoupling General and Personalized Knowledge in Federated Learning via Additive and Low-Rank Decomposition [26.218506124446826]
A key strategy of Personalized Federated Learning is to decouple general knowledge (shared among clients) from client-specific knowledge.
We introduce FedDecomp, a simple but effective PFL paradigm that employs additive parameter decomposition to address this issue.
Experimental results across multiple datasets and varying degrees of data heterogeneity demonstrate that FedDecomp outperforms state-of-the-art methods by up to 4.9%.
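The additive, low-rank decoupling described above can be sketched as follows; the layout and names here are assumptions for illustration, not FedDecomp's actual code.

```python
import numpy as np

# Illustrative sketch (not FedDecomp's actual code): each layer's weight is
# the sum of a shared full-rank matrix, aggregated by the server, and a
# private low-rank product A @ B that captures client-specific knowledge.
rng = np.random.default_rng(0)
d_out, d_in, rank = 8, 16, 2

W_shared = rng.normal(size=(d_out, d_in)) * 0.1  # general knowledge (shared)
A = rng.normal(size=(d_out, rank)) * 0.1         # client-specific factor
B = rng.normal(size=(rank, d_in)) * 0.1          # client-specific factor

def effective_weight(W_shared, A, B):
    """Personalized weight: shared part plus private low-rank correction."""
    return W_shared + A @ B

x = rng.normal(size=d_in)
print(effective_weight(W_shared, A, B) @ x)      # personalized layer output
```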
arXiv Detail & Related papers (2024-06-28T14:01:22Z) - Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z) - Personalized Federated Learning via Amortized Bayesian Meta-Learning [21.126405589760367]
We introduce a new perspective on personalized federated learning through Amortized Bayesian Meta-Learning.
Specifically, we propose a novel algorithm called FedABML, which employs hierarchical variational inference across clients.
Our theoretical analysis provides an upper bound on the average generalization error and guarantees the generalization performance on unseen data.
arXiv Detail & Related papers (2023-07-05T11:58:58Z) - FedJETs: Efficient Just-In-Time Personalization with Federated Mixture
of Experts [48.78037006856208]
FedJETs is a novel solution that uses a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route the input to the most relevant expert(s).
Our approach can improve accuracy by up to 18% in state-of-the-art FL settings, while maintaining competitive zero-shot performance.
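A toy sketch of the gating-based routing described above (the shapes, top-k routing, and names are my assumptions, not FedJETs' implementation):

```python
import numpy as np

# Toy sketch of MoE routing (names/shapes are assumptions, not FedJETs'
# code): a gating function scores each expert for an input and routes the
# input to the top-k, combining their outputs by softmax weight.
rng = np.random.default_rng(0)
n_experts, d_in, d_out, top_k = 4, 16, 10, 2

gate_W = rng.normal(size=(n_experts, d_in)) * 0.1
experts = [rng.normal(size=(d_out, d_in)) * 0.1 for _ in range(n_experts)]

def moe_forward(x):
    scores = gate_W @ x                          # one score per expert
    top = np.argsort(scores)[-top_k:]            # most relevant experts
    w = np.exp(scores[top])
    w = w / w.sum()                              # softmax over the top-k
    return sum(wi * (experts[i] @ x) for wi, i in zip(w, top))

x = rng.normal(size=d_in)
print(moe_forward(x).shape)  # (10,)
```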
arXiv Detail & Related papers (2023-06-14T15:47:52Z) - Collaborative Chinese Text Recognition with Personalized Federated
Learning [61.34060587461462]
In Chinese text recognition, it is often necessary for one organization to collect a large amount of data from similar organizations.
Due to the natural presence of private information in text data, such as addresses and phone numbers, different organizations are unwilling to share private data.
We introduce personalized federated learning (pFL) into the Chinese text recognition task and propose the pFedCR algorithm.
arXiv Detail & Related papers (2023-05-09T16:51:00Z) - Personalizing or Not: Dynamically Personalized Federated Learning with
Incentives [37.42347737911428]
Personalized federated learning (FL) enables clients to collaboratively learn personalized models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
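The adopt-the-better-model behavior mentioned above might reduce to a rule like this minimal sketch (validation-based selection is an assumption here, not DyPFL's exact incentive mechanism):

```python
# Hypothetical sketch: a client keeps its personalized model only if it
# beats the global model on held-out local data (not DyPFL's exact scheme).
def choose_model(global_model, personal_model, evaluate):
    """Return whichever model has the lower local validation loss."""
    if evaluate(personal_model) < evaluate(global_model):
        return personal_model
    return global_model

# Example with a dict of validation losses standing in for evaluation:
pick = choose_model("global", "personal", {"global": 0.42, "personal": 0.35}.get)
print(pick)  # -> "personal"
```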
arXiv Detail & Related papers (2022-08-12T09:51:20Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two key hurdles in this setting: straggling clients and the need for personalization.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
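A loose sketch of straggler-aware participation (purely illustrative; the paper's actual procedure is driven by its speedup theory, which this does not reproduce) could start rounds with the fastest clients and grow the cohort over time:

```python
import numpy as np

# Illustrative straggler-aware selection (not the paper's procedure):
# begin with the fastest clients, then enlarge the cohort as rounds pass,
# so slow clients never bottleneck early training.
rng = np.random.default_rng(0)
speeds = rng.uniform(0.1, 1.0, size=20)        # higher = faster client

def select_clients(round_idx, total_rounds, speeds):
    """Include the k fastest clients, with k growing over training."""
    k = max(2, int(len(speeds) * (round_idx + 1) / total_rounds))
    return np.argsort(speeds)[-k:]             # indices of the k fastest

print(select_clients(0, 10, speeds))   # early round: a few fast clients
print(select_clients(9, 10, speeds))   # final round: everyone participates
```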
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - Subspace Learning for Personalized Federated Optimization [7.475183117508927]
We propose a method to address the problem of personalized learning in AI systems.
We show that our method achieves consistent gains both in personalized and unseen client evaluation settings.
arXiv Detail & Related papers (2021-09-16T00:03:23Z) - Toward Understanding the Influence of Individual Clients in Federated
Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z) - Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model tailored to client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
This list is automatically generated from the titles and abstracts of the papers on this site.