Personalized Federated Learning for Heterogeneous Clients with Clustered
Knowledge Transfer
- URL: http://arxiv.org/abs/2109.08119v1
- Date: Thu, 16 Sep 2021 17:13:53 GMT
- Title: Personalized Federated Learning for Heterogeneous Clients with Clustered
Knowledge Transfer
- Authors: Yae Jee Cho, Jianyu Wang, Tarun Chiruvolu, Gauri Joshi
- Abstract summary: We propose PerFed-CKT, where clients can use heterogeneous models and do not directly communicate their model parameters.
We show that PerFed-CKT achieves high test accuracy with several orders of magnitude lower communication cost compared to the state-of-the-art personalized FL schemes.
- Score: 25.041895804188748
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Personalized federated learning (FL) aims to train model(s) that can perform
well for individual clients that are highly data and system heterogeneous. Most
work in personalized FL, however, assumes using the same model architecture at
all clients and increases the communication cost by sending/receiving models.
This may not be feasible for realistic scenarios of FL. In practice, clients
have highly heterogeneous system-capabilities and limited communication
resources. In our work, we propose a personalized FL framework, PerFed-CKT,
where clients can use heterogeneous model architectures and do not directly
communicate their model parameters. PerFed-CKT uses clustered co-distillation,
where clients use logits to transfer their knowledge to other clients that have
similar data-distributions. We theoretically show the convergence and
generalization properties of PerFed-CKT and empirically show that PerFed-CKT
achieves high test accuracy with several orders of magnitude lower
communication cost compared to the state-of-the-art personalized FL schemes.
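The clustered co-distillation loop described in the abstract can be illustrated with a minimal sketch: clients exchange only logits on a small shared proxy set, are grouped by the similarity of those predictions, and each client regularizes toward its cluster's average logits. The proxy set, the greedy clustering, and the squared-error distillation term below are assumptions made for illustration, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Hypothetical setup: 6 clients with heterogeneous (here: differently initialized linear)
# models, plus a small shared proxy set X_pub on which only logits are exchanged.
n_clients, n_classes, d = 6, 10, 20
X_pub = rng.normal(size=(32, d))                      # shared unlabeled proxy data
models = [rng.normal(size=(d, n_classes)) * 0.1 for _ in range(n_clients)]

def client_logits(k):
    """Logits of client k's model on the shared proxy set."""
    return X_pub @ models[k]

# 1) Cluster clients by similarity of their proxy-set predictions,
#    a stand-in for the "similar data-distributions" grouping in the abstract.
P = np.stack([softmax(client_logits(k)).ravel() for k in range(n_clients)])
sim = P @ P.T
clusters = [{0}, {1}]                                 # toy greedy assignment into 2 clusters
for k in range(2, n_clients):
    best = max(range(len(clusters)),
               key=lambda c: np.mean([sim[k, j] for j in clusters[c]]))
    clusters[best].add(k)

# 2) Co-distillation: each client regularizes toward its cluster's average logits,
#    so only logits (never model parameters) cross the network.
lam, lr = 0.5, 0.1
for cluster in clusters:
    avg_logits = np.mean([client_logits(k) for k in cluster], axis=0)
    for k in cluster:
        # gradient of ||f_k(X_pub) - avg_logits||^2 w.r.t. client k's weights
        grad = X_pub.T @ (client_logits(k) - avg_logits) / len(X_pub)
        models[k] -= lr * lam * grad                  # local cross-entropy term omitted

print("cluster sizes:", [len(c) for c in clusters])
```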
Related papers
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
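The summary only names the approach, so the sketch below shows generic federated adaptive optimization in the style of FedAdam, where the server treats the averaged client update as a pseudo-gradient for an Adam-like step; the client-side routine is a placeholder, and this is not necessarily the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
dim, n_clients = 50, 8
lr_server, beta1, beta2, eps = 0.1, 0.9, 0.99, 1e-8

w = np.zeros(dim)        # global model
m = np.zeros(dim)        # server first moment
v = np.zeros(dim)        # server second moment

def local_update(w_global):
    """Placeholder for a few local training steps on a client's private data."""
    return w_global - 0.01 * rng.normal(size=dim)

for rnd in range(5):
    # the average of client updates serves as a pseudo-gradient for the server optimizer
    delta = np.mean([local_update(w) - w for _ in range(n_clients)], axis=0)
    m = beta1 * m + (1 - beta1) * delta
    v = beta2 * v + (1 - beta2) * delta ** 2
    w = w + lr_server * m / (np.sqrt(v) + eps)        # Adam-style server step
    print(f"round {rnd}: ||w|| = {np.linalg.norm(w):.3f}")
```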
arXiv Detail & Related papers (2025-01-17T04:00:50Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
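FedAF is defined by contrast with the aggregate-then-adapt loop mentioned above; for reference, a minimal FedAvg-style sketch of that conventional loop (the one FedAF replaces) looks as follows, with the local training step left as a placeholder.

```python
import numpy as np

rng = np.random.default_rng(2)
dim, n_clients = 20, 5
sizes = rng.integers(50, 200, size=n_clients)         # heterogeneous local dataset sizes
w_global = np.zeros(dim)

def adapt_locally(w_start):
    """Placeholder for local training that starts from the aggregated global model."""
    return w_start - 0.05 * rng.normal(size=dim)

for rnd in range(3):
    # clients adapt from last round's aggregate ...
    local_models = [adapt_locally(w_global) for _ in range(n_clients)]
    # ... then the server aggregates, weighted by local data size
    weights = sizes / sizes.sum()
    w_global = np.sum([a * mdl for a, mdl in zip(weights, local_models)], axis=0)
    print(f"round {rnd}: ||w_global|| = {np.linalg.norm(w_global):.3f}")
```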
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - PFL-GAN: When Client Heterogeneity Meets Generative Models in
Personalized Federated Learning [55.930403371398114]
We propose a novel generative adversarial network (GAN) sharing and aggregation strategy for personalized federated learning (PFL).
PFL-GAN addresses client heterogeneity in different scenarios. More specifically, we first learn the similarity among clients and then develop a weighted collaborative data aggregation scheme.
Empirical results from rigorous experiments on several well-known datasets demonstrate the effectiveness of PFL-GAN.
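A minimal sketch of the "learn similarity, then aggregate with weights" idea follows; the cosine similarity over per-client summaries and the softmax weighting are assumptions, since the summary does not specify how PFL-GAN measures similarity or what exactly is aggregated.

```python
import numpy as np

rng = np.random.default_rng(3)
n_clients, feat_dim = 5, 16

# Hypothetical per-client summaries (e.g., statistics of local or generated data);
# how PFL-GAN actually measures similarity is not specified in the summary above.
summaries = rng.normal(size=(n_clients, feat_dim))
summaries /= np.linalg.norm(summaries, axis=1, keepdims=True)
sim = summaries @ summaries.T                          # cosine similarity between clients

# Turn similarities into per-client aggregation weights (row-wise softmax).
tau = 0.5
W = np.exp(sim / tau)
W /= W.sum(axis=1, keepdims=True)

# Weighted collaborative aggregation: client i mixes all clients' contributions
# (placeholder vectors here) according to its own row of weights.
contributions = rng.normal(size=(n_clients, 8))
personalized = W @ contributions                       # one aggregated result per client
print("aggregation weights for client 0:", np.round(W[0], 3))
```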
arXiv Detail & Related papers (2023-08-23T22:38:35Z) - FedJETs: Efficient Just-In-Time Personalization with Federated Mixture
of Experts [48.78037006856208]
FedJETs is a novel solution by using a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route the input to the most relevant expert(s).
Our approach can improve accuracy up to 18% in state-of-the-art FL settings, while maintaining competitive zero-shot performance.
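A minimal sketch of the Mixture-of-Experts routing described above: a gating function scores the experts for each input and the top-k most relevant experts produce the prediction. The linear experts, the gate parameters, and top_k=2 are placeholder choices, not FedJETs' actual architecture.

```python
import numpy as np

rng = np.random.default_rng(4)
d_in, n_classes, n_experts, top_k = 32, 10, 4, 2

# Placeholder linear experts (each specialized on a subset of classes in FedJETs)
# and a linear gating function.
experts = [rng.normal(size=(d_in, n_classes)) * 0.1 for _ in range(n_experts)]
gate_W = rng.normal(size=(d_in, n_experts)) * 0.1

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def route(x):
    """Send input x to the top-k most relevant experts and mix their logits."""
    scores = softmax(x @ gate_W)                       # relevance of each expert
    chosen = np.argsort(scores)[-top_k:]               # top-k routing
    mix = np.zeros(n_classes)
    for e in chosen:
        mix += scores[e] / scores[chosen].sum() * (x @ experts[e])
    return mix, chosen

x = rng.normal(size=d_in)
logits, chosen = route(x)
print("routed to experts:", chosen, "| predicted class:", int(np.argmax(logits)))
```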
arXiv Detail & Related papers (2023-06-14T15:47:52Z) - Efficient Personalized Federated Learning via Sparse Model-Adaptation [47.088124462925684]
Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data.
We propose pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models.
We show that pFedGate achieves superior global accuracy, individual accuracy and efficiency simultaneously over state-of-the-art methods.
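A minimal sketch of sparse model adaptation under the stated idea: each client keeps only the fraction of the shared model selected by its own gating scores. The random gating scores and the fixed sparsity level are placeholders; in pFedGate the gating is learned rather than drawn at random.

```python
import numpy as np

rng = np.random.default_rng(5)
dim, n_clients, keep_frac = 100, 4, 0.3                # keep ~30% of weights per client

w_global = rng.normal(size=dim)

# Hypothetical per-client gating scores; in pFedGate these come from a learned
# gating module rather than the random values used here.
gate_scores = rng.normal(size=(n_clients, dim))

def sparse_personalize(k):
    """Client k keeps only its top-scoring fraction of the global weights."""
    keep = int(keep_frac * dim)
    idx = np.argsort(gate_scores[k])[-keep:]
    mask = np.zeros(dim)
    mask[idx] = 1.0
    return mask * w_global, mask                       # sparse personalized model + mask

for k in range(n_clients):
    w_k, mask_k = sparse_personalize(k)
    print(f"client {k}: kept {int(mask_k.sum())}/{dim} weights")
```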
arXiv Detail & Related papers (2023-05-04T12:21:34Z) - FedGH: Heterogeneous Federated Learning with Generalized Global Header [16.26231633749833]
Federated learning (FL) is an emerging machine learning paradigm that allows multiple parties to train a shared model.
We propose a simple but effective Federated Global prediction Header (FedGH) approach.
FedGH trains a shared generalized global prediction header on representations produced by the heterogeneous feature extractors of clients' models.
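A minimal sketch of the shared-header idea, assuming clients with heterogeneous extractors upload class-wise average representations (never raw data or parameters) and the server fits one prediction header on them; the exact quantities FedGH exchanges may differ.

```python
import numpy as np

rng = np.random.default_rng(6)
n_clients, n_classes, rep_dim = 3, 5, 16

# Heterogeneous extractors (different input widths stand in for different
# architectures) that all map into a common rep_dim-sized representation space.
in_dims = [8, 12, 16]
extractors = [rng.normal(size=(in_dims[k], rep_dim)) * 0.3 for k in range(n_clients)]

# Each client uploads class-wise average representations (not raw data or weights).
uploads = []
for k in range(n_clients):
    X = rng.normal(size=(40, in_dims[k]))
    y = rng.integers(0, n_classes, size=40)
    reps = np.tanh(X @ extractors[k])
    for c in range(n_classes):
        if np.any(y == c):
            uploads.append((reps[y == c].mean(axis=0), c))

# The server trains one shared prediction header on all uploaded representations.
H = np.zeros((rep_dim, n_classes))
for _ in range(200):
    r, c = uploads[rng.integers(len(uploads))]
    p = np.exp(r @ H)
    p /= p.sum()
    H -= 0.1 * np.outer(r, p - np.eye(n_classes)[c])   # softmax cross-entropy step
print("shared global header shape:", H.shape)
```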
arXiv Detail & Related papers (2023-03-23T09:38:52Z) - FedCliP: Federated Learning with Client Pruning [3.796320380104124]
Federated learning (FL) is a newly emerging distributed learning paradigm.
One fundamental bottleneck in FL is the heavy communication overheads between the distributed clients and the central server.
We propose FedCliP, the first communication-efficient FL training framework from a macro perspective.
arXiv Detail & Related papers (2023-01-17T09:15:37Z) - PGFed: Personalize Each Client's Global Objective for Federated Learning [7.810284483002312]
We propose a novel personalized FL framework that enables each client to personalize its own global objective.
To avoid massive (O(N^2)) communication overhead and potential privacy leakage, each client's risk is estimated through a first-order approximation of other clients' adaptive risk aggregation.
Our experiments on four datasets under different federated settings show consistent improvements of PGFed over previous state-of-the-art methods.
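A minimal sketch of the first-order trick, assuming toy quadratic local risks and fixed aggregation weights alpha: other clients' risks are replaced by a first-order surrogate around their own models, so only risk values and gradients need to be shared rather than O(N^2) pairwise models. This is an illustration of the idea, not the paper's exact update rule.

```python
import numpy as np

rng = np.random.default_rng(7)
n_clients, dim = 4, 10

# Toy quadratic local risks R_j(w) = 0.5 * ||w - c_j||^2 so gradients are explicit.
centers = rng.normal(size=(n_clients, dim))
def risk(j, w):      return 0.5 * float(np.sum((w - centers[j]) ** 2))
def risk_grad(j, w): return w - centers[j]

w = rng.normal(size=(n_clients, dim)) * 0.1               # each client's current model
alpha = np.full((n_clients, n_clients), 1.0 / n_clients)  # risk-aggregation weights (fixed here)

def personalized_grad(i):
    """Gradient of client i's personalized global objective.

    Other clients' risks R_j(w_i) are replaced by the first-order surrogate
    R_j(w_j) + <grad R_j(w_j), w_i - w_j>, whose gradient in w_i is just
    grad R_j(w_j); only risk values and gradients need to be shared.
    """
    g = risk_grad(i, w[i])                                # exact own-risk gradient
    for j in range(n_clients):
        if j != i:
            g = g + alpha[i, j] * risk_grad(j, w[j])      # constant surrogate gradients
    return g

for _ in range(50):
    for i in range(n_clients):
        w[i] -= 0.05 * personalized_grad(i)
print("client 0 local risk:", round(risk(0, w[0]), 3))
```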
arXiv Detail & Related papers (2022-12-02T21:16:39Z) - FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z) - Heterogeneous Ensemble Knowledge Transfer for Training Large Models in
Federated Learning [22.310090483499035]
Federated learning (FL) enables edge-devices to collaboratively learn a model without disclosing their private data to a central aggregating server.
Most existing FL algorithms require models of identical architecture to be deployed across the clients and server.
We propose a novel ensemble knowledge transfer method named Fed-ET in which small models are trained on clients, and used to train a larger model at the server.
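A minimal sketch of the small-to-large ensemble transfer, assuming the server holds an unlabeled transfer set: the averaged soft predictions of the small client models act as a teacher, and a larger server model is distilled toward them. The model shapes and the plain averaging are placeholders, not Fed-ET's exact scheme.

```python
import numpy as np

rng = np.random.default_rng(8)
d, n_classes, n_clients = 12, 4, 5

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Small client models (linear) and one larger server model (2-layer net); the sizes
# are placeholders for "small" client models vs. a "larger" server model.
client_models = [rng.normal(size=(d, n_classes)) * 0.2 for _ in range(n_clients)]
W1 = rng.normal(size=(d, 64)) * 0.1
W2 = rng.normal(size=(64, n_classes)) * 0.1

X_transfer = rng.normal(size=(128, d))                 # unlabeled transfer set at the server

# Ensemble knowledge: average soft predictions of the client models on the transfer set.
teacher = np.mean([softmax(X_transfer @ M) for M in client_models], axis=0)

# Distill the ensemble into the larger server model with plain gradient steps on
# cross-entropy against the soft teacher targets.
for _ in range(300):
    h = np.tanh(X_transfer @ W1)
    p = softmax(h @ W2)
    gz = (p - teacher) / len(X_transfer)               # dLoss/dlogits
    gW2 = h.T @ gz
    gW1 = X_transfer.T @ ((gz @ W2.T) * (1 - h ** 2))
    W1 -= 0.5 * gW1
    W2 -= 0.5 * gW2

h = np.tanh(X_transfer @ W1)
loss = -(teacher * np.log(softmax(h @ W2) + 1e-9)).sum(axis=1).mean()
print("server-model distillation loss:", round(float(loss), 4))
```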
arXiv Detail & Related papers (2022-04-27T05:18:32Z) - Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
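A minimal sketch of per-client federation with relevant clients, assuming each client can evaluate others' models on its own validation objective and mixes in only those that reduce its loss; the quadratic losses and improvement-based weights below are illustrative, not the paper's exact first-order weighting.

```python
import numpy as np

rng = np.random.default_rng(9)
n_clients, dim = 4, 8

# Each client's objective: a quadratic placeholder for validation loss on its own data.
targets = rng.normal(size=(n_clients, dim))
def val_loss(i, w):
    return float(np.sum((w - targets[i]) ** 2))

models = [rng.normal(size=dim) for _ in range(n_clients)]

def personalize(i):
    """Client i mixes in other clients' models only where they lower its own
    validation loss, a simplified stand-in for the paper's first-order weighting."""
    base = val_loss(i, models[i])
    improvement = np.array([max(base - val_loss(i, mdl), 0.0) for mdl in models])
    if improvement.sum() == 0:
        return models[i]                               # nobody helps; keep own model
    weights = improvement / improvement.sum()
    return sum(a * mdl for a, mdl in zip(weights, models))

w0 = personalize(0)
print("client 0 loss before/after:",
      round(val_loss(0, models[0]), 3), round(val_loss(0, w0), 3))
```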
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.