No One Left Behind: Inclusive Federated Learning over Heterogeneous
Devices
- URL: http://arxiv.org/abs/2202.08036v1
- Date: Wed, 16 Feb 2022 13:03:27 GMT
- Title: No One Left Behind: Inclusive Federated Learning over Heterogeneous
Devices
- Authors: Ruixuan Liu, Fangzhao Wu, Chuhan Wu, Yanlin Wang, Lingjuan Lyu, Hong
Chen, Xing Xie
- Abstract summary: We propose InclusiveFL, a client-inclusive federated learning method for handling heterogeneous client devices.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
- Score: 79.16481453598266
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is an important paradigm for training global models
from decentralized data in a privacy-preserving way. Existing FL methods
usually assume the global model can be trained on any participating client.
However, in real applications, the devices of clients are usually
heterogeneous, and have different computing power. Although big models like
BERT have achieved huge success in AI, it is difficult to apply them to
heterogeneous FL with weak clients. Straightforward workarounds, such as removing
the weak clients or using a small model that fits all clients, lead to problems
such as under-representation of the dropped clients and inferior accuracy caused
by data loss or limited model capacity. In this work, we
propose InclusiveFL, a client-inclusive federated learning method to handle
this problem. The core idea of InclusiveFL is to assign models of different
sizes to clients with different computing capabilities, bigger models for
powerful clients and smaller ones for weak clients. We also propose an
effective method to share the knowledge among multiple local models with
different sizes. In this way, all the clients can participate in the model
learning in FL, and the final model can be big and powerful enough. Besides, we
propose a momentum knowledge distillation method to better transfer knowledge
in big models on powerful clients to the small models on weak clients.
Extensive experiments on many real-world benchmark datasets demonstrate the
effectiveness of the proposed method in learning accurate models from clients
with heterogeneous devices under the FL framework.
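To make the high-level idea concrete, below is a minimal sketch of layer-wise aggregation over sub-models of different depths, assuming (as an illustration, not the authors' released code) that weak clients train shallower stacks that share the bottom layers of the full model; the momentum knowledge distillation step is not shown, and all names are hypothetical.

```python
import torch
import torch.nn as nn

# Illustrative sketch only: clients hold sub-models of different depths that
# share their bottom layers; the server averages each layer over exactly the
# clients that hold it. The toy layer stack and names are assumptions.

def build_stack(num_layers, hidden=64):
    """Toy stand-in for a transformer encoder with `num_layers` blocks."""
    return nn.ModuleList([nn.Linear(hidden, hidden) for _ in range(num_layers)])

def aggregate_layerwise(client_stacks, client_weights):
    """Layer-wise federated averaging over heterogeneous-depth sub-models."""
    full_depth = max(len(s) for s in client_stacks)
    global_stack = build_stack(full_depth)
    for i in range(full_depth):
        holders = [(s[i], w) for s, w in zip(client_stacks, client_weights) if i < len(s)]
        norm = sum(w for _, w in holders)
        avg_state = {
            name: sum(layer.state_dict()[name] * (w / norm) for layer, w in holders)
            for name in holders[0][0].state_dict()
        }
        global_stack[i].load_state_dict(avg_state)
    return global_stack

# Example: one powerful client with 6 layers, two weak clients with 2 layers each.
clients = [build_stack(6), build_stack(2), build_stack(2)]
global_model = aggregate_layerwise(clients, client_weights=[1.0, 1.0, 1.0])
```

Under this reading, a weak client's update still shapes the shared bottom layers of the big model, which is the sense in which no client is left behind.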
Related papers
- Embracing Federated Learning: Enabling Weak Client Participation via Partial Model Training [21.89214794178211]
In Federated Learning (FL), clients may have weak devices that cannot train the full model or even hold it in their memory space.
We propose EmbracingFL, a general FL framework that allows all available clients to join the distributed training.
Our empirical study shows that EmbracingFL consistently achieves high accuracy, as if all clients were strong, outperforming state-of-the-art width-reduction methods.
arXiv Detail & Related papers (2024-06-21T13:19:29Z)
- Client-supervised Federated Learning: Towards One-model-for-all Personalization [28.574858341430858]
We propose a novel federated learning framework that learns a single robust global model whose performance on unseen/test clients is competitive with that of personalized models in the FL system.
Specifically, we design Client-Supervised Federated Learning (FedCS) to disentangle clients' bias in instances' latent representations so that the global model can learn both client-specific and client-agnostic knowledge.
arXiv Detail & Related papers (2024-03-28T15:29:19Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training updates.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- FedYolo: Augmenting Federated Learning with Pretrained Transformers [61.56476056444933]
In this work, we investigate pretrained transformers (PTF) to achieve on-device learning goals.
We show that larger scale shrinks the accuracy gaps between alternative approaches and improves robustness.
Finally, this approach enables clients to solve multiple unrelated tasks simultaneously with a single PTF.
arXiv Detail & Related papers (2023-07-10T21:08:52Z)
- Efficient Personalized Federated Learning via Sparse Model-Adaptation [47.088124462925684]
Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data.
We propose pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models.
We show that pFedGate achieves superior global accuracy, individual accuracy and efficiency simultaneously over state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T12:21:34Z)
- Closing the Gap between Client and Global Model Performance in Heterogeneous Federated Learning [2.1044900734651626]
We show how the chosen approach for training custom client models has an impact on the global model.
We propose a new approach that combines knowledge distillation (KD) and Learning without Forgetting (LwoF) to produce improved personalised models.
arXiv Detail & Related papers (2022-11-07T11:12:57Z)
- Federated Learning from Pre-Trained Models: A Contrastive Learning Approach [43.893267526525904]
Federated Learning (FL) is a machine learning paradigm that allows decentralized clients to learn collaboratively without sharing their private data.
Excessive computation and communication demands pose challenges to current FL frameworks.
We propose a lightweight framework where clients jointly learn to fuse the representations generated by multiple fixed pre-trained models.
arXiv Detail & Related papers (2022-09-21T03:16:57Z)
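The preceding entry describes clients fusing representations from multiple fixed pre-trained models; below is a minimal, hypothetical sketch in which each client trains only a small fusion head over frozen encoders, so only that lightweight head would need to be communicated. The dimensions, the averaging fusion, and all names are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

# Hypothetical sketch: frozen pre-trained encoders plus a small trainable
# fusion head per client; the head is the only part exchanged in FL.

class FusionHead(nn.Module):
    def __init__(self, encoder_dims, proj_dim=128, num_classes=10):
        super().__init__()
        self.projections = nn.ModuleList([nn.Linear(d, proj_dim) for d in encoder_dims])
        self.classifier = nn.Linear(proj_dim, num_classes)

    def forward(self, encoder_outputs):
        # Project each frozen encoder's output and average the projections.
        fused = torch.stack([proj(z) for proj, z in zip(self.projections, encoder_outputs)]).mean(dim=0)
        return self.classifier(fused)

# Usage: two frozen stand-in encoders producing 512- and 768-dim features.
encoders = [nn.Linear(32, 512), nn.Linear(32, 768)]
for enc in encoders:
    enc.requires_grad_(False)

head = FusionHead([512, 768])
x = torch.randn(4, 32)
with torch.no_grad():
    feats = [enc(x) for enc in encoders]
logits = head(feats)  # only the fusion head's parameters are trainable
```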
- A Bayesian Federated Learning Framework with Online Laplace Approximation [144.7345013348257]
Federated learning allows multiple clients to collaboratively learn a globally shared model.
We propose a novel FL framework that uses online Laplace approximation to approximate posteriors on both the client and server side.
We achieve state-of-the-art results on several benchmarks, clearly demonstrating the advantages of the proposed method.
arXiv Detail & Related papers (2021-02-03T08:36:58Z)
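The following is a generic, hypothetical sketch of a diagonal Laplace approximation on a client and a precision-weighted fusion of client posteriors on the server; it illustrates the general technique named in the entry above, not the paper's exact algorithm, and all names and constants are assumptions.

```python
import numpy as np

# Hypothetical sketch: each client approximates its local posterior as a
# Gaussian centered at its trained parameters, with a diagonal precision built
# from curvature (here the empirical Fisher diagonal); the server fuses the
# client Gaussians by precision-weighted averaging.

def local_laplace_posterior(theta, per_sample_grads, prior_precision=1.0):
    """Diagonal Laplace approximation: mean = theta, precision = prior + Fisher diag."""
    fisher_diag = np.sum(per_sample_grads ** 2, axis=0)
    precision = prior_precision + fisher_diag
    return theta, precision

def server_aggregate(means, precisions):
    """Product of diagonal Gaussians: summed precision, precision-weighted mean."""
    precisions = np.stack(precisions)
    means = np.stack(means)
    global_precision = precisions.sum(axis=0)
    global_mean = (precisions * means).sum(axis=0) / global_precision
    return global_mean, global_precision

# Toy usage with 3 clients and a 5-dimensional parameter vector.
rng = np.random.default_rng(0)
posteriors = [local_laplace_posterior(rng.normal(size=5), rng.normal(size=(32, 5)))
              for _ in range(3)]
mu, prec = server_aggregate([m for m, _ in posteriors], [p for _, p in posteriors])
```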
- Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
Experiments show that FML achieves better performance than the alternatives in the typical federated learning setting.
arXiv Detail & Related papers (2020-06-27T09:35:03Z)
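A minimal sketch of the mutual-learning idea in the entry above: on each client, the model exchanged with the server and a personalized local model teach each other through KL terms added to their supervised losses. The two-model setup, the detaching of teacher outputs, and the weighting are illustrative assumptions rather than the paper's exact training recipe.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch: one local training step of federated mutual learning.
# `shared_model` is exchanged with the server, `personal_model` stays on the
# client; each adds a KL term toward the other's (detached) predictions.

def mutual_step(shared_model, personal_model, x, y, opt_shared, opt_personal, alpha=0.5):
    logits_s = shared_model(x)
    logits_p = personal_model(x)

    # Each model treats the other as a fixed teacher for this step.
    kl_s = F.kl_div(F.log_softmax(logits_s, dim=1),
                    F.softmax(logits_p.detach(), dim=1), reduction="batchmean")
    kl_p = F.kl_div(F.log_softmax(logits_p, dim=1),
                    F.softmax(logits_s.detach(), dim=1), reduction="batchmean")

    loss_s = F.cross_entropy(logits_s, y) + alpha * kl_s
    loss_p = F.cross_entropy(logits_p, y) + alpha * kl_p

    opt_shared.zero_grad(); loss_s.backward(); opt_shared.step()
    opt_personal.zero_grad(); loss_p.backward(); opt_personal.step()
    return loss_s.item(), loss_p.item()
```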
- Ensemble Distillation for Robust Model Fusion in Federated Learning [72.61259487233214]
Federated Learning (FL) is a machine learning setting where many devices collaboratively train a shared model.
In most current training schemes, the central model is refined by averaging the server model's parameters with the updated parameters from the clients.
We propose ensemble distillation for model fusion, i.e., training the central classifier on unlabeled data to match the outputs of the client models.
arXiv Detail & Related papers (2020-06-12T14:49:47Z)
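To illustrate the fusion step described in the last entry, here is a hedged sketch of ensemble distillation: after the usual parameter averaging, the server model is refined so that its predictions on unlabeled data match the averaged client predictions. The loader that yields unlabeled batches, the temperature, and the optimizer are assumptions, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

# Hypothetical sketch of ensemble distillation for model fusion: the server
# model is trained so its predictions on unlabeled inputs match the averaged
# (ensembled) predictions of the client models.

def distill_server_model(server_model, client_models, unlabeled_loader,
                         steps=100, lr=1e-3, temperature=2.0):
    opt = torch.optim.Adam(server_model.parameters(), lr=lr)
    for m in client_models:
        m.eval()
    data_iter = iter(unlabeled_loader)  # assumed to yield batches of inputs only
    for _ in range(steps):
        try:
            x = next(data_iter)
        except StopIteration:
            data_iter = iter(unlabeled_loader)
            x = next(data_iter)
        with torch.no_grad():
            # Ensemble teacher: average client probabilities at temperature T.
            teacher = torch.stack([F.softmax(m(x) / temperature, dim=1)
                                   for m in client_models]).mean(dim=0)
        student_log_probs = F.log_softmax(server_model(x) / temperature, dim=1)
        loss = F.kl_div(student_log_probs, teacher, reduction="batchmean")
        opt.zero_grad(); loss.backward(); opt.step()
    return server_model
```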