FedGEMS: Federated Learning of Larger Server Models via Selective Knowledge Fusion
- URL: http://arxiv.org/abs/2110.11027v1
- Date: Thu, 21 Oct 2021 10:06:44 GMT
- Title: FedGEMS: Federated Learning of Larger Server Models via Selective Knowledge Fusion
- Authors: Sijie Cheng, Jingwen Wu, Yanghua Xiao, Yang Liu and Yang Liu
- Abstract summary: Federated Learning (FL) has emerged as a viable solution to learn a global model while keeping data private.
In this work, we investigate a novel paradigm to take advantage of a powerful server model to break through the model capacity constraints of FL.
- Score: 19.86388925556209
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Today data is often scattered among billions of resource-constrained edge
devices with security and privacy constraints. Federated Learning (FL) has
emerged as a viable solution to learn a global model while keeping data
private, but the model complexity of FL is constrained by the computation resources
of edge nodes. In this work, we investigate a novel paradigm that takes advantage
of a powerful server model to break through the model capacity constraints of FL. By
selectively learning from multiple teacher clients and from itself, the server model
develops in-depth knowledge and transfers that knowledge back to the clients
to boost their respective performance. Our proposed framework achieves
superior performance on both server and client models and provides several
advantages in a unified framework, including flexibility for heterogeneous
client architectures, robustness to poisoning attacks, and communication
efficiency between clients and the server. By effectively bridging FL with larger
server model training, our proposed paradigm paves the way for robust and
continual knowledge accumulation from distributed and private data.
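The abstract describes selective teacher-to-server knowledge fusion only at a high level. A minimal PyTorch sketch of the idea follows, assuming a labeled transfer set on the server; the selection rule (keep confident, correct teachers) and all function names are illustrative assumptions, not the paper's exact algorithm.

```python
# Hypothetical sketch of selective knowledge fusion: the server keeps only
# teacher (client) logits that are correct and confident on a labeled
# transfer sample, fuses them, and distills the result into itself.
import torch
import torch.nn.functional as F

def fuse_teacher_logits(client_logits, labels):
    """client_logits: [num_clients, batch, classes]; labels: [batch]."""
    preds = client_logits.argmax(dim=-1)                        # [C, B]
    correct = (preds == labels.unsqueeze(0)).float()            # drop wrong teachers
    conf = F.softmax(client_logits, dim=-1).max(dim=-1).values  # teacher confidence
    w = correct * conf
    w = w / w.sum(dim=0).clamp_min(1e-8)  # normalize; clamp handles samples with no correct teacher
    return (w.unsqueeze(-1) * client_logits).sum(dim=0)         # fused [B, classes]

def server_step(server_model, x, labels, client_logits, opt, T=3.0, alpha=0.5):
    opt.zero_grad()
    logits = server_model(x)
    teacher = fuse_teacher_logits(client_logits, labels)
    kd = F.kl_div(F.log_softmax(logits / T, dim=-1),
                  F.softmax(teacher / T, dim=-1),
                  reduction="batchmean") * T * T
    loss = alpha * kd + (1 - alpha) * F.cross_entropy(logits, labels)
    loss.backward()
    opt.step()
    return logits.detach()  # returned so clients can distill from the server in turn
```

Clients would then run the same KL-divergence distillation step against the returned server logits.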
Related papers
- Federated Learning with Flexible Architectures [12.800116749927266]
This paper introduces Federated Learning with Flexible Architectures (FedFA), an FL training algorithm that allows clients to train models of different widths and depths.
FedFA incorporates the layer grafting technique to align clients' local architectures with the largest network architecture in the FL system during model aggregation.
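The summary names layer grafting without detailing it. One plausible reading, sketched below under the assumption of depth-only heterogeneity (each client's layer names are a subset of the global model's), fills the layers a client lacks with the current global layers before averaging; all names are hypothetical, not FedFA's actual procedure.

```python
# Hypothetical sketch: align heterogeneous client models with the largest
# architecture by "grafting" global layers into the slots a client lacks,
# then average parameters layer by layer.
import torch

def graft(client_state, global_state):
    """Return a full-size state dict: the client's layers where present,
    the global model's layers elsewhere."""
    return {name: client_state.get(name, param.clone())
            for name, param in global_state.items()}

def aggregate(client_states, global_state):
    grafted = [graft(cs, global_state) for cs in client_states]
    return {name: torch.stack([g[name] for g in grafted]).mean(dim=0)
            for name in global_state}
```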
arXiv Detail & Related papers (2024-06-14T09:44:46Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on effectively utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Training Heterogeneous Client Models using Knowledge Distillation in Serverless Federated Learning [0.5510212613486574]
Federated Learning (FL) is an emerging machine learning paradigm that enables the collaborative training of a shared global model across distributed clients.
Recent works on designing systems for efficient FL have shown that utilizing serverless computing technologies can enhance resource efficiency, reduce training costs, and alleviate the complex infrastructure management burden on data holders.
arXiv Detail & Related papers (2024-02-11T20:15:52Z)
- CLIP-guided Federated Learning on Heterogeneous and Long-Tailed Data [25.56641696086199]
Federated learning (FL) provides a decentralized machine learning paradigm where a server collaborates with a group of clients to learn a global model without accessing the clients' data.
We propose the CLIP-guided FL (CLIP2FL) method on heterogeneous and long-tailed data.
arXiv Detail & Related papers (2023-12-14T04:07:49Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
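A minimal sketch of such contrastive representation distillation follows, assuming client and peer/server embeddings of the same batch are available; the InfoNCE-style loss here is a generic stand-in, not the paper's exact formulation.

```python
# Hypothetical sketch: pull a client's representation of each sample toward
# the peer's representation of the same sample and away from the other
# samples in the batch (InfoNCE-style contrastive distillation).
import torch
import torch.nn.functional as F

def contrastive_distill(z_client, z_peer, tau=0.1):
    """z_client, z_peer: [batch, dim] representations of the same batch."""
    zc = F.normalize(z_client, dim=-1)
    zp = F.normalize(z_peer, dim=-1)
    logits = zc @ zp.t() / tau                             # [B, B] similarities
    targets = torch.arange(zc.size(0), device=zc.device)   # positives on the diagonal
    return F.cross_entropy(logits, targets)
```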
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Heterogeneous Ensemble Knowledge Transfer for Training Large Models in Federated Learning [22.310090483499035]
Federated learning (FL) enables edge-devices to collaboratively learn a model without disclosing their private data to a central aggregating server.
Most existing FL algorithms require models of identical architecture to be deployed across the clients and server.
We propose a novel ensemble knowledge transfer method named Fed-ET, in which small models are trained on clients and used to train a larger model at the server.
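A generic ensemble-distillation sketch of that server-side step, with plain logit averaging standing in for Fed-ET's actual consensus weighting (names and the unlabeled-transfer-data assumption are illustrative):

```python
# Hypothetical sketch: distill an ensemble of small client models into a
# larger server model on unlabeled transfer data.
import torch
import torch.nn.functional as F

def ensemble_distill_step(server_model, client_models, x, opt, T=2.0):
    with torch.no_grad():
        teacher = torch.stack([m(x) for m in client_models]).mean(dim=0)
    opt.zero_grad()
    loss = F.kl_div(F.log_softmax(server_model(x) / T, dim=-1),
                    F.softmax(teacher / T, dim=-1),
                    reduction="batchmean") * T * T
    loss.backward()
    opt.step()
    return loss.item()
```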
arXiv Detail & Related papers (2022-04-27T05:18:32Z)
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method that handles devices with heterogeneous computing capabilities.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
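A toy sketch of the two ingredients, under assumed capability tiers and name-matched layers; both the depth rule and the aggregation below are simplifications, not InclusiveFL's actual method.

```python
# Hypothetical sketch: give each client as many layers as its budget allows,
# then aggregate each parameter over only the clients that actually hold it.
import torch

def assign_depth(flops_budget, max_depth=12):
    """Map a client's compute budget to a model depth (illustrative tiers)."""
    if flops_budget >= 1e9:
        return max_depth
    return max(2, int(max_depth * flops_budget / 1e9))

def aggregate_heterogeneous(client_states):
    """Average each parameter over the subset of clients that have it."""
    names = {n for s in client_states for n in s}
    return {n: torch.stack([s[n] for s in client_states if n in s]).mean(dim=0)
            for n in names}
```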
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
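That correspondence can be made concrete: under a Gaussian prior over client-specific parameters, the hard E-step is MAP local training with a proximal pull toward the server parameters, and the M-step is a plain average of the client solutions, i.e., FedAvg-style aggregation. This is a sketch of the stated result, not the paper's full derivation.

```latex
% Hierarchical model: server parameters \theta act as a prior mean
% over client-specific parameters \phi_k
\phi_k \sim \mathcal{N}(\theta, \sigma^2 I), \qquad D_k \sim p(D_k \mid \phi_k)

% Hard E-step (MAP): local training with a proximal term toward \theta
\hat{\phi}_k = \arg\max_{\phi} \; \log p(D_k \mid \phi)
  - \frac{1}{2\sigma^2}\,\lVert \phi - \theta \rVert^2

% M-step: the Gaussian mean MLE is a plain average of the client
% solutions, i.e., FedAvg-style aggregation
\theta \leftarrow \frac{1}{K} \sum_{k=1}^{K} \hat{\phi}_k
```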
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Personalized Retrogress-Resilient Framework for Real-World Medical Federated Learning [8.240098954377794]
We propose a personalized retrogress-resilient framework to produce a superior personalized model for each client.
Our experiments on a real-world dermoscopic FL dataset demonstrate that our personalized retrogress-resilient framework outperforms state-of-the-art FL methods.
arXiv Detail & Related papers (2021-10-01T13:24:29Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework that integrates blockchain into FL, namely blockchain-assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
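A toy, self-contained simulation of one such round (models reduced to scalars, the mining race to a random draw; all of this is illustrative, not BLADE-FL's actual protocol):

```python
# Hypothetical, toy simulation of one BLADE-FL round as described above:
# each client broadcasts its model, one client wins block generation, and
# everyone aggregates the models recorded in the winning block.
import random
from dataclasses import dataclass

@dataclass
class Block:
    models: list  # models (here just floats) recorded by the block generator

def blade_fl_round(models, lazy=None):
    """models: one toy 'model' (a float) per client. Lazy clients broadcast
    a stale copy instead of training (illustrating the lazy-client issue)."""
    lazy = lazy or set()
    broadcast = [m if i in lazy else m + random.gauss(0, 0.1)  # "local training"
                 for i, m in enumerate(models)]
    winner = random.randrange(len(models))        # stand-in for the mining race
    block = Block(models=broadcast)               # block generated by `winner`
    agg = sum(block.models) / len(block.models)   # aggregate from the block
    return [agg] * len(models), winner

models, winner = blade_fl_round([0.0, 0.2, -0.1], lazy={2})
```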
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
The experiments show that FML can achieve better performance than alternatives in a typical federated learning setting.
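A minimal sketch of the mutual-learning step such a scheme could use, with each model adding a KL term toward the other's (detached) output; the loss weights and names are assumptions, not FML's exact formulation.

```python
# Hypothetical sketch of mutual learning between a shared ("meme") model and
# a personalized local model: each distills from the other's predictions.
import torch
import torch.nn.functional as F

def mutual_step(local, meme, x, y, opt_local, opt_meme, alpha=0.5, beta=0.5):
    opt_local.zero_grad(); opt_meme.zero_grad()
    out_l, out_m = local(x), meme(x)
    kl_l = F.kl_div(F.log_softmax(out_l, dim=-1),
                    F.softmax(out_m, dim=-1).detach(), reduction="batchmean")
    kl_m = F.kl_div(F.log_softmax(out_m, dim=-1),
                    F.softmax(out_l, dim=-1).detach(), reduction="batchmean")
    loss = (alpha * F.cross_entropy(out_l, y) + (1 - alpha) * kl_l
            + beta * F.cross_entropy(out_m, y) + (1 - beta) * kl_m)
    loss.backward()
    opt_local.step(); opt_meme.step()
```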
arXiv Detail & Related papers (2020-06-27T09:35:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.