Embracing Federated Learning: Enabling Weak Client Participation via Partial Model Training
- URL: http://arxiv.org/abs/2406.15125v1
- Date: Fri, 21 Jun 2024 13:19:29 GMT
- Title: Embracing Federated Learning: Enabling Weak Client Participation via Partial Model Training
- Authors: Sunwoo Lee, Tuo Zhang, Saurav Prakash, Yue Niu, Salman Avestimehr
- Abstract summary: In Federated Learning (FL), clients may have weak devices that cannot train the full model or even hold it in their memory space.
We propose EmbracingFL, a general FL framework that allows all available clients to join the distributed training.
Our empirical study shows that EmbracingFL consistently achieves high accuracy as if all clients were strong, outperforming the state-of-the-art width reduction methods.
- Score: 21.89214794178211
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Federated Learning (FL), clients may have weak devices that cannot train the full model or even hold it in their memory space. Thus, to implement large-scale FL applications, it is crucial to develop a distributed learning method that enables the participation of such weak clients. We propose EmbracingFL, a general FL framework that allows all available clients to join the distributed training regardless of their system resource capacity. The framework is built upon a novel form of partial model training method in which each client trains as many consecutive output-side layers as its system resources allow. Our study demonstrates that EmbracingFL encourages each layer to have similar data representations across clients, improving FL efficiency. The proposed partial model training method guarantees convergence to a neighborhood of a stationary point for non-convex and smooth problems. We evaluate the efficacy of EmbracingFL under a variety of settings with a mixed number of strong, moderate (~40% memory), and weak (~15% memory) clients, datasets (CIFAR-10, FEMNIST, and IMDB), and models (ResNet20, CNN, and LSTM). Our empirical study shows that EmbracingFL consistently achieves high accuracy as if all clients were strong, outperforming state-of-the-art width reduction methods (i.e., HeteroFL and FjORD).
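The core mechanism described above, each client training only as many consecutive output-side layers as its resources allow, can be illustrated with a short sketch. Below is a minimal PyTorch-style example assuming a model expressed as an ordered stack of blocks; the block split, the `configure_partial_training` helper, and the choice of how many blocks a weak client trains are illustrative assumptions, not code from the paper.
```python
import torch
import torch.nn as nn

# Assumed toy model: an ordered stack of blocks, so the "output-side layers"
# are simply the last k entries of the nn.Sequential.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(784, 256), nn.ReLU(),   # input-side blocks (frozen on weak clients)
    nn.Linear(256, 128), nn.ReLU(),
    nn.Linear(128, 10),               # output-side block (trained by every client)
)

def configure_partial_training(model: nn.Sequential, num_trainable_blocks: int) -> None:
    """Freeze everything except the last `num_trainable_blocks` output-side blocks.

    How many blocks a client trains would be chosen from its memory budget
    (weak clients train fewer); the exact policy here is an assumption.
    """
    blocks = list(model.children())
    cutoff = len(blocks) - num_trainable_blocks
    for i, block in enumerate(blocks):
        for p in block.parameters():
            p.requires_grad = i >= cutoff

# Example: a weak client trains only the last two blocks; a strong client would
# pass num_trainable_blocks=len(list(model.children())) to train all of them.
configure_partial_training(model, num_trainable_blocks=2)
optimizer = torch.optim.SGD([p for p in model.parameters() if p.requires_grad], lr=0.01)

x, y = torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,))
loss = nn.CrossEntropyLoss()(model(x), y)   # frozen blocks still run in the forward pass
loss.backward()                              # gradients exist only for the trainable blocks
optimizer.step()
# The client would then report only its trained output-side parameters to the server.
```
One natural server-side choice under this setup (an assumption here, consistent with the layer-wise view in the abstract) is to aggregate each layer only over the clients that actually trained it.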
Related papers
- Multi-level Personalized Federated Learning on Heterogeneous and Long-Tailed Data [10.64629029156029]
We introduce an innovative personalized Federated Learning framework, Multi-level Personalized Federated Learning (MuPFL).
MuPFL integrates three pivotal modules: Biased Activation Value Dropout (BAVD), Adaptive Cluster-based Model Update (ACMU), and Prior Knowledge-assisted Fine-tuning (PKCF).
Experiments on diverse real-world datasets show that MuPFL consistently outperforms state-of-the-art baselines, even under extreme non-i.i.d. and long-tail conditions.
arXiv Detail & Related papers (2024-05-10T11:52:53Z)
- An Element-Wise Weights Aggregation Method for Federated Learning [11.9232569348563]
This paper introduces an innovative Element-Wise Weights Aggregation Method for Federated Learning (EWWA-FL).
EWWA-FL aggregates local weights to the global model at the level of individual elements, allowing each participating client to make element-wise contributions to the learning process.
By taking into account the unique dataset characteristics of each client, EWWA-FL enhances the robustness of the global model to different datasets.
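As a rough illustration of the element-wise idea summarized above, the sketch below aggregates client models using a separate weight for every parameter element instead of a single scalar weight per client; the function name and the uniform toy weights are assumptions, and the normalized per-element weights are simply taken as given, since how EWWA-FL actually derives them is not described here.
```python
import torch

def elementwise_aggregate(client_states, client_weights):
    """Aggregate client models with one weight per parameter element.

    client_states: list of state_dicts with identical keys and shapes.
    client_weights: list of state_dicts of per-element weights, assumed to be
    normalized so that the weights for each element sum to 1 across clients.
    """
    global_state = {}
    for key in client_states[0]:
        stacked = torch.stack([s[key].float() for s in client_states])    # (num_clients, *shape)
        weights = torch.stack([w[key].float() for w in client_weights])   # (num_clients, *shape)
        global_state[key] = (weights * stacked).sum(dim=0)
    return global_state

# Toy usage: two clients with uniform per-element weights (reduces to plain averaging).
states = [{"w": torch.ones(2, 2)}, {"w": 3 * torch.ones(2, 2)}]
weights = [{"w": 0.5 * torch.ones(2, 2)}, {"w": 0.5 * torch.ones(2, 2)}]
print(elementwise_aggregate(states, weights)["w"])  # tensor filled with 2.0
```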
arXiv Detail & Related papers (2024-04-24T15:16:06Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
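For context on the cut-layer exchange mentioned above, here is a minimal single-process sketch of a vanilla Split Learning step: the client runs the layers up to the cut, hands the smashed data to the server, and receives the cut-layer gradient back to finish backpropagation. It illustrates the standard SL protocol as summarized here, not the contrastive-distillation approach this paper proposes, and the layer sizes and variable names are assumptions.
```python
import torch
import torch.nn as nn

# Assumed toy split: the client holds the layers up to the cut, the server holds the rest.
client_part = nn.Sequential(nn.Linear(32, 64), nn.ReLU())
server_part = nn.Sequential(nn.Linear(64, 10))
client_opt = torch.optim.SGD(client_part.parameters(), lr=0.1)
server_opt = torch.optim.SGD(server_part.parameters(), lr=0.1)

x, y = torch.randn(16, 32), torch.randint(0, 10, (16,))

# Client: forward up to the cut layer and "send" the smashed data (activations).
smashed = client_part(x)
sent = smashed.detach().requires_grad_(True)   # stands in for the network transfer

# Server: finish the forward pass, compute the loss, and backpropagate to the cut layer.
loss = nn.CrossEntropyLoss()(server_part(sent), y)
server_opt.zero_grad()
loss.backward()
server_opt.step()

# Client: receive the cut-layer gradient and finish backpropagation locally.
client_opt.zero_grad()
smashed.backward(sent.grad)
client_opt.step()
```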
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- A Fair Federated Learning Framework With Reinforcement Learning [23.675056844328]
Federated learning (FL) is a paradigm where many clients collaboratively train a model under the coordination of a central server.
We propose a reinforcement learning framework, called PG-FFL, which automatically learns a policy to assign aggregation weights to clients.
We conduct extensive experiments over diverse datasets to verify the effectiveness of our framework.
arXiv Detail & Related papers (2022-05-26T15:10:16Z)
- Efficient Split-Mix Federated Learning for On-Demand and In-Situ Customization [107.72786199113183]
Federated learning (FL) provides a distributed learning framework for multiple participants to collaborate learning without sharing raw data.
In this paper, we propose a novel Split-Mix FL strategy for heterogeneous participants that, once training is done, provides in-situ customization of model sizes and robustness.
arXiv Detail & Related papers (2022-03-18T04:58:34Z)
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method to handle heterogeneous client computing capabilities.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
- A Multi-agent Reinforcement Learning Approach for Efficient Client Selection in Federated Learning [17.55163940659976]
Federated learning (FL) is a training technique that enables client devices to jointly learn a shared model.
We design an efficient FL framework that jointly optimizes model accuracy, processing latency, and communication efficiency.
Experiments show that FedMarl can significantly improve model accuracy with much lower processing latency and communication cost.
arXiv Detail & Related papers (2022-01-09T05:55:17Z)
- Splitfed learning without client-side synchronization: Analyzing client-side split network portion size to overall performance [4.689140226545214]
Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are three recent developments in distributed machine learning.
This paper studies SFL without client-side model synchronization.
It provides only 1%-2% better accuracy than Multi-head Split Learning on the MNIST test set.
arXiv Detail & Related papers (2021-09-19T22:57:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.