$r$Age-$k$: Communication-Efficient Federated Learning Using Age Factor
- URL: http://arxiv.org/abs/2410.22192v1
- Date: Tue, 29 Oct 2024 16:30:34 GMT
- Title: $r$Age-$k$: Communication-Efficient Federated Learning Using Age Factor
- Authors: Matin Mortaheb, Priyanka Kaswan, Sennur Ulukus
- Abstract summary: Federated learning (FL) is a collaborative approach where multiple clients, coordinated by a parameter server (PS), train a unified machine-learning model.
This paper introduces a new communication-efficient algorithm that uses the age of information metric to tackle both limitations of FL.
- Score: 31.285983939625098
- Abstract: Federated learning (FL) is a collaborative approach where multiple clients, coordinated by a parameter server (PS), train a unified machine-learning model. The approach, however, suffers from two key challenges: data heterogeneity and communication overhead. Data heterogeneity refers to inconsistencies in model training arising from heterogeneous data at different clients. Communication overhead arises from the large volumes of parameter updates exchanged between the PS and clients. Existing solutions typically address these challenges separately. This paper introduces a new communication-efficient algorithm that uses the age of information metric to simultaneously tackle both limitations of FL. We introduce age vectors at the PS, which keep track of how often the different model parameters are updated from the clients. The PS uses these to selectively request updates for specific gradient indices from each client. Further, the PS employs the age vectors to identify clients with statistically similar data and group them into clusters. The PS combines the age vectors of the clustered clients to efficiently coordinate gradient index updates among clients within a cluster. We evaluate our approach using the MNIST and CIFAR10 datasets in highly non-i.i.d. settings. The experimental results show that our proposed method expedites training and outperforms other communication-efficient strategies.
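The age-vector bookkeeping described in the abstract can be sketched compactly. The following Python sketch is illustrative only: the class name `AgeVectorPS`, the top-$k$ staleness rule, and the cosine-similarity clustering threshold are assumptions, since the abstract does not specify the paper's exact update rule or clustering algorithm.

```python
import numpy as np


class AgeVectorPS:
    """Sketch of a parameter server that keeps one age vector per client.

    age[c, i] counts how many rounds have passed since client c last
    reported an update for gradient index i (an illustrative rule, not
    necessarily the paper's exact one).
    """

    def __init__(self, num_clients: int, num_params: int, k: int):
        self.age = np.zeros((num_clients, num_params), dtype=np.int64)
        self.k = k  # gradient indices requested from each client per round

    def select_indices(self, client: int) -> np.ndarray:
        # Ask the client for its k stalest (largest-age) gradient indices.
        return np.argpartition(-self.age[client], self.k)[: self.k]

    def apply_update(self, client: int, indices: np.ndarray) -> None:
        # All indices age by one round; the freshly received ones reset to 0.
        self.age[client] += 1
        self.age[client, indices] = 0

    def cluster_clients(self, threshold: float = 0.9) -> list[list[int]]:
        # Greedily group clients whose age vectors are similar under cosine
        # similarity: similar aging patterns suggest statistically similar
        # data, so the PS can coordinate index requests within each cluster.
        n = self.age.shape[0]
        norms = np.linalg.norm(self.age, axis=1) + 1e-12
        sim = (self.age @ self.age.T) / np.outer(norms, norms)
        clusters: list[list[int]] = []
        assigned: set[int] = set()
        for c in range(n):
            if c in assigned:
                continue
            members = [c] + [d for d in range(c + 1, n)
                             if d not in assigned and sim[c, d] >= threshold]
            assigned.update(members)
            clusters.append(members)
        return clusters


# Example round: the PS requests, receives, and books updates for client 0.
ps = AgeVectorPS(num_clients=10, num_params=1_000, k=100)
idx = ps.select_indices(client=0)       # indices the PS asks client 0 for
ps.apply_update(client=0, indices=idx)  # client 0's response resets their age
```

Here `select_indices` realizes the idea that the PS requests only the $k$ stalest coordinates from each client, while `cluster_clients` captures the intuition that clients whose parameters age in similar patterns likely hold similar data.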
Related papers
- A Bayesian Framework for Clustered Federated Learning [14.426129993432193]
One of the main challenges of federated learning (FL) is handling non-independent and identically distributed (non-IID) client data.
We present a unified Bayesian framework for clustered FL that associates clients with clusters.
This work provides insights on client-cluster associations and enables client knowledge sharing in new ways.
arXiv Detail & Related papers (2024-10-20T19:11:24Z)
- Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning [51.560590617691005]
We investigate whether it is possible to squeeze more "juice" out of each cohort than what is possible in a single communication round.
Our approach leads to up to 74% reduction in the total communication cost needed to train an FL model in the cross-device setting.
arXiv Detail & Related papers (2024-06-03T08:48:49Z)
- Heterogeneity-Guided Client Sampling: Towards Fast and Efficient Non-IID Federated Learning [14.866327821524854]
HiCS-FL is a novel client selection method in which the server estimates statistical heterogeneity of a client's data using the client's update of the network's output layer.
In non-IID settings, HiCS-FL achieves faster convergence than state-of-the-art FL client selection schemes.
arXiv Detail & Related papers (2023-09-30T00:29:30Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- Prototype Helps Federated Learning: Towards Faster Convergence [38.517903009319994]
Federated learning (FL) is a distributed machine learning technique in which multiple clients cooperate to train a shared model without exchanging their raw data.
In this paper, a prototype-based federated learning framework is proposed, which can achieve better inference performance with only a few changes to the last global iteration of the typical federated learning process.
arXiv Detail & Related papers (2023-03-22T04:06:29Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Federated learning with incremental clustering for heterogeneous data [0.0]
In previous approaches, the server requires all clients to send their parameters simultaneously in order to cluster them.
We propose FLIC (Federated Learning with Incremental Clustering) in which the server exploits the updates sent by clients during federated training instead of asking them to send their parameters simultaneously.
We empirically demonstrate for various non-IID cases that our approach successfully splits clients into groups following the same data distributions.
arXiv Detail & Related papers (2022-06-17T13:13:03Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z)
- Federated Learning with Taskonomy for Non-IID Data [0.0]
We introduce federated learning with taskonomy.
In a one-off process, the server provides the clients with a pretrained (and fine-tunable) encoder to compress their data into a latent representation and transmit the signature of their data back to the server.
The server then learns the task-relatedness among clients via manifold learning, and performs a generalization of federated averaging.
arXiv Detail & Related papers (2021-03-29T20:47:45Z)
- Timely Communication in Federated Learning [65.1253801733098]
We consider a global learning framework in which a parameter server (PS) trains a global model by using $n$ clients without actually storing the client data centrally at a cloud server.
Under the proposed scheme, at each iteration, the PS waits for $m$ available clients and sends them the current model.
We find the average age of information experienced by each client and numerically characterize the age-optimal $m$ and $k$ values for a given $n$; a toy simulation of this wait-for-$m$ scheme is sketched after this list.
arXiv Detail & Related papers (2020-12-31T18:52:08Z)
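The wait-for-$m$ scheme above can be illustrated with a small simulation. This is a minimal sketch under stated assumptions: i.i.d. exponential client response times and a discrete-round age proxy; the paper analyzes age analytically in continuous time and also optimizes a $k$ parameter not modeled here.

```python
import numpy as np

rng = np.random.default_rng(seed=0)


def average_age(n: int, m: int, rounds: int = 5_000) -> float:
    """Toy simulation of a PS that, each iteration, waits for the m fastest
    of n clients and sends them the current model. A client's age is the
    time since it last participated; returns the time-averaged mean age."""
    age = np.zeros(n)
    weighted_age, elapsed = 0.0, 0.0
    for _ in range(rounds):
        times = rng.exponential(1.0, size=n)   # per-client response times
        duration = np.sort(times)[m - 1]       # PS waits for the m-th fastest
        age += duration
        weighted_age += age.mean() * duration  # end-of-round age, as a proxy
        elapsed += duration
        fastest = np.argpartition(times, m - 1)[:m]
        age[fastest] = 0.0                     # participants become fresh
    return weighted_age / elapsed


# Sweep m for fixed n: small m means short rounds but few fresh clients,
# large m means fresher clients but longer waits, so an optimum can emerge.
for m in (1, 2, 5, 10, 15):
    print(f"m={m:2d}  avg age = {average_age(n=20, m=m):.2f}")
```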