Federated Learning under Periodic Client Participation and Heterogeneous Data: A New Communication-Efficient Algorithm and Analysis
- URL: http://arxiv.org/abs/2410.23131v2
- Date: Thu, 31 Oct 2024 20:52:35 GMT
- Title: Federated Learning under Periodic Client Participation and Heterogeneous Data: A New Communication-Efficient Algorithm and Analysis
- Authors: Michael Crawshaw, Mingrui Liu
- Abstract summary: In federated learning, it is common to assume that clients are always available to participate in training, which may not be feasible with user devices in practice.
Recent works analyze federated learning under more realistic participation patterns, such as cyclic client availability or arbitrary participation.
- Score: 14.98493572536424
- Abstract: In federated learning, it is common to assume that clients are always available to participate in training, which may not be feasible with user devices in practice. Recent works analyze federated learning under more realistic participation patterns, such as cyclic client availability or arbitrary participation. However, all such works either require strong assumptions (e.g., all clients participate almost surely within a bounded window), do not achieve linear speedup and reduced communication rounds, or are not applicable in the general non-convex setting. In this work, we focus on nonconvex optimization and consider participation patterns in which the chance of participation over a fixed window of rounds is equal among all clients, which includes cyclic client availability as a special case. Under this setting, we propose a new algorithm, named Amplified SCAFFOLD, and prove that it achieves linear speedup, reduced communication, and resilience to data heterogeneity simultaneously. In particular, for cyclic participation, our algorithm is proved to enjoy $\mathcal{O}(\epsilon^{-2})$ communication rounds to find an $\epsilon$-stationary point in the non-convex stochastic setting. In contrast, the prior work under the same setting requires $\mathcal{O}(\kappa^2 \epsilon^{-4})$ communication rounds, where $\kappa$ denotes the data heterogeneity. Therefore, our algorithm significantly reduces communication rounds due to better dependency in terms of $\epsilon$ and $\kappa$. Our analysis relies on a fine-grained treatment of the nested dependence between client participation and errors in the control variates, which results in tighter guarantees than previous work. We also provide experimental results with (1) synthetic data and (2) real-world data with a large number of clients $(N = 250)$, demonstrating the effectiveness of our algorithm under periodic client participation.
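To make the ingredients above concrete, here is a minimal synthetic sketch combining SCAFFOLD-style control variates with a cyclic participation pattern in which each client group is available once per window of $G$ rounds. It is an illustration only, not the paper's Amplified SCAFFOLD; the quadratic objectives and all constants are assumptions.

```python
# Minimal sketch (not the authors' code): SCAFFOLD-style control variates
# under cyclic client participation. Local objectives are synthetic
# quadratics f_i(x) = 0.5 * ||x - b_i||^2, so grad f_i(x) = x - b_i.
import numpy as np

rng = np.random.default_rng(0)
N, G, d = 12, 3, 5              # clients, cyclic groups, dimension
K, lr, rounds = 10, 0.1, 60     # local steps, local step size, rounds

b = rng.normal(size=(N, d))                # heterogeneous client optima
groups = np.array_split(np.arange(N), G)   # group g participates when r % G == g

x = np.zeros(d)        # global model
c = np.zeros(d)        # server control variate
ci = np.zeros((N, d))  # client control variates

for r in range(rounds):
    part = groups[r % G]                   # periodic participation pattern
    dx, dc = np.zeros(d), np.zeros(d)
    for i in part:
        y = x.copy()
        for _ in range(K):                 # drift-corrected local steps
            y -= lr * ((y - b[i]) - ci[i] + c)
        ci_new = ci[i] - c + (x - y) / (K * lr)   # control-variate refresh
        dx += y - x
        dc += ci_new - ci[i]
        ci[i] = ci_new
    x += dx / len(part)                    # server model update
    c += dc / N                            # server control-variate update

print("distance to average optimum:", np.linalg.norm(x - b.mean(axis=0)))
```

With heterogeneous optima $b_i$, uncorrected local updates under this pattern would drift toward whichever group participated last; the control variates are the mechanism that removes that periodic bias.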
Related papers
- Debiasing Federated Learning with Correlated Client Participation [25.521881752822164]
This paper introduces a theoretical framework that models client participation in FL as a Markov chain.
Every client must wait a minimum number of $R$ rounds (minimum separation) before re-participating.
We develop an effective debiasing algorithm for FedAvg that provably converges to the unbiased optimal solution.
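A quick hypothetical simulation of the minimum-separation constraint described above (the sampling rule and all constants are my assumptions, not the paper's Markov-chain model):

```python
# Hypothetical simulation: after a client participates, it must wait at
# least R rounds before it can be sampled again (minimum separation).
import random

random.seed(0)
N, R, S, rounds = 20, 3, 5, 1000   # clients, min separation, sample size, rounds
last = {i: -R - 1 for i in range(N)}
counts = [0] * N

for t in range(rounds):
    eligible = [i for i in range(N) if t - last[i] > R]
    chosen = random.sample(eligible, min(S, len(eligible)))
    for i in chosen:
        last[i] = t
        counts[i] += 1

print("participation counts per client:", counts)
```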
arXiv Detail & Related papers (2024-10-02T03:30:53Z) - On the Necessity of Collaboration for Online Model Selection with Decentralized Data [53.244188985271606]
We consider online model selection with decentralized data over $M$ clients, and study the necessity of collaboration among clients.
Our results show (i) collaboration is unnecessary in the absence of computational constraints on clients; (ii) collaboration is necessary if the computational cost on each client is limited to $o(K)$, where $K$ is the number of candidate hypothesis spaces.
arXiv Detail & Related papers (2024-04-15T06:32:28Z) - Learn What You Need in Personalized Federated Learning [53.83081622573734]
$\textit{Learn2pFed}$ is a novel algorithm-unrolling-based personalized federated learning framework.
We show that $\textit{Learn2pFed}$ significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z) - Achieving Linear Speedup in Non-IID Federated Bilevel Learning [16.56643290676128]
We propose a new federated bilevel algorithm named FedMBO.
We show that FedMBO achieves a convergence rate of $\mathcal{O}\big(\frac{1}{\sqrt{nK}}+\frac{1}{K}+\frac{\sqrt{n}}{K^{3/2}}\big)$ on non-i.i.d. datasets.
This is the first theoretical linear speedup result for non-i.i.d. federated bilevel optimization.
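A one-line reading of why this rate implies linear speedup (standard reasoning, not taken from the abstract):

```latex
% For small enough \epsilon, the 1/\sqrt{nK} term dominates the bound,
% so reaching an \epsilon-accurate solution requires
\[
  \frac{1}{\sqrt{nK}} \le \epsilon
  \quad\Longrightarrow\quad
  K = \mathcal{O}\!\left(\frac{1}{n\,\epsilon^{2}}\right),
\]
% i.e. the required iteration count shrinks linearly in the number of
% clients n, which is the "linear speedup" claimed above.
```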
arXiv Detail & Related papers (2023-02-10T18:28:00Z) - Federated Learning with Regularized Client Participation [1.433758865948252]
Federated Learning (FL) is a distributed machine learning approach where multiple clients work together to solve a machine learning task.
One of the key challenges in FL is partial participation, which occurs when only a subset of the many clients involved can take part in each training round.
We propose a new technique and design a novel regularized client participation scheme.
arXiv Detail & Related papers (2023-02-07T18:26:07Z) - Anchor Sampling for Federated Learning with Partial Client Participation [17.8094483221845]
We propose a novel federated learning algorithm, referred to as FedAMD, for partial client participation.
The core idea is anchor sampling, which separates partial participants into anchor and miner groups.
By integrating the results of two groups, FedAMD is able to accelerate the training process and improve the model performance.
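A bare-bones sketch of the two-group structure described above; the specific roles assigned here (anchors supply gradients at the current global model, miners run local steps) and the combination rule are assumptions for illustration, not FedAMD's actual update rules.

```python
# Hypothetical sketch of the anchor/miner split (parameters illustrative).
import numpy as np

rng = np.random.default_rng(3)
N, d, S, K, lr = 50, 5, 10, 5, 0.1
b = rng.normal(size=(N, d))       # client optima, f_i(x) = 0.5*||x - b_i||^2
x = np.zeros(d)

for r in range(100):
    part = rng.choice(N, size=S, replace=False)    # partial participation
    anchors, miners = part[: S // 2], part[S // 2:]
    g_anchor = np.mean([x - b[i] for i in anchors], axis=0)  # full gradients
    deltas = []
    for i in miners:                               # miners: local SGD steps
        y = x.copy()
        for _ in range(K):
            y -= lr * (y - b[i] + 0.1 * rng.normal(size=d))
        deltas.append(y - x)
    # integrate the two groups: averaged miner delta plus an anchor gradient step
    x += 0.5 * np.mean(deltas, axis=0) - 0.5 * lr * g_anchor

print("gap to average optimum:", np.linalg.norm(x - b.mean(axis=0)))
```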
arXiv Detail & Related papers (2022-06-13T03:08:39Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
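A toy instance of this idea: a shared linear representation trained across clients, with a user-specific least-squares head per client. The linear model, the dimensions, and the alternating scheme are all assumptions for illustration, not the paper's exact procedure.

```python
# Illustrative sketch: shared linear representation B with per-client
# heads w_i, trained by alternating exact local head fits and averaged
# gradient steps on the representation.
import numpy as np

rng = np.random.default_rng(1)
N, d, k, m = 8, 10, 3, 50        # clients, input dim, rep dim, samples/client
B_true = rng.normal(size=(d, k))
X = rng.normal(size=(N, m, d))
y = np.stack([X[i] @ B_true @ rng.normal(size=k) for i in range(N)])

B = rng.normal(size=(d, k)) * 0.1      # shared representation
W = np.zeros((N, k))                   # user-specific heads
lr = 0.01

for it in range(200):
    grad_B = np.zeros_like(B)
    for i in range(N):
        Z = X[i] @ B                                    # client features
        W[i] = np.linalg.lstsq(Z, y[i], rcond=None)[0]  # exact local head fit
        resid = Z @ W[i] - y[i]
        grad_B += X[i].T @ np.outer(resid, W[i]) / m
    B -= lr * grad_B / N                                # averaged rep. step

print("final avg loss:", np.mean([(X[i] @ B @ W[i] - y[i]) ** 2 for i in range(N)]))
```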
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - Faster Rates for Compressed Federated Learning with Client-Variance Reduction [23.169998435268504]
We show that COFIG and FRECON converge within $O(\frac{(1+\omega)\sqrt{N}}{S\epsilon^2})$ communication rounds.
In the convex setting, COFIG converges within $O(\frac{(1+\omega)\sqrt{N}}{S\epsilon^2})$ communication rounds, which, to the best of our knowledge, is the first such result.
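A gloss of the symbols in this bound, since the summary does not define them (meanings assumed from the usual compressed-FL setup):

```latex
% N = total clients, S = clients sampled per round, \epsilon = target
% accuracy, \omega = variance parameter of an unbiased compressor
% \mathcal{C} with \mathbb{E}\|\mathcal{C}(v) - v\|^2 \le \omega\|v\|^2.
\[
  \underbrace{O\!\left(\frac{(1+\omega)\sqrt{N}}{S\,\epsilon^{2}}\right)}_{\text{communication rounds}}
\]
% Larger \omega (harsher compression) makes each round cheaper but
% increases the number of rounds proportionally to (1+\omega).
```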
arXiv Detail & Related papers (2021-12-24T16:28:18Z) - Linear Speedup in Personalized Collaborative Learning [69.45124829480106]
Personalization in federated learning can improve the accuracy of a model for a user by trading off the model's bias against its variance.
We formalize the personalized collaborative learning problem as optimization of a user's objective.
We explore conditions under which we can optimally trade-off their bias for a reduction in variance.
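The trade-off referenced here has a standard schematic form (notation mine, not the paper's):

```latex
\[
  \underbrace{\mathbb{E}\big\|\hat{\theta}_i - \theta_i^\star\big\|^2}_{\text{error of user } i}
  \;\lesssim\;
  \underbrace{\frac{\sigma^2}{n_i + n_{\text{borrowed}}}}_{\text{variance: shrinks with collaboration}}
  +
  \underbrace{\max_{j}\big\|\theta_j^\star - \theta_i^\star\big\|^2}_{\text{bias: grows with user dissimilarity}}
\]
% Collaborating is worthwhile exactly when the variance reduction from the
% borrowed samples outweighs the bias introduced by dissimilar users.
```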
arXiv Detail & Related papers (2021-11-10T22:12:52Z) - Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z) - Faster Non-Convex Federated Learning via Global and Local Momentum [57.52663209739171]
$\texttt{FedGLOMO}$ is the first (first-order) FL algorithm of its kind.
Our algorithm is provably optimal even with compressed communication between the clients and the server.
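Going only by the title's "global and local momentum," a hypothetical sketch of that combination (this is my reading, not the paper's actual update rules):

```python
# Illustrative only: clients apply momentum to local stochastic gradients;
# the server applies momentum to the averaged client deltas.
# Quadratic client objectives f_i(x) = 0.5*||x - b_i||^2.
import numpy as np

rng = np.random.default_rng(2)
N, d, K, rounds = 10, 5, 5, 100
lr, beta_l, beta_g = 0.05, 0.9, 0.9
b = rng.normal(size=(N, d))       # heterogeneous client optima

x = np.zeros(d)                   # global model
m_global = np.zeros(d)            # global momentum buffer

for r in range(rounds):
    deltas = []
    for i in range(N):
        y, m_local = x.copy(), np.zeros(d)
        for _ in range(K):
            g = y - b[i] + 0.1 * rng.normal(size=d)        # stochastic gradient
            m_local = beta_l * m_local + (1 - beta_l) * g  # local momentum
            y -= lr * m_local
        deltas.append(y - x)
    m_global = beta_g * m_global + (1 - beta_g) * np.mean(deltas, axis=0)
    x += m_global                                          # global momentum step

print("gap to average optimum:", np.linalg.norm(x - b.mean(axis=0)))
```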
arXiv Detail & Related papers (2020-12-07T21:05:31Z)