On the Convergence of Federated Averaging with Cyclic Client
Participation
- URL: http://arxiv.org/abs/2302.03109v1
- Date: Mon, 6 Feb 2023 20:18:19 GMT
- Title: On the Convergence of Federated Averaging with Cyclic Client
Participation
- Authors: Yae Jee Cho, Pranay Sharma, Gauri Joshi, Zheng Xu, Satyen Kale, Tong
Zhang
- Abstract summary: Federated Averaging (FedAvg) and its variants are the most popular optimization algorithms in federated learning (FL).
Previous convergence analyses of FedAvg either assume full client participation or partial client participation where the clients can be uniformly sampled.
In practical cross-device FL systems, only a subset of clients that satisfy local criteria such as battery status, network connectivity, and maximum participation frequency (to ensure privacy) are available for training at a given time.
- Score: 27.870720693512045
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Averaging (FedAvg) and its variants are the most popular
optimization algorithms in federated learning (FL). Previous convergence
analyses of FedAvg either assume full client participation or partial client
participation where the clients can be uniformly sampled. However, in practical
cross-device FL systems, only a subset of clients that satisfy local criteria
such as battery status, network connectivity, and maximum participation
frequency requirements (to ensure privacy) are available for training at a
given time. As a result, client availability follows a natural cyclic pattern.
We provide (to our knowledge) the first theoretical framework to analyze the
convergence of FedAvg with cyclic client participation with several different
client optimizers such as GD, SGD, and shuffled SGD. Our analysis discovers
that cyclic client participation can achieve a faster asymptotic convergence
rate than vanilla FedAvg with uniform client participation under suitable
conditions, providing valuable insights into the design of client sampling
protocols.
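The following is a minimal sketch (not the paper's implementation) of FedAvg under cyclic client participation: the clients are partitioned into a fixed number of availability groups that become available in a repeating order, and in each round the server averages the locally updated models of a few clients sampled from the currently available group. The group count, quadratic local losses, learning rates, and all other constants are illustrative assumptions.

```python
import numpy as np

# Toy setup: K clients, each with a quadratic local loss f_k(w) = 0.5 * ||w - c_k||^2,
# so the local gradient is simply (w - c_k). All constants below are illustrative.
rng = np.random.default_rng(0)
K, DIM = 12, 5
centers = rng.normal(size=(K, DIM))          # per-client optima c_k
NUM_GROUPS = 4                                # number of cyclic availability groups
groups = np.array_split(np.arange(K), NUM_GROUPS)

def local_sgd(w, c_k, local_steps=5, lr=0.1):
    """Run a few local (S)GD steps on client k's toy quadratic loss."""
    w = w.copy()
    for _ in range(local_steps):
        w -= lr * (w - c_k)                   # exact gradient of the toy loss
    return w

w_global = np.zeros(DIM)
ROUNDS, CLIENTS_PER_ROUND = 40, 2
for t in range(ROUNDS):
    # Cyclic participation: only the current group is available this round.
    available = groups[t % NUM_GROUPS]
    sampled = rng.choice(available, size=min(CLIENTS_PER_ROUND, len(available)),
                         replace=False)
    # FedAvg step: average the locally updated models of the sampled clients.
    w_global = np.mean([local_sgd(w_global, centers[k]) for k in sampled], axis=0)

print("final model:", np.round(w_global, 3))
print("mean of client optima:", np.round(centers.mean(axis=0), 3))
```

Swapping `local_sgd` for full-batch GD or a shuffled-SGD loop over local minibatches would correspond to the other client optimizers the paper analyzes.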
Related papers
- Debiasing Federated Learning with Correlated Client Participation [25.521881752822164]
This paper introduces a theoretical framework that models client participation in FL as a Markov chain.
Every client must wait at least $R$ rounds (a minimum separation) before participating again.
We develop an effective debiasing algorithm for FedAvg that provably converges to the unbiased optimal solution; a sketch of the minimum-separation availability constraint follows below.
arXiv Detail & Related papers (2024-10-02T03:30:53Z)
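A minimal sketch, under assumed constants, of the minimum-separation constraint described above: a client that participated in round t is not eligible again until round t + R. This only simulates the availability pattern, not the paper's debiasing algorithm.

```python
import random

# Illustrative constants: N clients, each must wait at least R rounds
# between consecutive participations (minimum separation).
N, R, ROUNDS, PER_ROUND = 10, 3, 12, 3
last_round = {k: -R for k in range(N)}   # last participation round per client
random.seed(0)

for t in range(ROUNDS):
    # A client is eligible only if its last participation is at least R rounds old.
    eligible = [k for k in range(N) if t - last_round[k] >= R]
    chosen = random.sample(eligible, min(PER_ROUND, len(eligible)))
    for k in chosen:
        last_round[k] = t
    print(f"round {t}: eligible={len(eligible):2d} chosen={sorted(chosen)}")
```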
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected; a greedy sketch of the single-round selection step appears below.
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
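A minimal greedy sketch of the single-round selection idea summarized above: pick clients so that the average gradient over the selected subset stays close to the average gradient over all clients. The greedy rule, toy gradients, and budget are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np

rng = np.random.default_rng(1)
N, DIM, BUDGET = 20, 8, 5
client_grads = rng.normal(size=(N, DIM))     # toy per-client gradients
full_mean = client_grads.mean(axis=0)        # gradient under full participation

selected = []
for _ in range(BUDGET):
    best_k, best_err = None, np.inf
    for k in range(N):
        if k in selected:
            continue
        cand = selected + [k]
        # Estimation error: distance between the subset mean and the full mean.
        err = np.linalg.norm(client_grads[cand].mean(axis=0) - full_mean)
        if err < best_err:
            best_k, best_err = k, err
    selected.append(best_k)

print("selected clients:", selected, "final error:", round(best_err, 4))
```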
- FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way; an illustrative stand-in for such a measure is sketched below.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
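A minimal, plaintext sketch of one possible class-imbalance measure for a candidate client group: the distance between the group's aggregated label distribution and the uniform distribution. The exact Fed-CBS measure and its homomorphic-encryption evaluation are not reproduced here; this function and the label counts are illustrative assumptions only.

```python
import numpy as np

def class_imbalance(label_counts_per_client):
    """Illustrative imbalance measure: L2 distance between the aggregated
    label distribution of the selected clients and the uniform distribution.
    (Fed-CBS defines its own measure, evaluated under homomorphic encryption.)"""
    total = np.sum(label_counts_per_client, axis=0).astype(float)
    dist = total / total.sum()
    uniform = np.full_like(dist, 1.0 / len(dist))
    return float(np.linalg.norm(dist - uniform))

# Two candidate client groups over a 4-class problem (counts are made up).
balanced_group   = np.array([[30, 25, 28, 27], [20, 22, 25, 23]])
imbalanced_group = np.array([[90,  2,  1,  0], [80,  5,  3,  2]])
print("balanced  :", class_imbalance(balanced_group))
print("imbalanced:", class_imbalance(imbalanced_group))
```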
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- A Unified Analysis of Federated Learning with Arbitrary Client Participation [33.86606068136201]
Federated learning (FL) faces challenges of intermittent client availability and computation/communication efficiency.
It is important to understand how partial client participation affects convergence.
We provide a unified convergence analysis for FL with arbitrary client participation.
arXiv Detail & Related papers (2022-05-26T21:56:31Z)
- On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g. mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that jointly leverages client groups and individual clients in a unified optimization framework.
arXiv Detail & Related papers (2022-02-13T02:39:19Z)
- Distributed Non-Convex Optimization with Sublinear Speedup under Intermittent Client Availability [46.85205907718874]
Federated learning is a machine learning framework in which a large number of clients collaboratively train a model without sharing their training data.
In this work, we consider a practical issue that arises when deploying federated learning in mobile environments: intermittent client availability.
We propose a simple distributed non-convex optimization algorithm, called Federated Latest Averaging (FedLaAvg for short).
Our theoretical analysis shows that FedLaAvg attains a convergence rate of $O(E^{1/2}/(N^{1/4}T^{1/2}))$, achieving a sublinear speedup with respect to the total number of clients; a sketch of the latest-averaging update appears below.
arXiv Detail & Related papers (2020-02-18T06:32:18Z)
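A minimal sketch, on a toy quadratic problem, of the latest-averaging idea named above: the server caches the most recent gradient reported by every client and updates with the average over all clients, including those unavailable in the current round. The availability probability, losses, and step size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
N, DIM, ROUNDS, LR, P_AVAILABLE = 8, 4, 60, 0.2, 0.4
centers = rng.normal(size=(N, DIM))          # client k's loss: 0.5 * ||w - c_k||^2
latest_grad = np.zeros((N, DIM))             # server cache of each client's latest gradient
w = np.zeros(DIM)

for t in range(ROUNDS):
    # Intermittent availability: each client shows up with probability P_AVAILABLE.
    available = rng.random(N) < P_AVAILABLE
    for k in np.flatnonzero(available):
        latest_grad[k] = w - centers[k]       # fresh gradient from an available client
    # Update with the average of the *latest* gradients of all N clients.
    w -= LR * latest_grad.mean(axis=0)

print("final w        :", np.round(w, 3))
print("optimum (mean) :", np.round(centers.mean(axis=0), 3))
```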