Client Selection in Federated Learning: Convergence Analysis and
Power-of-Choice Selection Strategies
- URL: http://arxiv.org/abs/2010.01243v1
- Date: Sat, 3 Oct 2020 01:04:17 GMT
- Title: Client Selection in Federated Learning: Convergence Analysis and
Power-of-Choice Selection Strategies
- Authors: Yae Jee Cho and Jianyu Wang and Gauri Joshi
- Abstract summary: Federated learning enables a large number of resource-limited client nodes to cooperatively train a model without data sharing.
We show that biasing client selection towards clients with higher local loss achieves faster error convergence.
We propose Power-of-Choice, a communication- and computation-efficient client selection framework.
- Score: 29.127689561987964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning is a distributed optimization paradigm that enables a
large number of resource-limited client nodes to cooperatively train a model
without data sharing. Several works have analyzed the convergence of federated
learning by accounting for data heterogeneity, communication and computation
limitations, and partial client participation. However, these analyses assume
unbiased client participation, where clients are selected at random or in
proportion to their data sizes. In this paper, we present the first
convergence analysis of
federated optimization for biased client selection strategies, and quantify how
the selection bias affects convergence speed. We reveal that biasing client
selection towards clients with higher local loss achieves faster error
convergence. Using this insight, we propose Power-of-Choice, a communication-
and computation-efficient client selection framework that can flexibly span the
trade-off between convergence speed and solution bias. Our experiments
demonstrate that Power-of-Choice strategies converge up to $3\times$ faster
and give $10\%$ higher test accuracy than the baseline random selection.
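To make the selection rule concrete, below is a minimal Python sketch of Power-of-Choice as described in the abstract: sample a candidate set of $d$ clients in proportion to their data fractions, then keep the $m$ candidates with the highest local loss. The function name and the `client_loss_fn(k)` callback are illustrative assumptions; the paper's computation- and communication-efficient variants differ in how these losses are estimated.

```python
import numpy as np

def power_of_choice_select(client_loss_fn, data_fractions, d, m, rng=None):
    """One round of Power-of-Choice client selection (sketch).

    data_fractions: per-client fractions p_k of the total data (sums to 1).
    client_loss_fn(k): assumed callback returning client k's local loss
    evaluated at the current global model.
    """
    rng = rng or np.random.default_rng()
    # Step 1: draw a candidate set of d clients, without replacement,
    # with probabilities proportional to the data fractions.
    candidates = rng.choice(len(data_fractions), size=d, replace=False,
                            p=data_fractions)
    # Step 2: each candidate reports its local loss at the current model.
    losses = {int(k): client_loss_fn(int(k)) for k in candidates}
    # Step 3: keep the m candidates with the highest local loss.
    return sorted(losses, key=losses.get, reverse=True)[:m]
```

Setting $d = m$ recovers unbiased proportional sampling, while larger $d$ biases selection more strongly toward high-loss clients; $d$ is the knob that spans the speed-versus-bias trade-off mentioned above.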
Related papers
- Submodular Maximization Approaches for Equitable Client Selection in Federated Learning [4.167345675621377]
In a conventional Federated Learning framework, client selection typically involves randomly sampling a subset of clients in each iteration.
This paper introduces two novel methods, namely SUBTRUNC and UNIONFL, designed to address the limitations of random client selection.
arXiv Detail & Related papers (2024-08-24T22:40:31Z)
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
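The single-round rule in the entry above, choosing clients to minimize the gradient-space estimation error against the full client set, admits a simple greedy sketch. Everything here (the function name and the assumption that per-client gradient estimates are available as rows of an array) is illustrative rather than the paper's implementation.

```python
import numpy as np

def greedy_gradient_match(client_grads, m):
    """Greedily grow a subset of m clients whose average gradient best
    approximates the full-participation average (sketch).

    client_grads: (n_clients, dim) array of per-client gradient
    estimates -- a hypothetical input.
    """
    full_avg = client_grads.mean(axis=0)
    chosen = []
    for _ in range(m):
        remaining = [k for k in range(len(client_grads)) if k not in chosen]
        # Add the client whose inclusion minimizes the estimation error.
        errors = {
            k: np.linalg.norm(client_grads[chosen + [k]].mean(axis=0) - full_avg)
            for k in remaining
        }
        chosen.append(min(errors, key=errors.get))
    return chosen
```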
- Greedy Shapley Client Selection for Communication-Efficient Federated Learning [32.38170282930876]
Standard client selection algorithms for Federated Learning (FL) are often unbiased and involve uniform random sampling of clients.
We develop a biased client selection strategy, GreedyFed, that identifies and greedily selects the most contributing clients in each communication round.
Compared to various client selection strategies on several real-world datasets, GreedyFed demonstrates fast and stable convergence with high accuracy under timing constraints.
arXiv Detail & Related papers (2023-12-14T16:44:38Z)
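A schematic of the greedy, contribution-ranked selection that GreedyFed describes; the Shapley-value estimation that produces the contribution signal is the paper's core machinery and is only stubbed here as an `update` hook, and all names are hypothetical.

```python
import heapq

class GreedyContribSelector:
    """Contribution-ranked greedy client selection (schematic).

    Keeps an exponentially averaged contribution score per client and
    greedily selects the top-m each round; a Shapley-value estimator
    would feed the `update` hook.
    """

    def __init__(self, num_clients, decay=0.9):
        self.scores = [0.0] * num_clients
        self.decay = decay

    def select(self, m):
        # Greedily take the m clients with the highest current scores.
        return heapq.nlargest(m, range(len(self.scores)),
                              key=self.scores.__getitem__)

    def update(self, client_id, contribution):
        # `contribution` would come from a Shapley-value estimate of the
        # client's marginal effect on the global model this round.
        old = self.scores[client_id]
        self.scores[client_id] = self.decay * old + (1 - self.decay) * contribution
```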
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling converges and achieves linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
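For context on the FedLALR entry above, a single local AMSGrad step with client-held optimizer state looks roughly as follows. This is the generic AMSGrad update, not FedLALR's actual learning-rate schedule, which the paper designs so that training provably converges with linear speedup.

```python
import numpy as np

def amsgrad_local_step(w, g, state, lr=1e-3, b1=0.9, b2=0.99, eps=1e-8):
    """One local AMSGrad step (sketch). `state` holds this client's
    moments, initialized as zero arrays shaped like w:
    state = {"m": 0 * w, "v": 0 * w, "vhat": 0 * w}.
    """
    state["m"] = b1 * state["m"] + (1 - b1) * g
    state["v"] = b2 * state["v"] + (1 - b2) * g * g
    # AMSGrad keeps the elementwise max of second moments seen so far,
    # so each coordinate's effective step size is non-increasing.
    state["vhat"] = np.maximum(state["vhat"], state["v"])
    return w - lr * state["m"] / (np.sqrt(state["vhat"]) + eps)
```

Because the moments live on the client, each client ends up with its own auto-tuned effective step sizes, which is the kind of client-specific adaptivity the entry refers to.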
- FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Which clients participate in the training process significantly impacts the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on this observation, we design an efficient client sampling mechanism, Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
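Below is a plaintext sketch of the kind of grouped class-imbalance measure the Fed-CBS entry describes; in the paper the measure is derived under homomorphic encryption so the server never sees raw per-client label distributions, and the quadratic form used here is an illustrative assumption.

```python
import numpy as np

def class_imbalance(label_counts_per_client, subset):
    """Class-imbalance of the data grouped from a client subset (sketch).

    label_counts_per_client: (n_clients, n_classes) array of label counts,
    a hypothetical plaintext input. Smaller return value = more balanced.
    """
    counts = np.sum([label_counts_per_client[k] for k in subset], axis=0)
    p = counts / counts.sum()             # grouped label distribution
    u = np.full_like(p, 1.0 / len(p))     # uniform reference distribution
    return float(np.square(p - u).sum())  # squared distance from uniform
```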
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
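The structure this entry describes, a global shared representation composed with per-client parameters, can be pictured with a toy linear model; the alternating training procedure and its speedup guarantees are the paper's contribution and are omitted here, and all names are illustrative.

```python
import numpy as np

class SharedRepModel:
    """Toy linear model split into a shared representation matrix B
    (learned from all clients) and one lightweight head per client
    (learned locally), yielding a personalized predictor per client.
    """

    def __init__(self, dim_in, dim_rep, num_clients, seed=0):
        rng = np.random.default_rng(seed)
        self.B = rng.normal(size=(dim_in, dim_rep))           # global, shared
        self.heads = rng.normal(size=(num_clients, dim_rep))  # per-client

    def predict(self, k, X):
        # Personalized prediction: shared features, client k's own head.
        return X @ self.B @ self.heads[k]
```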
- FedGP: Correlation-Based Active Client Selection for Heterogeneous Federated Learning [33.996041254246585]
We propose FedGP -- a federated learning framework built on a correlation-based client selection strategy.
We develop a Gaussian process (GP) training method that uses historical samples efficiently to reduce the communication cost.
Based on the learned correlations, we derive a client selection rule that achieves a larger reduction of the expected global loss in each round.
arXiv Detail & Related papers (2021-03-24T03:25:14Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
- Bandit-based Communication-Efficient Client Selection Strategies for Federated Learning [8.627405016032615]
We present UCB-CS, a bandit-based communication-efficient client selection strategy that achieves faster convergence with lower communication overhead.
We also demonstrate how client selection can be used to improve fairness.
arXiv Detail & Related papers (2020-12-14T23:35:03Z)
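For the UCB-CS entry above, a generic upper-confidence-bound selection rule over clients looks like the sketch below; the paper's actual reward signal and confidence term may differ, and all names are illustrative.

```python
import math

def ucb_select(means, counts, t, m, c=2.0):
    """Select m clients by UCB score (sketch).

    means[k]: client k's average observed reward so far (e.g., loss
    reduction when it participated); counts[k]: times selected; t: round.
    """
    def score(k):
        if counts[k] == 0:
            return math.inf  # explore never-selected clients first
        return means[k] + math.sqrt(c * math.log(t) / counts[k])
    # Highest upper confidence bounds first.
    return sorted(range(len(means)), key=score, reverse=True)[:m]
```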
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.