Bandit-based Communication-Efficient Client Selection Strategies for
Federated Learning
- URL: http://arxiv.org/abs/2012.08009v1
- Date: Mon, 14 Dec 2020 23:35:03 GMT
- Title: Bandit-based Communication-Efficient Client Selection Strategies for
Federated Learning
- Authors: Yae Jee Cho, Samarth Gupta, Gauri Joshi, Osman Yağan
- Abstract summary: We present UCB-CS, a bandit-based communication-efficient client selection strategy that achieves faster convergence with lower communication overhead.
We also demonstrate how client selection can be used to improve fairness.
- Score: 8.627405016032615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to communication constraints and intermittent client availability in
federated learning, only a subset of clients can participate in each training
round. While most prior works assume uniform and unbiased client selection,
recent work on biased client selection has shown that selecting clients with
higher local losses can improve error convergence speed. However, previously
proposed biased selection strategies either require additional communication
cost for evaluating the exact local loss or utilize stale local loss, which can
even make the model diverge. In this paper, we present UCB-CS, a bandit-based
communication-efficient client selection strategy that achieves faster
convergence with lower communication overhead. We also demonstrate how client
selection can be used to improve fairness.
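As a rough illustration of the bandit view (not the authors' exact UCB-CS algorithm), the sketch below treats each client as an arm whose reward is a noisy local-loss estimate and selects the m clients with the highest upper-confidence-bound scores; the exploration constant, client count, and loss model are all illustrative assumptions.

```python
import math
import random

def ucb_scores(loss_estimates, counts, t, c=2.0):
    """UCB score per client: running loss estimate plus an exploration bonus.
    Clients never selected yet get an infinite score so they are tried first."""
    scores = []
    for mu, n in zip(loss_estimates, counts):
        scores.append(float("inf") if n == 0 else mu + c * math.sqrt(math.log(t) / n))
    return scores

def select_clients(loss_estimates, counts, t, m):
    """Pick the m clients with the largest UCB scores (highest estimated loss)."""
    scores = ucb_scores(loss_estimates, counts, t)
    return sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:m]

# Toy simulation: 10 clients with fixed true mean losses, 2 selected per round.
random.seed(0)
true_loss = [random.uniform(0.2, 1.0) for _ in range(10)]
est, cnt = [0.0] * 10, [0] * 10
for t in range(1, 51):
    for i in select_clients(est, cnt, t, m=2):
        observed = true_loss[i] + random.gauss(0, 0.05)  # noisy local-loss feedback
        cnt[i] += 1
        est[i] += (observed - est[i]) / cnt[i]           # incremental mean update
print("selection counts per client:", cnt)
```

High-loss clients end up selected most often, which is the biased-selection effect the abstract describes, while the confidence bonus keeps communication low by avoiding repeated exact loss evaluations.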
Related papers
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
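To make the single-round objective concrete, here is a minimal sketch with synthetic gradients and a placeholder subset size (not the paper's construction): greedily grow the subset so that the average of the selected clients' gradients stays close to the full-participation average.

```python
import numpy as np

def greedy_subset(grads, m):
    """Greedily pick m clients whose mean gradient best approximates the
    full-participation mean gradient (smallest L2 estimation error)."""
    full_mean = grads.mean(axis=0)
    chosen = []
    for _ in range(m):
        best_i, best_err = None, float("inf")
        for i in range(len(grads)):
            if i in chosen:
                continue
            err = np.linalg.norm(grads[chosen + [i]].mean(axis=0) - full_mean)
            if err < best_err:
                best_i, best_err = i, err
        chosen.append(best_i)
    return chosen

rng = np.random.default_rng(0)
grads = rng.normal(size=(20, 5))   # 20 clients, 5-dimensional toy gradients
print("selected clients:", greedy_subset(grads, m=4))
```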
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
- Greedy Shapley Client Selection for Communication-Efficient Federated Learning [32.38170282930876]
Standard client selection algorithms for Federated Learning (FL) are often unbiased and involve uniform random sampling of clients.
We develop a biased client selection strategy, GreedyFed, that identifies and greedily selects the most contributing clients in each communication round.
Compared to various client selection strategies on several real-world datasets, GreedyFed demonstrates fast and stable convergence with high accuracy under timing constraints.
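GreedyFed's exact Shapley estimator is not reproduced here; as a hedged illustration, the sketch below estimates each client's contribution with a plain Monte Carlo Shapley approximation over a toy utility function, then greedily keeps the top contributors. The `utility` function and per-client weights are stand-ins.

```python
import random

def monte_carlo_shapley(clients, utility, samples=200, seed=0):
    """Estimate each client's Shapley value by averaging its marginal
    contribution over random permutations of the client set."""
    rng = random.Random(seed)
    phi = {c: 0.0 for c in clients}
    for _ in range(samples):
        order = clients[:]
        rng.shuffle(order)
        coalition, prev = [], utility(())
        for c in order:
            coalition.append(c)
            cur = utility(tuple(coalition))
            phi[c] += cur - prev
            prev = cur
    return {c: v / samples for c, v in phi.items()}

# Toy utility: diminishing returns on the sum of per-client "quality" weights.
weights = {0: 0.5, 1: 0.1, 2: 0.9, 3: 0.3}
utility = lambda S: sum(weights[c] for c in S) ** 0.5 if S else 0.0

phi = monte_carlo_shapley(list(weights), utility)
print("top-2 contributors:", sorted(phi, key=phi.get, reverse=True)[:2])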
arXiv Detail & Related papers (2023-12-14T16:44:38Z)
- FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
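As a plaintext illustration only (no homomorphic encryption), the sketch below scores a candidate subset by the squared distance between its pooled label distribution and the uniform distribution, a common class-imbalance proxy that is an assumption here rather than necessarily the paper's measure, and keeps the most balanced subset among random draws.

```python
import numpy as np

def imbalance(counts):
    """Squared L2 distance between the pooled class distribution and the
    uniform distribution; 0 means perfectly class-balanced."""
    p = counts / counts.sum()
    return float(((p - 1.0 / len(p)) ** 2).sum())

def sample_balanced_subset(client_counts, m, draws=500, seed=0):
    """Among random size-m client subsets, keep the one whose grouped data
    is least class-imbalanced."""
    rng = np.random.default_rng(seed)
    best, best_score = None, float("inf")
    for _ in range(draws):
        idx = rng.choice(len(client_counts), size=m, replace=False)
        score = imbalance(client_counts[idx].sum(axis=0))
        if score < best_score:
            best, best_score = idx, score
    return best, best_score

rng = np.random.default_rng(1)
client_counts = rng.integers(0, 50, size=(30, 10)).astype(float)  # 30 clients x 10 classes
subset, score = sample_balanced_subset(client_counts, m=5)
print("chosen clients:", subset, "imbalance:", round(score, 5))
```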
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- FedSS: Federated Learning with Smart Selection of clients [1.7265013728931]
Federated learning provides the ability to learn over heterogeneous user data in a distributed manner while preserving user privacy.
Our proposed idea seeks a sweet spot between fast convergence and client heterogeneity through smart client selection and scheduling techniques.
arXiv Detail & Related papers (2022-07-10T23:55:47Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose Dis-PFL, a novel personalized federated learning framework built on a decentralized (peer-to-peer) communication protocol.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities.
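To make "personalized sparse masks" concrete, here is a toy sketch under stated assumptions: each client keeps a binary magnitude-based mask at a fixed density and only updates (and would communicate) the unmasked weights. The magnitude heuristic and density value are illustrative, not necessarily Dis-PFL's exact rule.

```python
import numpy as np

def make_mask(weights, density=0.2):
    """Binary mask keeping the largest-magnitude `density` fraction of weights."""
    k = max(1, int(density * weights.size))
    thresh = np.sort(np.abs(weights))[-k]
    return (np.abs(weights) >= thresh).astype(weights.dtype)

rng = np.random.default_rng(0)
w = rng.normal(size=100)        # toy local model parameters
mask = make_mask(w, density=0.2)

grad = rng.normal(size=100)     # toy local gradient
w -= 0.1 * grad * mask          # only the unmasked weights are trained
print("active parameters:", int(mask.sum()), "of", w.size)
```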
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
- Communication-Efficient Federated Learning with Accelerated Client Gradient [46.81082897703729]
Federated learning often suffers from slow and unstable convergence due to the heterogeneous characteristics of participating client datasets.
We propose a simple but effective federated learning framework, which improves the consistency across clients and facilitates the convergence of the server model.
We provide the theoretical convergence rate of our algorithm and demonstrate remarkable performance gains in terms of accuracy and communication efficiency.
arXiv Detail & Related papers (2022-01-10T05:31:07Z)
- FedGP: Correlation-Based Active Client Selection for Heterogeneous Federated Learning [33.996041254246585]
We propose FedGP -- a federated learning framework built on a correlation-based client selection strategy.
We develop a Gaussian process (GP) training method that uses historical samples efficiently to reduce the communication cost.
Based on the learned correlations, we derive a client selection rule that achieves a larger reduction of the expected global loss in each round.
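A heavily simplified sketch of correlation-based selection: fit a GP-regression posterior over per-client loss reductions from a few historical observations, then pick the clients with the largest posterior mean. The RBF kernel, client features, and data below are placeholder assumptions, not FedGP's actual model.

```python
import numpy as np

def rbf(A, B, ell=1.0):
    """RBF kernel matrix between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / ell**2)

def gp_posterior_mean(X_obs, y_obs, X_all, noise=1e-2):
    """Standard GP-regression posterior mean at X_all given noisy observations."""
    K = rbf(X_obs, X_obs) + noise * np.eye(len(X_obs))
    return rbf(X_all, X_obs) @ np.linalg.solve(K, y_obs)

rng = np.random.default_rng(0)
X_all = rng.normal(size=(20, 3))              # a feature vector per client
true_gain = np.sin(X_all[:, 0])               # hidden per-client loss reduction
obs = rng.choice(20, size=6, replace=False)   # clients observed in past rounds
mu = gp_posterior_mean(X_all[obs], true_gain[obs], X_all)
print("selected clients:", np.argsort(-mu)[:4])  # largest expected reduction
```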
arXiv Detail & Related papers (2021-03-24T03:25:14Z)
- Stochastic Client Selection for Federated Learning with Volatile Clients [41.591655430723186]
Federated Learning (FL) is a privacy-preserving machine learning paradigm.
In each round of synchronous FL training, only a fraction of the available clients are chosen to participate, and chosen clients may unexpectedly fail to complete training (client volatility).
We propose E3CS, a stochastic client selection scheme that addresses this problem.
arXiv Detail & Related papers (2020-11-17T16:35:24Z)
- Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies [29.127689561987964]
Federated learning enables a large number of resource-limited client nodes to cooperatively train a model without data sharing.
We show that biasing client selection towards clients with higher local loss achieves faster error convergence.
We propose Power-of-Choice, a communication- and computation-efficient client selection framework.
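The selection rule summarized above is simple enough to sketch directly: sample a candidate set of d clients, then keep the m with the highest current local loss. The candidate-set size and the loss oracle below are illustrative stand-ins.

```python
import random

def power_of_choice(clients, local_loss, d, m, seed=None):
    """Sample a candidate set of d clients, then keep the m candidates with
    the highest current local loss (biased, loss-guided selection)."""
    rng = random.Random(seed)
    candidates = rng.sample(clients, d)
    return sorted(candidates, key=local_loss, reverse=True)[:m]

random.seed(0)
losses = {i: random.uniform(0.1, 2.0) for i in range(100)}  # toy local losses
picked = power_of_choice(list(losses), losses.get, d=10, m=3, seed=0)
print(picked, [round(losses[i], 2) for i in picked])
```

Larger d biases selection more strongly toward high-loss clients (faster convergence), while smaller d behaves closer to uniform sampling; only the d candidates need to report a loss, which keeps communication and computation low.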
arXiv Detail & Related papers (2020-10-03T01:04:17Z)
- Multi-Armed Bandit Based Client Scheduling for Federated Learning [91.91224642616882]
Federated learning (FL) offers attractive properties such as reduced communication overhead and preservation of data privacy.
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
This work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowing wireless channel state information and statistical characteristics of clients.
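In the same spirit, here is a minimal UCB1 scheduler that learns each client's unknown upload-success probability from observed outcomes alone, with no channel state information; the success probabilities and the reward definition are synthetic assumptions, not the paper's exact formulation.

```python
import math
import random

random.seed(1)
p_success = [random.uniform(0.3, 0.95) for _ in range(8)]  # unknown channel quality
mean, n = [0.0] * 8, [0] * 8

for t in range(1, 201):
    # UCB1: exploit clients with good observed channels, keep exploring the rest.
    ucb = [m_ + math.sqrt(2 * math.log(t) / n_) if n_ else float("inf")
           for m_, n_ in zip(mean, n)]
    i = max(range(8), key=lambda j: ucb[j])
    reward = 1.0 if random.random() < p_success[i] else 0.0  # did the upload succeed?
    n[i] += 1
    mean[i] += (reward - mean[i]) / n[i]

print("times scheduled per client:", n)
```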
arXiv Detail & Related papers (2020-07-05T12:32:32Z)