Greedy Shapley Client Selection for Communication-Efficient Federated
Learning
- URL: http://arxiv.org/abs/2312.09108v3
- Date: Wed, 7 Feb 2024 08:34:53 GMT
- Title: Greedy Shapley Client Selection for Communication-Efficient Federated
Learning
- Authors: Pranava Singhal, Shashi Raj Pandey, Petar Popovski
- Abstract summary: Standard client selection algorithms for Federated Learning (FL) are often unbiased and involve uniform random sampling of clients.
We develop a biased client selection strategy, GreedyFed, that identifies and greedily selects the most contributing clients in each communication round.
Compared to various client selection strategies on several real-world datasets, GreedyFed demonstrates fast and stable convergence with high accuracy under timing constraints.
- Score: 32.38170282930876
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The standard client selection algorithms for Federated Learning (FL) are
often unbiased and involve uniform random sampling of clients. This has been
shown to be sub-optimal for fast convergence in practical settings
characterized by significant heterogeneity in data distribution, computing,
and communication resources across clients. For applications with timing
constraints due to
limited communication opportunities with the parameter server (PS), the client
selection strategy is critical to complete model training within the fixed
budget of communication rounds. To address this, we develop a biased client
selection strategy, GreedyFed, that identifies and greedily selects the most
contributing clients in each communication round. This method builds on a fast
approximation algorithm for the Shapley Value at the PS, making the computation
tractable for real-world applications with many clients. Compared to various
client selection strategies on several real-world datasets, GreedyFed
demonstrates fast and stable convergence with high accuracy under timing
constraints and under higher degrees of heterogeneity in data distribution,
system constraints, and privacy requirements.
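The selection mechanism outlined in the abstract, Shapley-value estimation at the PS followed by a greedy pick of the highest-contributing clients, is easy to prototype. Below is a minimal sketch, not the paper's exact algorithm: it assumes flat parameter vectors, a plain Monte Carlo permutation estimator for the Shapley values, and a held-out validation utility at the server; `shapley_mc`, `greedy_select`, and `explore_rounds` are illustrative names.

```python
import numpy as np

def shapley_mc(global_model, updates, utility, num_perms=20, rng=None):
    """Monte Carlo permutation estimate of each participant's Shapley value.

    global_model: flat parameter vector (np.ndarray)
    updates:      dict client_id -> flat model update of the same shape
    utility:      callable(model) -> scalar, e.g. validation accuracy at the PS
    """
    rng = rng or np.random.default_rng()
    ids = list(updates)
    phi = dict.fromkeys(ids, 0.0)
    for _ in range(num_perms):
        perm = rng.permutation(ids)
        running_sum = np.zeros_like(global_model)
        prev = utility(global_model)  # utility of the empty coalition
        for i, c in enumerate(perm, start=1):
            running_sum += updates[c]
            # Coalition model = global model plus the average of the first i updates.
            cur = utility(global_model + running_sum / i)
            phi[c] += (cur - prev) / num_perms  # marginal contribution of client c
            prev = cur
    return phi

def greedy_select(shapley_avg, num_select, round_idx, explore_rounds=5, rng=None):
    """Greedily pick the clients with the highest running Shapley estimates.

    Early rounds fall back to uniform sampling so that every client gets
    scored at least once before the greedy policy takes over.
    """
    rng = rng or np.random.default_rng()
    ids = list(shapley_avg)
    if round_idx < explore_rounds:
        return list(rng.choice(ids, size=num_select, replace=False))
    return sorted(ids, key=shapley_avg.get, reverse=True)[:num_select]
```

As the abstract states, GreedyFed itself builds on a fast approximation algorithm for the Shapley value rather than plain permutation sampling; the estimator above is only the simplest stand-in for the same greedy selection idea.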
Related papers
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set (a greedy sketch of this objective appears after this list).
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
- Multi-Criteria Client Selection and Scheduling with Fairness Guarantee for Federated Learning Service [17.986744632466515]
Federated Learning (FL) enables multiple clients to train machine learning models collaboratively without sharing the raw training data.
We propose a multi-criteria client selection and scheduling scheme with a fairness guarantee.
Our scheme improves model quality, especially when data are non-IID.
arXiv Detail & Related papers (2023-12-05T16:56:24Z)
- Intelligent Client Selection for Federated Learning using Cellular Automata [0.5849783371898033]
FL has emerged as a promising solution for enhancing privacy and reducing latency in various real-world applications, such as transportation, communications, and healthcare.
We propose Cellular Automaton-based Client Selection (CA-CS) as a novel client selection algorithm.
Our results demonstrate that CA-CS achieves accuracy comparable to the random selection approach, while effectively avoiding high-latency clients.
arXiv Detail & Related papers (2023-10-01T09:40:40Z)
- FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way (see the sketch after this list).
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- Communication-Efficient Federated Learning with Accelerated Client Gradient [46.81082897703729]
Federated learning often suffers from slow and unstable convergence due to the heterogeneous characteristics of participating client datasets.
We propose a simple but effective federated learning framework, which improves the consistency across clients and facilitates the convergence of the server model.
We provide the theoretical convergence rate of our algorithm and demonstrate remarkable performance gains in terms of accuracy and communication efficiency.
arXiv Detail & Related papers (2022-01-10T05:31:07Z)
- Client Selection Approach in Support of Clustered Federated Learning over Wireless Edge Networks [2.6774008509840996]
Clustered Federated Multitask Learning (CFL) was introduced as an efficient scheme to obtain reliable specialized models.
This paper proposes a new client selection algorithm that aims to accelerate the convergence rate for obtaining specialized machine learning models.
arXiv Detail & Related papers (2021-08-16T21:38:22Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
- Client Selection in Federated Learning: Convergence Analysis and Power-of-Choice Selection Strategies [29.127689561987964]
Federated learning enables a large number of resource-limited client nodes to cooperatively train a model without data sharing.
We show that biasing client selection towards clients with higher local loss achieves faster error convergence.
We propose Power-of-Choice, a communication- and computation-efficient client selection framework (sketched after this list).
arXiv Detail & Related papers (2020-10-03T01:04:17Z)
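For the first related paper above (Emulating Full Client Participation), here is a minimal sketch of the single-round objective described in its summary: greedily grow a subset whose averaged gradient best matches the full-participation average. For illustration it assumes oracle access to every client's gradient, which the paper itself avoids, and it omits the multi-round fairness constraint; `select_emulating_subset` is an illustrative name.

```python
import numpy as np

def select_emulating_subset(grads, k):
    """Greedily pick k clients whose averaged gradient best approximates the
    full-participation average gradient.

    grads: dict client_id -> flat local gradient (np.ndarray); assumes k <= len(grads)
    """
    full_mean = np.mean(list(grads.values()), axis=0)  # full-participation average
    pool, chosen, total = set(grads), [], None
    for _ in range(k):
        best, best_err = None, np.inf
        for c in pool:
            cand = grads[c] if total is None else total + grads[c]
            # Estimation error of the candidate subset's average gradient.
            err = np.linalg.norm(cand / (len(chosen) + 1) - full_mean)
            if err < best_err:
                best, best_err = c, err
        chosen.append(best)
        pool.remove(best)
        total = grads[best] if total is None else total + grads[best]
    return chosen
```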
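For the Fed-CBS entry, here is a sketch of one plausible quadratic class-imbalance measure and the greedy sampling it enables: the squared distance between the selected clients' pooled label distribution and the uniform distribution. The actual mechanism derives its measure on homomorphically encrypted label counts; this plaintext version only illustrates the objective, and `qcid` and `fed_cbs_select` are illustrative names.

```python
import numpy as np

def qcid(label_counts):
    """Quadratic class-imbalance degree: squared distance between the pooled
    label distribution and the uniform distribution over classes."""
    p = label_counts / label_counts.sum()
    return float(np.sum((p - 1.0 / len(p)) ** 2))

def fed_cbs_select(client_label_counts, k):
    """Greedy sampling sketch: grow the selected set so that the pooled label
    histogram of the grouped data stays as class-balanced as possible.

    client_label_counts: dict client_id -> per-class label counts (np.ndarray)
    """
    pool = set(client_label_counts)
    pooled = np.zeros_like(next(iter(client_label_counts.values())), dtype=float)
    chosen = []
    for _ in range(k):
        best = min(pool, key=lambda c: qcid(pooled + client_label_counts[c]))
        chosen.append(best)
        pooled += client_label_counts[best]
        pool.remove(best)
    return chosen
```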
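Finally, the Power-of-Choice entry describes a concrete two-step rule: sample a candidate set of d clients proportionally to data size, then keep the k candidates with the highest current local loss. A minimal sketch, with `power_of_choice` as an illustrative name:

```python
import numpy as np

def power_of_choice(local_losses, data_fractions, d, k, rng=None):
    """Biased client selection: sample d candidates proportionally to data
    size, then keep the k candidates with the highest local loss.

    local_losses:   dict client_id -> current local loss
    data_fractions: dict client_id -> fraction of the total training data
    Assumes k <= d <= number of clients.
    """
    rng = rng or np.random.default_rng()
    ids = list(local_losses)
    probs = np.array([data_fractions[c] for c in ids], dtype=float)
    candidates = rng.choice(ids, size=d, replace=False, p=probs / probs.sum())
    return sorted(candidates, key=lambda c: local_losses[c], reverse=True)[:k]
```

The candidate-set size d controls the bias: d = k recovers plain proportional random sampling, while larger d pushes selection harder toward high-loss clients.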