FedSS: Federated Learning with Smart Selection of clients
- URL: http://arxiv.org/abs/2207.04569v2
- Date: Tue, 26 Sep 2023 18:17:45 GMT
- Title: FedSS: Federated Learning with Smart Selection of clients
- Authors: Ammar Tahir, Yongzhou Chen, Prashanti Nilayam
- Abstract summary: Federated learning provides the ability to learn over heterogeneous user data in a distributed manner while preserving user privacy.
Our proposed idea looks to find a sweet spot between fast convergence and heterogeneity by looking at smart client selection and scheduling techniques.
- Score: 1.7265013728931
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning provides the ability to learn over heterogeneous user data
in a distributed manner while preserving user privacy. However, its current
client selection technique is a source of bias as it discriminates against slow
clients. For starters, it selects clients that satisfy certain network and
system-specific criteria, thus not selecting slow clients. Even when such
clients are included in the training process, they either struggle with the
training or are dropped altogether for being too slow. Our proposed idea looks
to find a sweet spot between fast convergence and heterogeneity by looking at
smart client selection and scheduling techniques.
Related papers
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
arXiv Detail & Related papers (2024-05-22T12:27:24Z) - Greedy Shapley Client Selection for Communication-Efficient Federated
Learning [32.38170282930876]
Standard client selection algorithms for Federated Learning (FL) are often unbiased and involve uniform random sampling of clients.
We develop a biased client selection strategy, GreedyFed, that identifies and greedily selects the most contributing clients in each communication round.
Compared to various client selection strategies on several real-world datasets, GreedyFed demonstrates fast and stable convergence with high accuracy under timing constraints.
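The greedy idea above can be sketched in two steps: estimate each client's marginal contribution, then keep the top contributors. A minimal Monte-Carlo permutation estimator of Shapley values is shown below; note that GreedyFed's actual, communication-efficient estimator differs, and the function names and the `utility` callback are illustrative assumptions, not the paper's API.

```python
import random

def mc_shapley(clients, utility, rounds=50, seed=0):
    """Monte-Carlo Shapley estimate: a client's value is its average
    marginal gain in utility over random orderings of the clients."""
    rng = random.Random(seed)
    phi = {c: 0.0 for c in clients}
    for _ in range(rounds):
        perm = clients[:]
        rng.shuffle(perm)
        prefix, prev = [], utility([])
        for c in perm:
            prefix.append(c)
            cur = utility(prefix)        # utility of the growing coalition
            phi[c] += cur - prev         # marginal gain attributed to c
            prev = cur
    return {c: v / rounds for c, v in phi.items()}

def greedy_select(clients, utility, m):
    """Greedily select the m clients with the largest estimated Shapley value."""
    phi = mc_shapley(clients, utility)
    return sorted(clients, key=phi.get, reverse=True)[:m]
```

For an additive utility (e.g. a weighted sum over clients) the estimate recovers the weights exactly; in real FL rounds the utility would be a validation-accuracy gain, which is where the communication cost the paper optimizes comes from.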
arXiv Detail & Related papers (2023-12-14T16:44:38Z) - Provably Personalized and Robust Federated Learning [47.50663360022456]
We propose simple algorithms which identify clusters of similar clients and train a personalized model per cluster.
The convergence rates of our algorithms asymptotically match those obtained if we knew the true underlying clustering of the clients, and are provably robust in the Byzantine setting.
arXiv Detail & Related papers (2023-06-14T09:37:39Z) - FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z) - MDA: Availability-Aware Federated Learning Client Selection [1.9422756778075616]
This study focuses on an FL setting called cross-device FL, which trains based on a large number of clients.
In vanilla FL, clients are selected randomly, which results in an acceptable accuracy but is not ideal from the overall training time perspective.
New client selection techniques have been proposed to improve the training time by considering individual clients' resources and speed.
arXiv Detail & Related papers (2022-11-25T22:18:24Z) - Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated
Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS)
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z) - A Snapshot of the Frontiers of Client Selection in Federated Learning [5.098446527311984]
Federated learning (FL) has been proposed as a privacy-preserving approach in distributed machine learning.
Clients are able to keep their data in their local machines and only share their locally trained model's parameters with a central server.
FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance.
arXiv Detail & Related papers (2022-09-27T10:08:18Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - Straggler-Resilient Federated Learning: Leveraging the Interplay Between
Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z) - Bandit-based Communication-Efficient Client Selection Strategies for
Federated Learning [8.627405016032615]
We present a bandit-based communication-efficient client selection strategy UCB-CS that achieves faster convergence with lower communication overhead.
We also demonstrate how client selection can be used to improve fairness.
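The bandit view treats each client as an arm: its index is an empirical mean utility plus an exploration bonus that shrinks with repeated selection. A generic UCB-style sketch follows; the exact index, reward definition, and constant `c` in UCB-CS may differ, and the function names are assumptions.

```python
import math

def ucb_scores(mean_reward, counts, t, c=2.0):
    """UCB index per client; never-selected clients score infinity
    so that every client is tried at least once."""
    scores = {}
    for k, n in counts.items():
        if n == 0:
            scores[k] = math.inf
        else:
            scores[k] = mean_reward[k] + math.sqrt(c * math.log(t) / n)
    return scores

def select_clients(mean_reward, counts, t, m):
    """Pick the m clients with the largest UCB index in round t."""
    s = ucb_scores(mean_reward, counts, t)
    return sorted(s, key=s.get, reverse=True)[:m]
```

A rarely selected client with a modest mean can outrank a frequently selected one, which is how the bonus term trades off exploration (fairness) against exploitation (fast convergence).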
arXiv Detail & Related papers (2020-12-14T23:35:03Z) - Client Selection in Federated Learning: Convergence Analysis and
Power-of-Choice Selection Strategies [29.127689561987964]
Federated learning enables a large number of resource-limited client nodes to cooperatively train a model without data sharing.
We show that biasing client selection towards clients with higher local loss achieves faster error convergence.
We propose Power-of-Choice, a communication- and computation-efficient client selection framework.
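The two-step selection described above can be sketched directly: draw a uniform candidate set of size d, then keep the m candidates with the highest local loss. The function name and signature below are illustrative, not the paper's API.

```python
import random

def power_of_choice(client_losses, d, m, seed=None):
    """Power-of-Choice style selection: sample d candidate clients
    uniformly at random, then select the m candidates with the highest
    local loss. client_losses maps client id -> latest local loss."""
    rng = random.Random(seed)
    candidates = rng.sample(list(client_losses), d)   # step 1: uniform candidate pool
    candidates.sort(key=lambda c: client_losses[c], reverse=True)
    return candidates[:m]                             # step 2: bias toward high loss
```

The candidate-pool size d interpolates between unbiased random sampling (d = m) and fully greedy high-loss selection (d = total number of clients), which is the convergence-vs-bias knob the analysis studies.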
arXiv Detail & Related papers (2020-10-03T01:04:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.