A Comprehensive Survey On Client Selections in Federated Learning
- URL: http://arxiv.org/abs/2311.06801v1
- Date: Sun, 12 Nov 2023 10:40:43 GMT
- Title: A Comprehensive Survey On Client Selections in Federated Learning
- Authors: Ala Gouissem and Zina Chkirbene and Ridha Hamila
- Abstract summary: The selection of clients to participate in the training process is a critical factor for the performance of the overall system.
We provide a comprehensive overview of the state-of-the-art client selection techniques in Federated Learning.
- Score: 3.438094543455187
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a rapidly growing field in machine learning that
allows data to be trained across multiple decentralized devices. The selection
of clients to participate in the training process is a critical factor for the
performance of the overall system. In this survey, we provide a comprehensive
overview of the state-of-the-art client selection techniques in FL, including
their strengths and limitations, as well as the challenges and open issues that
need to be addressed. We cover conventional selection techniques such as random
selection, where all clients or a random subset of clients are used for
training. We also cover performance-aware selection as well as resource-aware
selection for resource-constrained and heterogeneous networks. We also discuss
the usage of client selection in model security enhancement. Lastly, we discuss
open issues and challenges related to client selection in dynamic, constrained,
and heterogeneous networks.
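The "all or a random subset" baseline described in the abstract can be sketched in a few lines. The function name, fraction, and seed below are illustrative assumptions, not values from any surveyed paper:

```python
import random

def select_clients(client_ids, fraction=0.1, seed=None):
    """Uniformly sample a fraction of clients for one FL round.

    This is the partial-random selection baseline: every client has an
    equal chance of participating, regardless of data or resources.
    Setting fraction=1.0 recovers full participation.
    """
    rng = random.Random(seed)
    k = max(1, int(len(client_ids) * fraction))
    return rng.sample(client_ids, k)

# Example: pick 10% of 100 clients for the current round.
round_clients = select_clients(list(range(100)), fraction=0.1, seed=42)
```

Random selection is the usual reference point against which the performance-aware and resource-aware schemes surveyed below are compared.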
Related papers
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
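As a rough illustration of the single-round objective described above, the following sketch greedily picks the subset whose average gradient best approximates the full-client average. The greedy strategy and flat-gradient layout are assumptions for illustration, not the paper's actual estimator:

```python
import numpy as np

def select_by_gradient_error(grads, k):
    """Greedily choose k clients minimizing the gradient-space
    estimation error || mean(subset grads) - mean(all grads) ||.

    grads: array of shape (num_clients, dim), one flattened gradient
    per client. Returns the indices of the selected clients.
    """
    full_mean = grads.mean(axis=0)
    chosen, remaining = [], list(range(len(grads)))
    for _ in range(k):
        best, best_err = None, float("inf")
        for i in remaining:
            subset_mean = grads[chosen + [i]].mean(axis=0)
            err = np.linalg.norm(subset_mean - full_mean)
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Example: client 2's gradient already equals the full average,
# so a budget of k=1 selects it.
grads = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
chosen = select_by_gradient_error(grads, k=1)
```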
arXiv Detail & Related papers (2024-05-22T12:27:24Z) - Joint Probability Selection and Power Allocation for Federated Learning [2.9364773826704993]
We study the performance of federated learning over wireless networks, where devices with a limited energy budget train a machine learning model.
We formulate a new probabilistic approach to jointly select clients and allocate power optimally.
Our numerical results show that the proposed approach achieves significant gains in energy consumption, completion time, and accuracy compared to the studied benchmarks.
arXiv Detail & Related papers (2024-01-15T15:09:47Z) - A Systematic Literature Review on Client Selection in Federated Learning [0.0]
Federated learning (FL) was invented in 2017; in it, clients such as mobile devices train a model and send the update to a centralized server.
This SLR investigates the state of the art of client selection in FL and answers the challenges, solutions, and metrics to evaluate the solutions.
arXiv Detail & Related papers (2023-06-08T01:26:22Z) - Knowledge-Aware Federated Active Learning with Non-IID Data [75.98707107158175]
We propose a federated active learning paradigm to efficiently learn a global model with a limited annotation budget.
The main challenge faced by federated active learning is the mismatch between the active sampling goal of the global model on the server and that of the local clients.
We propose Knowledge-Aware Federated Active Learning (KAFAL), which consists of Knowledge-Specialized Active Sampling (KSAS) and Knowledge-Compensatory Federated Update (KCFU).
arXiv Detail & Related papers (2022-11-24T13:08:43Z) - Client Selection in Federated Learning: Principles, Challenges, and Opportunities [15.33636272844544]
Federated Learning (FL) is a privacy-preserving paradigm for training Machine Learning (ML) models.
In a typical FL scenario, clients exhibit significant heterogeneity in terms of data distribution and hardware configurations.
Various client selection algorithms have been developed, showing promising performance improvement.
arXiv Detail & Related papers (2022-11-03T01:51:14Z) - FedMint: Intelligent Bilateral Client Selection in Federated Learning with Newcomer IoT Devices [33.4117184364721]
Federated Learning is a novel distributed privacy-preserving learning paradigm.
It enables the collaboration among several participants (e.g., Internet of Things devices) for the training of machine learning models.
We present FedMint, an intelligent client selection approach for federated learning on IoT devices using game theory and a bootstrapping mechanism.
arXiv Detail & Related papers (2022-10-31T12:48:56Z) - Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
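A minimal plaintext sketch of a class-imbalance measure in the spirit of the Fed-CBS entry above: the squared distance of the grouped label distribution from uniform. The exact measure and its privacy-preserving derivation via homomorphic encryption are not reproduced here:

```python
from collections import Counter

def class_imbalance(labels, num_classes):
    """Squared L2 distance between the empirical class distribution of
    the grouped client data and the uniform distribution.

    Returns 0.0 for perfectly class-balanced data; larger values
    indicate stronger imbalance, so a sampler can prefer client groups
    that drive this measure toward zero.
    """
    counts = Counter(labels)
    n = len(labels)
    uniform = 1.0 / num_classes
    return sum((counts.get(c, 0) / n - uniform) ** 2
               for c in range(num_classes))

# Balanced labels score 0.0; skewed labels score higher.
balanced = class_imbalance([0, 1, 0, 1], num_classes=2)
skewed = class_imbalance([0, 0, 0, 1], num_classes=2)
```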
arXiv Detail & Related papers (2022-09-30T05:42:56Z) - A Snapshot of the Frontiers of Client Selection in Federated Learning [5.098446527311984]
Federated learning (FL) has been proposed as a privacy-preserving approach in distributed machine learning.
Clients are able to keep their data in their local machines and only share their locally trained model's parameters with a central server.
FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance.
arXiv Detail & Related papers (2022-09-27T10:08:18Z) - A Survey on Participant Selection for Federated Learning in Mobile Networks [47.88372677863646]
Federated Learning (FL) is an efficient distributed machine learning paradigm that employs private datasets in a privacy-preserving manner.
Due to limited communication bandwidth and unstable availability of such devices in a mobile network, only a fraction of end devices can be selected in each round.
arXiv Detail & Related papers (2022-07-08T04:22:48Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - Federated Continual Learning with Weighted Inter-client Transfer [79.93004004545736]
We propose a novel federated continual learning framework, Federated Weighted Inter-client Transfer (FedWeIT).
FedWeIT decomposes the network weights into global federated parameters and sparse task-specific parameters, and each client receives selective knowledge from other clients.
We validate our FedWeIT against existing federated learning and continual learning methods, and our model significantly outperforms them with a large reduction in the communication cost.
arXiv Detail & Related papers (2020-03-06T13:33:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.