A Systematic Literature Review on Client Selection in Federated Learning
- URL: http://arxiv.org/abs/2306.04862v1
- Date: Thu, 8 Jun 2023 01:26:22 GMT
- Title: A Systematic Literature Review on Client Selection in Federated Learning
- Authors: Carl Smestad and Jingyue Li (Norwegian University of Science and Technology)
- Abstract summary: Federated learning (FL), introduced in 2017, lets clients such as mobile devices train a model locally and send only the update to a centralized server. This SLR investigates the state of the art of client selection in FL, identifying the challenges, the proposed solutions, and the metrics used to evaluate those solutions.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With rising privacy concerns in machine learning, federated learning (FL) was introduced in 2017: clients, such as mobile devices, train a model locally and send only the update to a centralized server. Choosing clients randomly for FL can harm learning performance for several reasons. Many studies have proposed approaches to address the challenges of client selection in FL, but no systematic literature review (SLR) on this topic existed. This SLR investigates the state of the art of client selection in FL and identifies the challenges, the solutions, and the metrics used to evaluate those solutions. We systematically reviewed 47 primary studies. The main challenges found in client selection are heterogeneity, resource allocation, communication costs, and fairness. The client selection schemes aim to improve on the original random selection algorithm by focusing on one or several of these challenges. The most common metric is testing accuracy versus communication rounds: testing accuracy measures the success of the learning, preferably achieved in as few communication rounds as possible, since rounds are expensive. Although several improvements can be made to the current state of client selection, the most beneficial are evaluating the impact of unsuccessful clients and gaining a more theoretical understanding of the impact of fairness in FL.
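The random selection baseline that the surveyed schemes improve on can be sketched as a per-round uniform sample of clients followed by a data-size-weighted average of their updates (a minimal FedAvg-style illustration; function names and the dummy updates are ours, not from the review):

```python
import random

def random_client_selection(client_ids, fraction, seed=None):
    """Baseline FL client selection: sample a uniform random subset each round."""
    rng = random.Random(seed)
    k = max(1, int(fraction * len(client_ids)))
    return rng.sample(client_ids, k)

def fedavg_aggregate(updates, data_sizes):
    """Data-size-weighted average of client updates (updates are lists of floats)."""
    total = sum(data_sizes)
    dim = len(updates[0])
    return [sum(n * u[i] for u, n in zip(updates, data_sizes)) / total
            for i in range(dim)]

# One communication round with 10 clients and 30% participation.
clients = list(range(10))
selected = random_client_selection(clients, fraction=0.3, seed=0)
updates = [[float(c), 2.0 * float(c)] for c in selected]  # dummy local updates
global_update = fedavg_aggregate(updates, [1.0] * len(selected))
```

Each scheme in the list below replaces `random_client_selection` with a criterion targeting one or more of the challenges above.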
Related papers
- Submodular Maximization Approaches for Equitable Client Selection in Federated Learning [4.167345675621377]
In a conventional Federated Learning framework, client selection for training typically involves the random sampling of a subset of clients in each iteration.
This paper introduces two novel methods, namely SUBTRUNC and UNIONFL, designed to address the limitations of random client selection.
arXiv Detail & Related papers (2024-08-24T22:40:31Z)
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
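The single-round criterion above, choosing a subset whose aggregate gradient approximates that of the full client set, can be illustrated with a simple greedy search (our own sketch under that reading; the paper's actual estimator, optimizer, and fairness constraint may differ):

```python
import math

def mean(vectors):
    """Component-wise mean of a list of equal-length float lists."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def estimation_error(subset_mean, full_mean):
    """Euclidean distance between subset and full-set mean gradients."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(subset_mean, full_mean)))

def greedy_select(client_grads, k):
    """Greedily pick k clients whose mean gradient best approximates the
    mean gradient over all clients (illustrative, not the paper's algorithm)."""
    full_mean = mean(client_grads)
    chosen, remaining = [], list(range(len(client_grads)))
    for _ in range(k):
        best, best_err = None, float("inf")
        for c in remaining:
            err = estimation_error(mean([client_grads[i] for i in chosen + [c]]),
                                   full_mean)
            if err < best_err:
                best, best_err = c, err
        chosen.append(best)
        remaining.remove(best)
    return chosen

# Two opposing gradients cancel: selecting both reproduces the full mean exactly.
grads = [[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]]
sel = greedy_select(grads, 2)
```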
arXiv Detail & Related papers (2022-12-24T11:02:35Z)
- When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more they benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients.
arXiv Detail & Related papers (2022-12-24T11:02:35Z)
- Client Selection in Federated Learning: Principles, Challenges, and Opportunities [15.33636272844544]
Federated Learning (FL) is a privacy-preserving paradigm for training Machine Learning (ML) models.
In a typical FL scenario, clients exhibit significant heterogeneity in terms of data distribution and hardware configurations.
Various client selection algorithms have been developed, showing promising performance improvement.
arXiv Detail & Related papers (2022-11-03T01:51:14Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- A Snapshot of the Frontiers of Client Selection in Federated Learning [5.098446527311984]
Federated learning (FL) has been proposed as a privacy-preserving approach in distributed machine learning.
Clients are able to keep their data in their local machines and only share their locally trained model's parameters with a central server.
FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance.
arXiv Detail & Related papers (2022-09-27T10:08:18Z)
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method to handle this problem.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
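The core InclusiveFL idea, matching model size to device capability, reduces to a capability-to-tier mapping like the following sketch (client names, compute budgets, and tier values are invented for illustration; the paper's knowledge-sharing step between model sizes is not shown):

```python
def assign_model_sizes(capabilities, size_tiers):
    """Assign each client the largest model tier its compute budget can
    handle; clients below every tier get the smallest model, so no
    device is excluded. Thresholds and units are illustrative assumptions."""
    tiers = sorted(size_tiers)
    assignments = {}
    for client, budget in capabilities.items():
        fitting = [t for t in tiers if t <= budget]
        assignments[client] = fitting[-1] if fitting else tiers[0]
    return assignments

# Hypothetical compute budgets mapped onto three model-size tiers.
caps = {"phone": 2.0, "laptop": 8.0, "server": 50.0}
plan = assign_model_sizes(caps, size_tiers=[1.0, 5.0, 20.0])
```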
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
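The round structure just described, and why lazy clients hurt it, can be sketched as follows (our own simplification: the block-generation competition is omitted, aggregation is an equal-weight average, and a lazy client is modeled as one that rebroadcasts a stale model instead of training):

```python
def blade_fl_round(local_models, lazy_flags, stale_model):
    """One sketched BLADE-FL round: every client broadcasts a model (lazy
    clients rebroadcast a stale one), the broadcast set forms the block,
    and each client aggregates the block's models by plain averaging.
    All names and the equal-weight aggregation are illustrative assumptions."""
    block = [stale_model if lazy else m
             for m, lazy in zip(local_models, lazy_flags)]
    n, dim = len(block), len(block[0])
    return [sum(m[i] for m in block) / n for i in range(dim)]

# Three clients, one lazy: its stale contribution drags the aggregate
# away from the honest clients' trained models.
agg = blade_fl_round([[3.0], [3.0], [9.0]], [False, False, True],
                     stale_model=[0.0])
```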
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Stochastic Client Selection for Federated Learning with Volatile Clients [41.591655430723186]
Federated Learning (FL) is a privacy-preserving machine learning paradigm.
In each round of synchronous FL training, only a fraction of available clients are chosen to participate.
We propose E3CS, a client selection scheme to solve the problem.
arXiv Detail & Related papers (2020-11-17T16:35:24Z)
- Budgeted Online Selection of Candidate IoT Clients to Participate in Federated Learning [33.742677763076]
Federated Learning (FL) is an architecture in which model parameters are exchanged instead of client data.
FL trains a global model by communicating with clients over communication rounds.
We propose an online stateful FL to find the best candidate clients and an IoT client alarm application.
arXiv Detail & Related papers (2020-11-16T06:32:31Z)
- Prophet: Proactive Candidate-Selection for Federated Learning by Predicting the Qualities of Training and Reporting Phases [66.01459702625064]
In 5G networks, the training latency is still an obstacle preventing Federated Learning (FL) from being largely adopted.
One of the most fundamental problems that lead to large latency is the bad candidate-selection for FL.
In this paper, we study proactive candidate-selection for FL.
arXiv Detail & Related papers (2020-02-03T06:40:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.