Client Selection in Federated Learning: Principles, Challenges, and
Opportunities
- URL: http://arxiv.org/abs/2211.01549v2
- Date: Wed, 26 Jul 2023 15:15:58 GMT
- Title: Client Selection in Federated Learning: Principles, Challenges, and
Opportunities
- Authors: Lei Fu and Huanle Zhang and Ge Gao and Mi Zhang and Xin Liu
- Abstract summary: Federated Learning (FL) is a privacy-preserving paradigm for training Machine Learning (ML) models.
In a typical FL scenario, clients exhibit significant heterogeneity in terms of data distribution and hardware configurations.
Various client selection algorithms have been developed, showing promising performance improvements.
- Score: 15.33636272844544
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As a privacy-preserving paradigm for training Machine Learning (ML) models,
Federated Learning (FL) has received tremendous attention from both industry
and academia. In a typical FL scenario, clients exhibit significant
heterogeneity in terms of data distribution and hardware configurations. Thus,
randomly sampling clients in each training round may not fully exploit the
local updates from heterogeneous clients, resulting in lower model accuracy,
slower convergence rate, degraded fairness, etc. To tackle the FL client
heterogeneity problem, various client selection algorithms have been developed,
showing promising performance improvement. In this paper, we systematically
present recent advances in the emerging field of FL client selection and its
challenges and research opportunities. We hope to help practitioners choose the most suitable client selection mechanisms for their applications, and to inspire researchers and newcomers to better understand this exciting research topic.
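To make the baseline concrete, the sketch below shows one round of federated averaging with the uniform random client sampling that the abstract critiques. The toy model, data, and names are illustrative placeholders, not artifacts of the paper.

```python
# Minimal sketch of the random client selection baseline: each round, the
# server samples clients uniformly at random, ignoring data and hardware
# heterogeneity, then averages their local updates (FedAvg-style).
import numpy as np

rng = np.random.default_rng(0)

def local_update(global_model: np.ndarray, client_data: np.ndarray,
                 lr: float = 0.1) -> np.ndarray:
    """One local least-squares gradient step as a stand-in for client training."""
    X, y = client_data[:, :-1], client_data[:, -1]
    grad = X.T @ (X @ global_model - y) / len(y)
    return global_model - lr * grad

def fedavg_round(global_model, clients, num_selected, rng):
    # Uniform random sampling: the baseline that heterogeneity-aware
    # selection methods aim to improve upon.
    chosen = rng.choice(len(clients), size=num_selected, replace=False)
    updates = [local_update(global_model, clients[i]) for i in chosen]
    sizes = np.array([len(clients[i]) for i in chosen], dtype=float)
    # Weighted average of local models, as in FedAvg.
    return np.average(updates, axis=0, weights=sizes)

# Toy heterogeneous population: clients differ in dataset size and distribution.
clients = [rng.normal(loc=i % 3, size=(rng.integers(10, 50), 4)) for i in range(20)]
model = np.zeros(3)
for _ in range(5):
    model = fedavg_round(model, clients, num_selected=4, rng=rng)
print(model)
```

The selection methods surveyed in the paper replace the uniform `rng.choice` call with informed scoring, sampling, or filtering of clients.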
Related papers
- Submodular Maximization Approaches for Equitable Client Selection in Federated Learning [4.167345675621377]
In a conventional Federated Learning framework, client selection for training typically involves randomly sampling a subset of clients in each iteration.
This paper introduces two novel methods, namely SUBTRUNC and UNIONFL, designed to address the limitations of random client selection.
arXiv Detail & Related papers (2024-08-24T22:40:31Z)
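The summary above does not spell out the objectives, so the sketch below illustrates the general recipe behind submodular client selection: greedily maximizing a facility-location coverage function over client gradient similarities. This is a hedged, generic illustration; the exact objectives of SUBTRUNC and UNIONFL are not reproduced here.

```python
# Generic submodular client selection sketch: greedily pick clients whose
# gradients best "cover" the whole population under a facility-location
# objective. NOT the paper's SUBTRUNC or UNIONFL algorithms.
import numpy as np

def facility_location_gain(sim: np.ndarray, selected: list[int], cand: int) -> float:
    """Marginal gain of adding `cand`: how much better it covers all clients."""
    if not selected:
        return sim[cand].sum()
    current = sim[selected].max(axis=0)          # best coverage so far, per client
    return np.maximum(current, sim[cand]).sum() - current.sum()

def greedy_select(client_grads: np.ndarray, k: int) -> list[int]:
    # Cosine similarity between client gradients serves as the coverage score.
    g = client_grads / np.linalg.norm(client_grads, axis=1, keepdims=True)
    sim = g @ g.T
    selected: list[int] = []
    for _ in range(k):
        cands = [c for c in range(len(sim)) if c not in selected]
        gains = [facility_location_gain(sim, selected, c) for c in cands]
        selected.append(cands[int(np.argmax(gains))])
    return selected

rng = np.random.default_rng(1)
grads = rng.normal(size=(20, 8))                 # one (mock) gradient per client
print(greedy_select(grads, k=4))                 # indices of selected clients
```

For monotone submodular objectives, this greedy scheme carries the classical (1 - 1/e) approximation guarantee, which is what makes submodular formulations attractive for equitable selection.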
- FLASH: Federated Learning Across Simultaneous Heterogeneities [54.80435317208111]
FLASH (Federated Learning Across Simultaneous Heterogeneities) is a lightweight and flexible client selection algorithm.
It outperforms state-of-the-art FL frameworks under extensive sources of heterogeneity, achieving substantial and consistent improvements over state-of-the-art baselines.
arXiv Detail & Related papers (2024-02-13T20:04:39Z)
- Heterogeneity-Guided Client Sampling: Towards Fast and Efficient Non-IID Federated Learning [14.866327821524854]
HiCS-FL is a novel client selection method in which the server estimates the statistical heterogeneity of a client's data from the client's update to the network's output layer.
In non-IID settings, HiCS-FL achieves faster convergence than state-of-the-art FL client selection schemes.
arXiv Detail & Related papers (2023-09-30T00:29:30Z)
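The mechanism summarized above can be sketched as follows. The estimator here (a softmax over output-layer bias updates, followed by entropy) is an assumption for illustration, not HiCS-FL's exact formula; it captures the idea that the output-layer update leaks enough signal to score label skew without seeing raw data.

```python
# Hedged sketch of heterogeneity estimation from output-layer updates:
# the server maps each client's bias update to an entropy score that is
# high for balanced label distributions and low for skewed ones.
# The softmax/entropy estimator is an illustrative assumption.
import numpy as np

def estimate_label_skew(output_bias_update: np.ndarray) -> float:
    """Map an output-layer bias update to an entropy-based heterogeneity score."""
    # Classes that dominate a client's data tend to receive larger bias
    # updates; normalize the updates into a pseudo label distribution.
    z = output_bias_update - output_bias_update.max()
    p = np.exp(z) / np.exp(z).sum()
    return float(-(p * np.log(p + 1e-12)).sum())  # high = balanced, low = skewed

rng = np.random.default_rng(2)
num_classes = 10
# Mock bias updates: one balanced client and one heavily skewed client.
balanced = rng.normal(scale=0.1, size=num_classes)
skewed = rng.normal(scale=0.1, size=num_classes)
skewed[3] += 2.0
scores = {name: estimate_label_skew(b)
          for name, b in [("balanced", balanced), ("skewed", skewed)]}
print(scores)   # the server can then favor a mix of clients by these scores
```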
- A Systematic Literature Review on Client Selection in Federated Learning [0.0]
Federated learning (FL) was introduced in 2017; clients, such as mobile devices, train a model locally and send the update to a centralized server.
This SLR investigates the state of the art of client selection in FL, covering the challenges, the proposed solutions, and the metrics used to evaluate those solutions.
arXiv Detail & Related papers (2023-06-08T01:26:22Z)
- FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
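A minimal sketch of the kind of class-imbalance measure Fed-CBS minimizes, assuming a quadratic form over the pooled label distribution. The homomorphic-encryption layer that keeps per-client label counts private is omitted; plaintext counts are used purely for illustration.

```python
# Hedged sketch: score how class-imbalanced a candidate group of clients is,
# then greedily assemble a balanced group. The quadratic measure is an
# illustrative choice; Fed-CBS computes its measure under homomorphic
# encryption so the server never sees per-client label counts.
import numpy as np

def group_imbalance(label_counts: list[np.ndarray]) -> float:
    """Quadratic imbalance of the grouped data: sum_c (n_c / N)^2.

    Equals 1/C for a perfectly class-balanced group and approaches 1.0
    as the pooled data collapses onto a single class.
    """
    pooled = np.sum(label_counts, axis=0)
    p = pooled / pooled.sum()
    return float((p ** 2).sum())

def greedy_balanced_sample(all_counts: list[np.ndarray], k: int) -> list[int]:
    # Greedily add the client whose counts most reduce group imbalance.
    chosen: list[int] = []
    for _ in range(k):
        rest = [i for i in range(len(all_counts)) if i not in chosen]
        best = min(rest, key=lambda i: group_imbalance(
            [all_counts[j] for j in chosen + [i]]))
        chosen.append(best)
    return chosen

rng = np.random.default_rng(3)
# 20 clients with skewed (Dirichlet) label distributions over 10 classes.
counts = [rng.multinomial(100, rng.dirichlet(np.ones(10) * 0.3)) for _ in range(20)]
print(greedy_balanced_sample(counts, k=4))
```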
- A Snapshot of the Frontiers of Client Selection in Federated Learning [5.098446527311984]
Federated learning (FL) has been proposed as a privacy-preserving approach to distributed machine learning.
Clients keep their data on their local machines and share only their locally trained model's parameters with a central server.
FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance.
arXiv Detail & Related papers (2022-09-27T10:08:18Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data, and learns a user-specific set of parameters that yields a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
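The decomposition described above, a global shared representation plus a per-client head, can be sketched as below. The alternating least-squares and gradient updates are a generic illustration of that split, not the paper's exact procedure or its straggler-aware scheduling.

```python
# Hedged sketch: clients share a representation B while each keeps its own
# head w_i. Each round, clients refit their heads on the current B and send
# back a gradient for B, which the server averages. Toy linear setting only.
import numpy as np

rng = np.random.default_rng(4)
d, r, n_clients = 8, 2, 5
B_true = rng.normal(size=(d, r))
clients = []
for _ in range(n_clients):
    X = rng.normal(size=(50, d))
    w = rng.normal(size=r)                       # each client has its own head
    clients.append((X, X @ B_true @ w))

B = rng.normal(size=(d, r))                      # shared representation (global)
heads = [np.zeros(r) for _ in range(n_clients)]  # personalized parameters

for _ in range(200):
    grad_B = np.zeros_like(B)
    for i, (X, y) in enumerate(clients):
        # Local step: refit the personal head on the current representation...
        Z = X @ B
        heads[i] = np.linalg.lstsq(Z, y, rcond=None)[0]
        # ...then contribute a gradient for the shared representation.
        resid = Z @ heads[i] - y
        grad_B += X.T @ np.outer(resid, heads[i]) / len(y)
    B -= 0.01 * grad_B / n_clients               # server averages and updates

# Mean per-client MSE after training (should be small on this toy problem).
print(np.mean([np.mean((X @ B @ w - y) ** 2) for (X, y), w in zip(clients, heads)]))
```

Because only the small head is personal, slow clients can still contribute to the shared representation, which is the intuition behind the straggler resilience claimed above.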
- To Federate or Not To Federate: Incentivizing Client Participation in Federated Learning [22.3101738137465]
Federated learning (FL) facilitates collaboration between a group of clients who seek to train a common machine learning model.
In this paper, we propose an algorithm called IncFL that explicitly maximizes the fraction of clients who are incentivized to use the global model.
arXiv Detail & Related papers (2022-05-30T04:03:31Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme: Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
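The two variants differ only in where the augmentation runs, which the sketch below makes explicit. The zero_shot_augment stub is hypothetical (the paper synthesizes samples from the model without access to raw client data); everything else is a toy FL loop.

```python
# Hedged sketch contrasting the two variants: Fed-ZDAC augments at the
# clients before local training; Fed-ZDAS augments at the server after
# aggregation. The augmentation stub is hypothetical, not the paper's method.
import numpy as np

rng = np.random.default_rng(5)

def zero_shot_augment(model: np.ndarray, rare_classes: list[int],
                      n: int = 16) -> np.ndarray:
    """Hypothetical stand-in: emit synthetic feature vectors for rare classes.

    This stub ignores rare_classes and only matches feature dimensionality.
    """
    return rng.normal(size=(n, model.shape[0]))

def local_train(model: np.ndarray, data: np.ndarray) -> np.ndarray:
    # Placeholder local step: nudge the model toward the data mean.
    return model - 0.01 * data.mean(axis=0)

def fed_zdac_round(global_model, client_datasets, rare_classes):
    # Fed-ZDAC: each client pads its own under-represented classes first.
    updates = []
    for data in client_datasets:
        synth = zero_shot_augment(global_model, rare_classes)
        updates.append(local_train(global_model, np.vstack([data, synth])))
    return np.mean(updates, axis=0)

def fed_zdas_round(global_model, client_datasets, rare_classes):
    # Fed-ZDAS: clients train as usual; the server augments after aggregation.
    updates = [local_train(global_model, d) for d in client_datasets]
    aggregated = np.mean(updates, axis=0)
    synth = zero_shot_augment(aggregated, rare_classes)
    return local_train(aggregated, synth)        # one server-side refinement pass

model = np.zeros(4)
data = [rng.normal(size=(30, 4)) for _ in range(3)]
print(fed_zdac_round(model, data, rare_classes=[1]))
print(fed_zdas_round(model, data, rare_classes=[1]))
```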