A Snapshot of the Frontiers of Client Selection in Federated Learning
- URL: http://arxiv.org/abs/2210.04607v1
- Date: Tue, 27 Sep 2022 10:08:18 GMT
- Title: A Snapshot of the Frontiers of Client Selection in Federated Learning
- Authors: Gergely Dániel Németh, Miguel Ángel Lozano, Novi Quadrianto, Nuria Oliver
- Abstract summary: Federated learning (FL) has been proposed as a privacy-preserving approach in distributed machine learning.
Clients are able to keep their data in their local machines and only share their locally trained model's parameters with a central server.
FL has delivered promising results in real-life scenarios, such as healthcare, energy, and finance.
- Score: 5.098446527311984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) has been proposed as a privacy-preserving approach in
distributed machine learning. A federated learning architecture consists of a
central server and a number of clients that have access to private, potentially
sensitive data. Clients are able to keep their data in their local machines and
only share their locally trained model's parameters with a central server that
manages the collaborative learning process. FL has delivered promising results
in real-life scenarios, such as healthcare, energy, and finance. However, when
the number of participating clients is large, the overhead of managing the
clients slows down the learning. Thus, client selection has been introduced as
a strategy to limit the number of communicating parties at every step of the
process. Since the early naïve random selection of clients, several client
selection methods have been proposed in the literature. Unfortunately, given
that this is an emergent field, there is a lack of a taxonomy of client
selection methods, making it hard to compare approaches. In this paper, we
propose a taxonomy of client selection in Federated Learning that enables us to
shed light on current progress in the field and identify potential areas of
future research in this promising area of machine learning.
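To make the role of client selection concrete, here is a minimal Python sketch (not from the paper) of a federated averaging loop with the naïve random selection the abstract mentions; the `ToyClient` class and the flat-parameter model are illustrative assumptions.

```python
import random

import numpy as np

class ToyClient:
    """Illustrative client: one 'training step' pulls the parameters
    toward this client's local data mean."""
    def __init__(self, data_mean, num_samples):
        self.data_mean = data_mean
        self.num_samples = num_samples

    def local_train(self, params):
        return params + 0.1 * (self.data_mean - params), self.num_samples

def fedavg(params, clients, rounds=10, clients_per_round=5, seed=0):
    """FedAvg with naive random client selection: only a subset of the
    clients communicates with the server in each round."""
    rng = random.Random(seed)
    params = np.asarray(params, dtype=float)
    for _ in range(rounds):
        # Client selection limits the communicating parties this round.
        cohort = rng.sample(clients, k=min(clients_per_round, len(clients)))
        updates, weights = [], []
        for client in cohort:
            local_params, n = client.local_train(params.copy())
            updates.append(local_params)
            weights.append(n)
        # Server aggregates, weighting by each client's data size.
        params = np.average(np.stack(updates), axis=0, weights=weights)
    return params

clients = [ToyClient(np.full(3, float(i)), num_samples=10 * (i + 1))
           for i in range(20)]
final_params = fedavg(np.zeros(3), clients)
```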
Related papers
- Cohort Squeeze: Beyond a Single Communication Round per Cohort in Cross-Device Federated Learning [51.560590617691005]
We investigate whether it is possible to "squeeze more juice" out of each cohort than what is possible in a single communication round.
Our approach leads to up to 74% reduction in the total communication cost needed to train an FL model in the cross-device setting.
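A minimal sketch of the cohort-reuse idea, assuming the same `local_train` client interface as the FedAvg sketch above; the paper's actual update rule differs, this only illustrates running several aggregation rounds per sampled cohort.

```python
import random

import numpy as np

def multi_round_cohorts(params, clients, num_cohorts=5,
                        rounds_per_cohort=3, cohort_size=5, seed=0):
    """Run several communication rounds against each sampled cohort
    before paying the cost of sampling and contacting a new one."""
    rng = random.Random(seed)
    params = np.asarray(params, dtype=float)
    for _ in range(num_cohorts):
        cohort = rng.sample(clients, k=min(cohort_size, len(clients)))
        for _ in range(rounds_per_cohort):
            # Reuse the same cohort for multiple aggregation rounds
            # (unweighted mean kept for simplicity).
            updates = [c.local_train(params.copy())[0] for c in cohort]
            params = np.mean(np.stack(updates), axis=0)
    return params
```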
arXiv Detail & Related papers (2024-06-03T08:48:49Z) - Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z) - A Comprehensive Survey On Client Selections in Federated Learning [3.438094543455187]
The selection of clients to participate in the training process is a critical factor for the performance of the overall system.
We provide a comprehensive overview of the state-of-the-art client selection techniques in Federated Learning.
arXiv Detail & Related papers (2023-11-12T10:40:43Z) - Heterogeneity-Guided Client Sampling: Towards Fast and Efficient Non-IID Federated Learning [14.866327821524854]
HiCS-FL is a novel client selection method in which the server estimates statistical heterogeneity of a client's data using the client's update of the network's output layer.
In non-IID settings, HiCS-FL achieves faster convergence than state-of-the-art FL client selection schemes.
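A rough sketch of the flavor of this estimate, not the published algorithm: treat the per-class magnitudes of a client's output-layer update as a fingerprint of its label distribution, and score clients by the entropy of that fingerprint.

```python
import numpy as np

def heterogeneity_scores(output_layer_updates):
    """Entropy of the normalized per-class |update| of each client's
    output layer. Skewed (non-IID) local data tends to concentrate the
    update on few classes, giving low entropy. Illustrative proxy only."""
    scores = []
    for delta in output_layer_updates:  # one vector per client, length = #classes
        p = np.abs(delta) / (np.abs(delta).sum() + 1e-12)
        scores.append(-(p * np.log(p + 1e-12)).sum())
    return np.asarray(scores)

def sample_cohort(scores, k, seed=0):
    """Turn the scores into selection probabilities via a softmax so the
    cohort mixes clients with different estimated heterogeneity."""
    rng = np.random.default_rng(seed)
    p = np.exp(scores - scores.max())
    p /= p.sum()
    return rng.choice(len(scores), size=k, replace=False, p=p)
```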
arXiv Detail & Related papers (2023-09-30T00:29:30Z) - FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data-uniform sampling strategy for federated learning (FedSampling).
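The published method samples at the level of individual training examples, using a privacy-preserving estimate of the total data size; the sketch below captures only the underlying intuition that sampling should be uniform over data rather than over clients.

```python
import numpy as np

def client_probs_data_uniform(client_data_sizes):
    """Select clients with probability proportional to local data size,
    so every training example has roughly the same chance of shaping the
    round. Plain uniform client sampling would over-weight examples held
    by small clients."""
    sizes = np.asarray(client_data_sizes, dtype=float)
    return sizes / sizes.sum()

# A client holding 1000 examples becomes 10x more likely to be sampled
# than one holding 100.
print(client_probs_data_uniform([100, 1000, 400]))  # [0.0667 0.6667 0.2667]
```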
arXiv Detail & Related papers (2023-06-25T13:38:51Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
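A minimal InfoNCE-style sketch of a contrastive loss over shared representations; the pairing of local and peer embeddings of the same samples is an assumption for illustration, not necessarily the paper's exact objective.

```python
import numpy as np

def contrastive_loss(z_local, z_peer, temperature=0.5):
    """InfoNCE-style loss: pull each sample's local embedding toward the
    peer embedding of the same sample (diagonal pairs) and push it away
    from embeddings of other samples."""
    z1 = z_local / np.linalg.norm(z_local, axis=1, keepdims=True)
    z2 = z_peer / np.linalg.norm(z_peer, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Example: 8 samples with 16-dimensional embeddings from two parties.
rng = np.random.default_rng(0)
loss = contrastive_loss(rng.normal(size=(8, 16)), rng.normal(size=(8, 16)))
```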
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Client Selection in Federated Learning: Principles, Challenges, and Opportunities [15.33636272844544]
Federated Learning (FL) is a privacy-preserving paradigm for training Machine Learning (ML) models.
In a typical FL scenario, clients exhibit significant heterogeneity in terms of data distribution and hardware configurations.
Various client selection algorithms have been developed, showing promising performance improvement.
arXiv Detail & Related papers (2022-11-03T01:51:14Z) - Federated Select: A Primitive for Communication- and Memory-Efficient Federated Learning [4.873569522869751]
Federated learning (FL) is a framework for machine learning across heterogeneous client devices.
We propose a more general procedure in which clients "select" what values are sent to them.
This allows clients to operate on smaller, data-dependent slices.
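A toy sketch of the select primitive with a hypothetical server-side lookup table: each client names the keys it needs and the server transmits only that slice, never the full model state.

```python
import numpy as np

class SliceClient:
    """Toy client that only needs a few rows of a large server-side table."""
    def __init__(self, keys):
        self.keys = keys

    def select_keys(self):
        return self.keys                  # data-dependent selection

    def local_work(self, table_slice):
        return sum(table_slice.values())  # stand-in for local training

server_table = {i: np.full(4, float(i)) for i in range(10_000)}
clients = [SliceClient([3, 17]), SliceClient([42])]
for client in clients:
    # The server sends only the selected rows, not the full table.
    table_slice = {k: server_table[k] for k in client.select_keys()}
    client.local_work(table_slice)
```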
arXiv Detail & Related papers (2022-08-19T16:26:03Z) - FedSS: Federated Learning with Smart Selection of clients [1.7265013728931]
Federated learning provides the ability to learn over heterogeneous user data in a distributed manner while preserving user privacy.
Our proposed idea seeks a sweet spot between fast convergence and heterogeneity through smart client selection and scheduling techniques.
arXiv Detail & Related papers (2022-07-10T23:55:47Z) - Federated Multi-Target Domain Adaptation [99.93375364579484]
Federated learning methods enable us to train machine learning models on distributed user data while preserving its privacy.
We consider a more practical scenario where the distributed client data is unlabeled, and a centralized labeled dataset is available on the server.
We propose an effective DualAdapt method to address the new challenges.
arXiv Detail & Related papers (2021-08-17T17:53:05Z) - Exploiting Shared Representations for Personalized Federated Learning [54.65133770989836]
We propose a novel federated learning framework and algorithm for learning a shared data representation across clients and unique local heads for each client.
Our algorithm harnesses the distributed computational power across clients to perform many local-updates with respect to the low-dimensional local parameters for every update of the representation.
This result is of interest beyond federated learning to a broad class of problems in which we aim to learn a shared low-dimensional representation among data distributions.
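A linear-model sketch of the alternating scheme described above, assuming a toy least-squares objective: many cheap updates of the client-specific head, one update of the shared representation, and server averaging of the representation only.

```python
import numpy as np

class RepClient:
    """Toy client fitting y ~ head @ (W @ x) on its local data, where W is
    the shared representation and `head` is the client-specific part."""
    def __init__(self, X, y, head_dim, seed=0):
        self.X, self.y = X, y
        self.head = np.random.default_rng(seed).normal(size=head_dim)

    def residual(self, W):
        return self.X @ W.T @ self.head - self.y

    def head_grad(self, W):
        return (self.X @ W.T).T @ self.residual(W) / len(self.y)

    def rep_grad(self, W):
        return np.outer(self.head, self.residual(W) @ self.X) / len(self.y)

def fedrep_round(W, clients, head_steps=10, lr=0.1):
    new_Ws = []
    for c in clients:
        W_local = W.copy()
        for _ in range(head_steps):          # many low-dimensional head updates
            c.head -= lr * c.head_grad(W_local)
        W_local -= lr * c.rep_grad(W_local)  # one shared-representation update
        new_Ws.append(W_local)
    return np.mean(np.stack(new_Ws), axis=0)  # server averages W only

rng = np.random.default_rng(1)
clients = [RepClient(rng.normal(size=(50, 8)), rng.normal(size=50),
                     head_dim=3, seed=i) for i in range(4)]
W = rng.normal(size=(3, 8))
for _ in range(20):
    W = fedrep_round(W, clients)  # heads stay local and personalized
```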
arXiv Detail & Related papers (2021-02-14T05:36:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.