Multi-Criteria Client Selection and Scheduling with Fairness Guarantee for Federated Learning Service
- URL: http://arxiv.org/abs/2312.14941v1
- Date: Tue, 5 Dec 2023 16:56:24 GMT
- Title: Multi-Criteria Client Selection and Scheduling with Fairness Guarantee for Federated Learning Service
- Authors: Meiying Zhang, Huan Zhao, Sheldon Ebron, Ruitao Xie, Kan Yang
- Abstract summary: Federated Learning (FL) enables multiple clients to train machine learning models collaboratively without sharing the raw training data.
We propose a multi-criteria client selection and scheduling scheme with a fairness guarantee.
Our scheme improves model quality, especially when the data are non-iid.
- Score: 17.986744632466515
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) enables multiple clients to train machine learning
models collaboratively without sharing the raw training data. However, for a
given FL task, how to select a group of appropriate clients fairly becomes a
challenging problem due to budget restrictions and client heterogeneity. In
this paper, we propose a multi-criteria client selection and scheduling scheme
with a fairness guarantee, comprising two stages: 1) preliminary client pool
selection, and 2) per-round client scheduling. Specifically, we first define a
client selection metric informed by several criteria, such as client resources,
data quality, and client behaviors. Then, we formulate initial client pool
selection as an optimization problem that maximizes the overall score of the
selected clients within a given budget, and propose a greedy algorithm to
solve it. To guarantee fairness, we further formulate the per-round client
scheduling problem and propose a heuristic algorithm that divides the client
pool into several subsets such that every client is selected at least once,
while guaranteeing that the 'integrated' dataset of each subset is close to
independent and identically distributed (iid). Our experimental results show
that our scheme improves model quality, especially when the data are non-iid.
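The two-stage scheme is easy to illustrate in code. Below is a minimal Python sketch of the first stage under stated assumptions: the paper's exact scoring and cost model are not given here, so the score-per-cost greedy rule, the Client fields, and the budget semantics are illustrative, not the authors' algorithm.

```python
# Stage 1 (sketch): greedy client pool selection under a budget.
# The scoring function, cost model, and field names are assumptions.
from dataclasses import dataclass

@dataclass
class Client:
    cid: int
    score: float  # multi-criteria score (resources, data quality, behavior)
    cost: float   # cost of recruiting this client for the FL task

def greedy_pool_selection(clients: list[Client], budget: float) -> list[Client]:
    """Pick clients in descending score-per-cost order until the budget runs out."""
    pool, remaining = [], budget
    for c in sorted(clients, key=lambda c: c.score / c.cost, reverse=True):
        if c.cost <= remaining:
            pool.append(c)
            remaining -= c.cost
    return pool

# Example: four candidates, budget of 10.
candidates = [Client(0, 8.0, 4.0), Client(1, 6.0, 2.0),
              Client(2, 5.0, 5.0), Client(3, 3.0, 3.0)]
print([c.cid for c in greedy_pool_selection(candidates, budget=10.0)])
```

The second stage can be sketched the same way. The heuristic below assigns every client in the pool to exactly one subset (so each is scheduled at least once) while greedily keeping each subset's aggregate label histogram close to uniform, a common proxy for "near-iid". The uniform target, the Euclidean distance, and the fixed subset size are assumptions; the paper's heuristic may differ in all three.

```python
# Stage 2 (sketch): partition the pool into near-iid subsets.
import numpy as np

def near_iid_schedule(label_counts: dict[int, np.ndarray], subset_size: int):
    """label_counts maps client id -> per-class sample counts."""
    unassigned = set(label_counts)
    num_classes = len(next(iter(label_counts.values())))
    uniform = np.full(num_classes, 1.0 / num_classes)
    subsets = []
    while unassigned:
        subset, agg = [], np.zeros(num_classes)
        while unassigned and len(subset) < subset_size:
            def dist(c):  # distance to uniform if client c joins the subset
                total = agg + label_counts[c]
                return np.linalg.norm(total / total.sum() - uniform)
            best = min(unassigned, key=dist)
            subset.append(best)
            agg += label_counts[best]
            unassigned.remove(best)
        subsets.append(subset)
    return subsets
```

A production scheduler would also need to respect per-round capacity and client availability, which this sketch ignores.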
Related papers
- Submodular Maximization Approaches for Equitable Client Selection in Federated Learning [4.167345675621377]
In a conventional Federated Learning framework, client selection for training typically involves randomly sampling a subset of clients in each iteration.
This paper introduces two novel methods, namely SUBTRUNC and UNIONFL, designed to address the limitations of random client selection.
arXiv Detail & Related papers (2024-08-24T22:40:31Z)
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set (a sketch of this idea appears after this list).
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
- Greedy Shapley Client Selection for Communication-Efficient Federated Learning [32.38170282930876]
Standard client selection algorithms for Federated Learning (FL) are often unbiased and involve uniform random sampling of clients.
We develop a biased client selection strategy, GreedyFed, that identifies and greedily selects the most contributing clients in each communication round (sketched after this list).
Compared to various client selection strategies on several real-world datasets, GreedyFed demonstrates fast and stable convergence with high accuracy under timing constraints.
arXiv Detail & Related papers (2023-12-14T16:44:38Z)
- Near-optimal Differentially Private Client Selection in Federated Settings [3.58439716487063]
We consider a federated network wherein clients coordinate with a central server to complete a task.
The clients decide whether or not to participate at each time step, based on their preferences.
The developed algorithm provides near-optimal values of long-term average participation to the clients.
arXiv Detail & Related papers (2023-10-13T19:32:50Z)
- FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on this key observation, we design an efficient client sampling mechanism, Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way (a simplified sketch appears after this list).
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g. mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that treats client groups and individual clients within a unified optimization framework.
arXiv Detail & Related papers (2022-02-13T02:39:19Z)
- Client Selection Approach in Support of Clustered Federated Learning over Wireless Edge Networks [2.6774008509840996]
Clustered Federated Multitask Learning (CFL) was introduced as an efficient scheme to obtain reliable specialized models.
This paper proposes a new client selection algorithm that aims to accelerate the convergence rate for obtaining specialized machine learning models.
arXiv Detail & Related papers (2021-08-16T21:38:22Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
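As flagged above, a few of the listed selection rules are concrete enough to sketch. All three snippets below are hedged reconstructions from the one-line summaries, not the papers' implementations; every helper name and parameter is an assumption.

For the long-term selection strategy that emulates full participation, the single-round rule can be read as subset selection minimizing gradient-space estimation error. A greedy forward-selection stand-in:

```python
import numpy as np

def select_by_gradient_error(grads: np.ndarray, k: int) -> list[int]:
    """grads: (num_clients, dim) per-client gradient estimates.
    Greedily pick k clients whose mean gradient best matches the full mean."""
    full_mean = grads.mean(axis=0)
    chosen: list[int] = []
    for _ in range(k):
        remaining = [i for i in range(len(grads)) if i not in chosen]
        best = min(remaining, key=lambda i: np.linalg.norm(
            grads[chosen + [i]].mean(axis=0) - full_mean))
        chosen.append(best)
    return chosen
```

For GreedyFed, the summary says the server greedily selects the most contributing clients each round. A minimal stand-in keeps a running per-client contribution estimate (the paper uses a Shapley-value estimator; the moving-average update here is an assumption) and takes the top k:

```python
import heapq

def update_contribution(contribution: dict, cid: int,
                        marginal_gain: float, alpha: float = 0.9) -> None:
    """Exponential moving average of each client's observed marginal gain."""
    contribution[cid] = alpha * contribution.get(cid, 0.0) + (1 - alpha) * marginal_gain

def greedy_select(contribution: dict, k: int) -> list:
    """Pick the k clients with the largest contribution estimates."""
    return heapq.nlargest(k, contribution, key=contribution.get)
```

For Fed-CBS, one simple class-imbalance measure is the squared distance between the grouped label distribution and the uniform distribution (the paper's exact measure and its homomorphic-encryption evaluation are not reproduced here):

```python
import numpy as np

def class_imbalance(label_counts: np.ndarray) -> float:
    """0 when the grouped data are perfectly class-balanced."""
    p = label_counts / label_counts.sum()
    return float(((p - 1.0 / len(p)) ** 2).sum())

def pick_balanced_group(clients: dict[int, np.ndarray], k: int) -> list[int]:
    """Greedily grow a group of k clients minimizing the imbalance measure."""
    chosen, agg = [], None
    for _ in range(k):
        remaining = [c for c in clients if c not in chosen]
        best = min(remaining, key=lambda c: class_imbalance(
            clients[c] if agg is None else agg + clients[c]))
        agg = clients[best] if agg is None else agg + clients[best]
        chosen.append(best)
    return chosen
```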
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.