Stochastic Client Selection for Federated Learning with Volatile Clients
- URL: http://arxiv.org/abs/2011.08756v3
- Date: Sat, 12 Feb 2022 08:31:52 GMT
- Title: Stochastic Client Selection for Federated Learning with Volatile Clients
- Authors: Tiansheng Huang, Weiwei Lin, Li Shen, Keqin Li, and Albert Y. Zomaya
- Abstract summary: Federated Learning (FL) is a privacy-preserving machine learning paradigm.
In each round of synchronous FL training, only a fraction of available clients are chosen to participate.
We propose E3CS, a stochastic client selection scheme, to solve the problem.
- Score: 41.591655430723186
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL), arising as a privacy-preserving machine learning
paradigm, has received notable attention from the public. In each round of
synchronous FL training, only a fraction of available clients are chosen to
participate, and the selection decision might have a significant effect on the
training efficiency, as well as the final model performance. In this paper, we
investigate the client selection problem in a volatile context, in which the
local training of heterogeneous clients is prone to failure for various
reasons and at varying frequencies. Intuitively, too many training failures
reduce training efficiency, while over-selecting the more stable clients
introduces selection bias and thereby degrades training effectiveness. To
tackle this tradeoff, we formulate the client selection problem under joint
consideration of effective participation and fairness. Further, we propose
E3CS, a stochastic client selection scheme to solve the problem, and we
corroborate its effectiveness by conducting real data-based experiments.
According to our experimental results, the proposed selection scheme achieves
up to 2x faster convergence to a fixed model accuracy while maintaining the
same level of final model accuracy, compared with state-of-the-art selection
schemes.
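As a rough illustration of the idea, here is a minimal Python sketch of one stochastic selection round in the spirit of E3CS: selection probabilities blend a uniform fairness quota with an exponential-weight term driven by each client's observed success. The blend factor `sigma`, the learning rate `eta`, and the independent Bernoulli rounding are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def select_clients(weights, k, sigma, rng):
    """One stochastic selection round (illustrative sketch).

    weights: per-client exponential weights reflecting observed
             effective participation (higher = more reliable so far).
    k:       expected number of clients to select this round.
    sigma:   fairness factor in [0, 1]; the fraction of probability
             mass spread uniformly so every client keeps a floor.
    """
    n = len(weights)
    # Convex combination of a uniform fairness quota and a
    # weight-proportional exploitation term.
    probs = sigma * (k / n) + (1 - sigma) * k * weights / weights.sum()
    probs = np.clip(probs, 0.0, 1.0)
    # Independent Bernoulli rounding keeps the sketch short; a
    # dependent-rounding step would select exactly k clients instead.
    chosen = np.flatnonzero(rng.random(n) < probs)
    return chosen, probs

def update_weights(weights, chosen, succeeded, probs, eta):
    """Bandit-style update: reward 1 if a chosen client finished its
    local training, importance-weighted by selection probability."""
    for i in chosen:
        r = 1.0 if succeeded[i] else 0.0
        weights[i] *= np.exp(eta * r / probs[i])
    return weights

# Example: 100 volatile clients, select ~10 per round.
rng = np.random.default_rng(0)
w = np.ones(100)
chosen, p = select_clients(w, k=10, sigma=0.5, rng=rng)
succeeded = rng.random(100) < 0.8          # simulated client volatility
w = update_weights(w, chosen, succeeded, p, eta=0.1)
```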
Related papers
- Submodular Maximization Approaches for Equitable Client Selection in Federated Learning [4.167345675621377]
In a conventional Federated Learning framework, client selection for training typically involves the random sampling of a subset of clients in each iteration.
This paper introduces two novel methods, namely SUBTRUNC and UNIONFL, designed to address the limitations of random client selection.
arXiv Detail & Related papers (2024-08-24T22:40:31Z)
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
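A minimal sketch of the single-round criterion, under assumed notation: greedily grow the subset so that its mean update stays close to the full-population mean. The greedy strategy and the plain Euclidean norm are illustrative simplifications of the paper's estimator, and the fairness constraint is omitted.

```python
import numpy as np

def greedy_subset(grads, k):
    """Greedily choose k clients whose average gradient best matches
    the average over all clients (illustrative sketch only).

    grads: (n_clients, dim) array of per-client update vectors.
    """
    target = grads.mean(axis=0)          # full-participation direction
    chosen, total = [], np.zeros(grads.shape[1])
    for _ in range(k):
        best, best_err = None, np.inf
        for i in range(len(grads)):
            if i in chosen:
                continue
            # Error if client i joined the current subset.
            err = np.linalg.norm((total + grads[i]) / (len(chosen) + 1) - target)
            if err < best_err:
                best, best_err = i, err
        chosen.append(best)
        total += grads[best]
    return chosen

# Example: 20 clients with 5-dimensional updates, select 4.
rng = np.random.default_rng(1)
print(greedy_subset(rng.normal(size=(20, 5)), k=4))
```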
arXiv Detail & Related papers (2024-05-22T12:27:24Z)
- Ranking-based Client Selection with Imitation Learning for Efficient Federated Learning [20.412469498888292]
Federated Learning (FL) enables multiple devices to collaboratively train a shared model.
The selection of participating devices in each training round critically affects both the model performance and training efficiency.
We introduce a novel device selection solution called FedRank, which is an end-to-end, ranking-based approach.
arXiv Detail & Related papers (2024-05-07T08:44:29Z)
- Joint Probability Selection and Power Allocation for Federated Learning [2.9364773826704993]
We study the performance of federated learning over wireless networks, where devices with a limited energy budget train a machine learning model.
We formulate a new probabilistic approach to jointly select clients and allocate power optimally.
Our numerical results show that the proposed approach yields significant gains in energy consumption, completion time, and accuracy compared with the studied benchmarks.
arXiv Detail & Related papers (2024-01-15T15:09:47Z)
- FilFL: Client Filtering for Optimized Client Participation in Federated Learning [71.46173076298957]
Federated learning enables clients to collaboratively train a model without exchanging local data.
Clients participating in the training process significantly impact the convergence rate, learning efficiency, and model generalization.
We propose a novel approach, client filtering, to improve model generalization and optimize client participation and training.
arXiv Detail & Related papers (2023-02-13T18:55:31Z)
- When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more they benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients.
arXiv Detail & Related papers (2022-12-24T11:02:35Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
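One plausible form of such a measure, sketched here in the clear (the paper derives a comparable quantity under homomorphic encryption): the quadratic imbalance of the pooled label distribution, minimized when the grouped data is class-balanced. The names and the exact formula below are illustrative assumptions.

```python
import numpy as np

def class_imbalance(label_counts_per_client, chosen):
    """Quadratic class-imbalance of the data grouped from the chosen
    clients: sum_c p_c^2, minimized (= 1/C) when the pooled label
    distribution over C classes is uniform.

    label_counts_per_client: (n_clients, n_classes) integer array.
    """
    grouped = label_counts_per_client[chosen].sum(axis=0)
    p = grouped / grouped.sum()
    return float(np.sum(p ** 2))

# Example: the measure prefers subsets whose pooled labels are balanced.
counts = np.array([[90, 10], [10, 90], [50, 50]])
print(class_imbalance(counts, [0]))      # skewed pool  -> 0.82
print(class_imbalance(counts, [0, 1]))   # balanced pool -> 0.5
```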
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two key hurdles in this setting: straggling devices and statistical (non-IID) data heterogeneity.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- An Efficiency-boosting Client Selection Scheme for Federated Learning with Fairness Guarantee [36.07970788489]
Federated Learning is a new paradigm to cope with the privacy issue by allowing clients to perform model training locally.
The client selection policy is critical to an FL process in terms of training efficiency, the final model's quality as well as fairness.
In this paper, we model fairness-guaranteed client selection as a Lyapunov optimization problem and propose a C2MAB-based method to estimate the model exchange time.
arXiv Detail & Related papers (2020-11-03T15:27:02Z)
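A minimal sketch of the drift-plus-penalty flavor of such a scheme, assuming a per-client virtual queue that tracks the fairness deficit and a bandit-style estimate of per-client utility; the scoring rule, parameters, and queue dynamics below are illustrative, not the paper's exact algorithm.

```python
import numpy as np

def lyapunov_round(queues, est_rate, beta, k, min_share):
    """One drift-plus-penalty selection round (illustrative sketch).

    queues:    per-client virtual queues tracking the fairness deficit
               (how far each client lags its guaranteed share).
    est_rate:  estimated per-client utility, e.g. the inverse of a
               model-exchange time predicted by a C2MAB-style bandit.
    beta:      tradeoff between fairness (queues) and efficiency.
    k:         number of clients to select this round.
    min_share: each client's guaranteed long-term selection fraction.
    """
    score = queues + beta * est_rate      # drift-plus-penalty objective
    chosen = np.argsort(score)[-k:]       # pick the k highest scores
    served = np.zeros(len(queues))
    served[chosen] = 1.0
    # Queue update: grows by the guaranteed share, drains when served.
    queues = np.maximum(queues + min_share - served, 0.0)
    return chosen, queues

# Example: 50 clients, select 5 per round with a 10% fairness share.
rng = np.random.default_rng(2)
q = np.zeros(50)
chosen, q = lyapunov_round(q, rng.random(50), beta=1.0, k=5, min_share=0.1)
```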