Multi-Armed Bandit Based Client Scheduling for Federated Learning
- URL: http://arxiv.org/abs/2007.02315v1
- Date: Sun, 5 Jul 2020 12:32:32 GMT
- Title: Multi-Armed Bandit Based Client Scheduling for Federated Learning
- Authors: Wenchao Xia, Tony Q. S. Quek, Kun Guo, Wanli Wen, Howard H. Yang,
Hongbo Zhu
- Abstract summary: federated learning (FL) offers benefits such as reduced communication overhead and preserved data privacy.
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
This work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowing wireless channel state information and statistical characteristics of clients.
- Score: 91.91224642616882
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: By exploiting the computing power and local data of distributed clients,
federated learning (FL) offers benefits such as reduced communication
overhead and preserved data privacy. In each communication round
of FL, the clients update local models based on their own data and upload their
local updates via wireless channels. However, latency caused by hundreds to
thousands of communication rounds remains a bottleneck in FL. To minimize the
training latency, this work provides a multi-armed bandit-based framework for
online client scheduling (CS) in FL without knowing wireless channel state
information and statistical characteristics of clients. Firstly, we propose a
CS algorithm based on the upper confidence bound policy (CS-UCB) for ideal
scenarios where local datasets of clients are independent and identically
distributed (i.i.d.) and balanced. An upper bound of the expected performance
regret of the proposed CS-UCB algorithm is provided, which indicates that the
regret grows logarithmically over communication rounds. Then, to address
non-ideal scenarios with non-i.i.d. and unbalanced properties of local datasets
and varying availability of clients, we further propose a CS algorithm based on
the UCB policy and virtual queue technique (CS-UCB-Q). An upper bound is also
derived, which shows that the expected performance regret of the proposed
CS-UCB-Q algorithm can have a sub-linear growth over communication rounds under
certain conditions. In addition, the convergence performance of FL training is
analyzed. Finally, simulation results validate the efficiency of the proposed
algorithms.
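The abstract stops short of pseudocode, so the following is a minimal Python sketch of the general pattern it describes: each round, rank clients by a UCB index on their empirically observed transmission quality, optionally weighting that index by a fairness virtual queue in the spirit of CS-UCB-Q. The function name, the exploration constant, and the (1 + queue) weighting are illustrative assumptions, not the paper's exact formulation.

```python
import math
import random


def schedule_clients(num_clients, num_rounds, m, true_rates,
                     fair_share=0.0, seed=0):
    """UCB-style online client scheduling sketch.

    Each round, pick the m clients with the largest index
        index_i = mean_rate_i + sqrt(2 * ln(t) / n_i),
    where n_i counts how often client i has been scheduled (CS-UCB flavor).
    If fair_share > 0, the index is weighted by a per-client virtual queue
    so rarely scheduled clients catch up (CS-UCB-Q flavor; the paper's
    exact index and queue dynamics may differ).
    """
    rng = random.Random(seed)
    counts = [0] * num_clients     # times client i has been scheduled
    means = [0.0] * num_clients    # empirical mean observed rate of client i
    queues = [0.0] * num_clients   # fairness virtual queues
    schedule = []

    for t in range(1, num_rounds + 1):
        def index(i):
            if counts[i] == 0:               # force one observation each
                return float("inf")
            ucb = means[i] + math.sqrt(2.0 * math.log(t) / counts[i])
            return (1.0 + queues[i]) * ucb if fair_share > 0 else ucb

        chosen = set(sorted(range(num_clients), key=index, reverse=True)[:m])

        for i in range(num_clients):
            if i in chosen:
                # Noisy normalized-rate observation standing in for the
                # unknown wireless channel, clipped to [0, 1].
                r = min(1.0, max(0.0, rng.gauss(true_rates[i], 0.1)))
                counts[i] += 1
                means[i] += (r - means[i]) / counts[i]
            # Lyapunov-style queue: grows by the required scheduling share,
            # drains by one whenever the client is actually scheduled.
            queues[i] = max(queues[i] + fair_share - (i in chosen), 0.0)
        schedule.append(sorted(chosen))
    return schedule


if __name__ == "__main__":
    rates = [0.05 * k for k in range(2, 12)]   # 10 clients, mean rates 0.1-0.55
    plan = schedule_clients(num_clients=10, num_rounds=300, m=3,
                            true_rates=rates, fair_share=0.1)
    print(plan[-5:])   # later rounds should mostly pick high-rate clients
```

The sketch only mirrors the mechanics: under the paper's own index and conditions, CS-UCB attains logarithmic regret growth and the queue-augmented CS-UCB-Q variant attains sub-linear regret, guarantees this toy version does not inherit.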
Related papers
- Effectively Heterogeneous Federated Learning: A Pairing and Split
Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is derived by recasting the training-latency optimization as a graph edge selection problem.
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
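As a rough illustration of the greedy edge-selection pattern named in that summary (a toy sketch; the paper's actual graph construction and selection rule are not reproduced here), one can greedily accept the cheapest client-pair edges that still form a matching:

```python
def greedy_pairing(pair_latency):
    """Greedy matching sketch: pair_latency[(i, j)] is an estimated
    per-round latency if clients i and j are paired. Repeatedly accept
    the cheapest remaining edge whose endpoints are both still unpaired."""
    used, pairs = set(), []
    for (i, j), cost in sorted(pair_latency.items(), key=lambda kv: kv[1]):
        if i not in used and j not in used:
            pairs.append((i, j, cost))
            used.update((i, j))
    return pairs


# Four clients; the edge weights stand in for estimated pair latencies.
lat = {(0, 1): 0.8, (0, 2): 1.2, (0, 3): 0.9,
       (1, 2): 0.7, (1, 3): 1.5, (2, 3): 1.1}
print(greedy_pairing(lat))   # -> [(1, 2, 0.7), (0, 3, 0.9)]
```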
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement over the top-performing method with less than 15% of the communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Gradient Sparsification for Efficient Wireless Federated Learning with
Differential Privacy [25.763777765222358]
Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing raw data with each other.
As the model size grows, training latency increases due to limited transmission bandwidth, and model performance degrades under differential privacy (DP) protection.
We propose a sparsification-empowered FL framework over wireless channels to improve training efficiency without sacrificing convergence performance.
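For a concrete sense of the sparsification step (a generic top-k sketch under assumed NumPy conventions, not the cited paper's wireless- or DP-specific scheme):

```python
import numpy as np


def top_k_sparsify(grad: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of a gradient vector and zero
    the rest. Generic top-k sparsification, not the cited paper's scheme."""
    if k >= grad.size:
        return grad.copy()
    out = np.zeros_like(grad)
    idx = np.argpartition(np.abs(grad), -k)[-k:]   # indices of top-k entries
    out[idx] = grad[idx]
    return out


rng = np.random.default_rng(0)
g = rng.normal(size=1000)
sparse_g = top_k_sparsify(g, k=50)   # 95% of entries zeroed before upload
# A DP pipeline would additionally clip and noise the update, roughly:
noisy_g = sparse_g + rng.normal(scale=0.1, size=g.shape)
```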
arXiv Detail & Related papers (2023-04-09T05:21:15Z)
- Adaptive Control of Client Selection and Gradient Compression for
Efficient Federated Learning [28.185096784982544]
Federated learning (FL) allows multiple clients to cooperatively train models without disclosing their local data.
We propose a heterogeneous-aware FL framework, called FedCG, with adaptive client selection and gradient compression.
Experiments on both real-world prototypes and simulations show that FedCG can provide up to 5.3× speedup compared to other methods.
arXiv Detail & Related papers (2022-12-19T14:19:07Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
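FedReg's exact regularizer is not given in this summary; as a generic illustration of curbing forgetting during local training, the sketch below uses a FedProx-style proximal term that anchors local weights to the current global model (an assumed stand-in, not FedReg's actual method):

```python
import numpy as np


def local_update_with_anchor(w_global, grad_fn, steps=10, lr=0.1, mu=0.01):
    """Proximal-regularized local training sketch (FedProx-style anchor;
    FedReg's actual regularizer is not reproduced here). The anchor term
    pulls the local model back toward the global weights, limiting
    forgetting of previously aggregated knowledge."""
    w = w_global.copy()
    for _ in range(steps):
        g = grad_fn(w) + mu * (w - w_global)   # task gradient + anchor term
        w -= lr * g
    return w


# Toy quadratic objective: local optimum at w = 1, global model at w = 0.
w0 = np.zeros(3)
w_local = local_update_with_anchor(w0, grad_fn=lambda w: w - 1.0, mu=0.5)
print(w_local)   # sits between 0 and 1: local fit tempered by the anchor
```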
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distributions are non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- Low-Latency Federated Learning over Wireless Channels with Differential
Privacy [142.5983499872664]
In federated learning (FL), model training is distributed over clients and local models are aggregated by a central server.
In this paper, we aim to minimize FL training delay over wireless channels, constrained by overall training performance as well as each client's differential privacy (DP) requirement.
arXiv Detail & Related papers (2021-06-20T13:51:18Z)
- Joint Client Scheduling and Resource Allocation under Channel
Uncertainty in Federated Learning [47.97586668316476]
Federated learning (FL) over wireless networks depends on the reliability of the client-server connectivity and clients' local computation capabilities.
In this article, we investigate the problem of client scheduling and resource block (RB) allocation to enhance the performance of model training using FL.
The proposed method reduces the training accuracy loss gap by up to 40.7% compared to state-of-the-art client scheduling and RB allocation methods.
arXiv Detail & Related papers (2021-06-12T15:18:48Z)