Joint Client Scheduling and Resource Allocation under Channel
Uncertainty in Federated Learning
- URL: http://arxiv.org/abs/2106.06796v1
- Date: Sat, 12 Jun 2021 15:18:48 GMT
- Title: Joint Client Scheduling and Resource Allocation under Channel
Uncertainty in Federated Learning
- Authors: Madhusanka Manimel Wadu, Sumudu Samarakoon, Mehdi Bennis
- Abstract summary: Federated learning (FL) over wireless networks depends on the reliability of the client-server connectivity and clients' local computation capabilities.
In this article, we investigate the problem of client scheduling and resource block (RB) allocation to enhance the performance of model training using FL.
The proposed method reduces the gap of the training accuracy loss by up to 40.7% compared to state-of-the-art client scheduling and RB allocation methods.
- Score: 47.97586668316476
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of federated learning (FL) over wireless networks depends on
the reliability of the client-server connectivity and clients' local
computation capabilities. In this article, we investigate the problem of client
scheduling and resource block (RB) allocation to enhance the performance of
model training using FL, over a pre-defined training duration under imperfect
channel state information (CSI) and limited local computing resources. First,
we analytically derive the gap between the training losses of FL with client
scheduling and a centralized training method for a given training duration.
Then, we formulate the minimization of this training-loss gap over client
scheduling and RB allocation as a stochastic optimization problem and solve it
using Lyapunov optimization. A Gaussian process regression-based channel
prediction method is leveraged to learn and track the wireless channel, in
which the clients' CSI predictions and computing power are incorporated into
the scheduling decision. Using an extensive set of simulations, we validate the
robustness of the proposed method under both perfect and imperfect CSI over an
array of diverse data distributions. Results show that the proposed method
reduces the gap of the training accuracy loss by up to 40.7% compared to
state-of-the-art client scheduling and RB allocation methods.
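
As an illustration of the channel-prediction step, below is a minimal Gaussian process regression sketch that tracks a single client's channel gain over time and returns a prediction with uncertainty for the next scheduling slot. It assumes scikit-learn's GaussianProcessRegressor; the kernel choice, noise level, and synthetic data are illustrative assumptions, not the paper's exact design.

```python
# Minimal sketch: GPR-based tracking of one client's channel gain.
# Kernel and noise settings are illustrative, not the paper's design.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Past observations: (time slot, measured channel gain in dB).
t_obs = np.arange(20, dtype=float).reshape(-1, 1)
h_obs = -30.0 + 3.0 * np.sin(0.3 * t_obs.ravel()) + rng.normal(0.0, 0.5, 20)

# RBF kernel captures temporal correlation; WhiteKernel models CSI noise.
kernel = 1.0 * RBF(length_scale=5.0) + WhiteKernel(noise_level=0.25)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(t_obs, h_obs)

# Predictive mean and standard deviation for the next slot; both can feed
# the scheduling decision (e.g., penalizing highly uncertain channels).
h_mean, h_std = gpr.predict(np.array([[20.0]]), return_std=True)
print(f"predicted gain: {h_mean[0]:.2f} dB (+/- {h_std[0]:.2f})")
```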
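Similarly, the Lyapunov step can be pictured as a drift-plus-penalty rule that reduces the long-term problem to a per-slot greedy choice. The sketch below is a simplification under assumed names and dynamics: virtual queues stand in for fairness/backlog, the score combines queue length with a predicted rate (e.g., from the GPR step above), and the RB budget caps how many clients are scheduled per slot.

```python
# Sketch of drift-plus-penalty scheduling: each slot, pick the clients with
# the best queue-plus-predicted-rate score, subject to the RB budget.
# Names, score, and queue dynamics are simplifications for illustration.
import numpy as np

def schedule_slot(queues, predicted_rate, num_rbs, v_tradeoff=1.0):
    """Select up to num_rbs clients maximizing queue + V * predicted rate."""
    scores = queues + v_tradeoff * predicted_rate
    return np.argsort(scores)[-num_rbs:]

def update_queues(queues, chosen, arrival=1.0):
    """Virtual queues grow every slot and drain for scheduled clients."""
    queues = queues + arrival
    queues[chosen] = 0.0  # scheduled clients are fully served here
    return queues

rng = np.random.default_rng(1)
num_clients, num_rbs = 10, 3
queues = np.zeros(num_clients)
for slot in range(5):
    predicted_rate = rng.uniform(0.1, 1.0, num_clients)  # e.g. GPR output
    chosen = schedule_slot(queues, predicted_rate, num_rbs)
    queues = update_queues(queues, chosen)
    print(f"slot {slot}: scheduled {sorted(chosen.tolist())}")
```

Raising v_tradeoff favors instantaneous rate over fairness; the virtual queues ensure that persistently unscheduled clients are eventually selected.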
Related papers
- Online Client Scheduling and Resource Allocation for Efficient Federated Edge Learning [9.451084740123198]
Federated learning (FL) enables edge devices to collaboratively train a machine learning model without sharing their raw data.
However, deploying FL over mobile edge networks with constrained resources, such as power and bandwidth, suffers from high training latency and low model accuracy.
This paper investigates the optimal client scheduling and resource allocation for FL over mobile edge networks under resource constraints and uncertainty.
arXiv Detail & Related papers (2024-09-29T01:56:45Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific, auto-tuned learning rate scheduling converges and achieves linear speedup with respect to the number of clients.
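Purely as an illustration of client-specific adaptive step sizes, the sketch below shows a generic AMSGrad-style local update in which each client keeps its own moment estimates, so the effective learning rate differs per client. This is a textbook AMSGrad variant under assumed hyperparameters, not the actual FedLALR rule.

```python
# Generic AMSGrad-style local step with per-client state; the effective
# per-coordinate learning rate adapts to each client's own gradient history.
# Illustrative only; not the actual FedLALR update.
import numpy as np

class ClientAMSGrad:
    def __init__(self, dim, lr=0.01, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = np.zeros(dim)      # first-moment estimate
        self.v = np.zeros(dim)      # second-moment estimate
        self.v_hat = np.zeros(dim)  # running max of v (the AMSGrad fix)

    def step(self, w, grad):
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad**2
        self.v_hat = np.maximum(self.v_hat, self.v)
        return w - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)
```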
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Effectively Heterogeneous Federated Learning: A Pairing and Split
Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is derived by recasting the training-latency optimization as a graph edge selection problem.
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
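To make the edge-selection idea concrete, here is a hypothetical greedy pairing sketch: candidate client pairs are the edges, and the heuristic repeatedly matches the slowest unmatched client with the fastest one so that computation can be offloaded within each pair. The latency model and matching rule are stand-ins, not the paper's formulation.

```python
# Hypothetical greedy pairing over a complete client graph: repeatedly
# match the slowest unmatched client with the fastest, so strong clients
# can absorb work from weak ones and pair latencies even out.
def greedy_pairing(latency):
    """latency: dict mapping client id -> estimated per-round latency."""
    order = sorted(latency, key=latency.get)  # fastest ... slowest
    pairs = []
    while len(order) >= 2:
        slow, fast = order.pop(), order.pop(0)
        pairs.append((slow, fast))
    return pairs

# Example: slow c1 is paired with fast c2, then c3 with c4.
print(greedy_pairing({"c1": 4.0, "c2": 1.0, "c3": 3.0, "c4": 2.0}))
```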
arXiv Detail & Related papers (2023-08-26T11:10:54Z) - Acceleration of Federated Learning with Alleviated Forgetting in Local
Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z) - Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z) - On the Convergence Time of Federated Learning Over Wireless Networks
Under Imperfect CSI [28.782485580296374]
We propose a training process that takes channel statistics as a bias to minimize the convergence time under imperfect CSI.
We also examine the trade-off between number of clients involved in the training process and model accuracy as a function of different fading regimes.
arXiv Detail & Related papers (2021-04-01T08:30:45Z) - Blockchain Assisted Decentralized Federated Learning (BLADE-FL):
Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain-assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
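As a toy, single-process illustration of the round structure just described, the sketch below has every client broadcast its model, a randomly chosen winner generate the block, and all clients aggregate from that block; the lottery, networking, and consensus mechanics are placeholders, not BLADE-FL's actual design.

```python
# Toy illustration of one BLADE-FL-style round: broadcast -> one winner
# packs the received models into a block -> aggregate from the block.
# All mechanics here are placeholders for the real protocol.
import random

def blade_fl_round(models):
    """models: dict mapping client id -> trained model vector."""
    received = dict(models)                 # 1) broadcast to all peers
    winner = random.choice(list(received))  # 2) stand-in for block mining
    block = {"generator": winner, "models": received}
    n = len(block["models"])                # 3) aggregate from the block
    dim = len(next(iter(block["models"].values())))
    aggregate = [sum(m[i] for m in block["models"].values()) / n
                 for i in range(dim)]
    return block, aggregate

block, agg = blade_fl_round({"c1": [1.0, 2.0], "c2": [3.0, 4.0]})
print(block["generator"], agg)  # e.g. "c2" [2.0, 3.0]
```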
arXiv Detail & Related papers (2021-01-18T07:19:08Z) - Coded Computing for Federated Learning at the Edge [3.385874614913973]
Federated Learning (FL) enables training a global model from data generated locally at the client nodes, without moving client data to a centralized server.
Recent work proposes to mitigate stragglers and speed up training for linear regression tasks by assigning redundant computations at the MEC server.
We develop CodedFedL, which addresses the difficult task of extending coded federated learning (CFL) to distributed non-linear regression and classification problems with multi-output labels.
arXiv Detail & Related papers (2020-07-07T08:20:47Z) - Multi-Armed Bandit Based Client Scheduling for Federated Learning [91.91224642616882]
Federated learning (FL) offers attractive properties such as reduced communication overhead and preserved data privacy.
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
This work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowledge of wireless channel state information or the statistical characteristics of clients (a generic sketch follows this entry).
arXiv Detail & Related papers (2020-07-05T12:32:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this automatically generated content and is not responsible for any consequences arising from its use.