Joint Device Scheduling and Resource Allocation for Latency Constrained
Wireless Federated Learning
- URL: http://arxiv.org/abs/2007.07174v1
- Date: Tue, 14 Jul 2020 16:46:47 GMT
- Title: Joint Device Scheduling and Resource Allocation for Latency Constrained
Wireless Federated Learning
- Authors: Wenqi Shi, Sheng Zhou, Zhisheng Niu, Miao Jiang, Lu Geng
- Abstract summary: In federated learning (FL), devices upload their local model updates via wireless channels.
We propose a joint device scheduling and resource allocation policy to maximize the model accuracy.
Experiments show that the proposed policy outperforms state-of-the-art scheduling policies.
- Score: 26.813145949399427
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In federated learning (FL), devices contribute to the global training by
uploading their local model updates via wireless channels. Due to limited
computation and communication resources, device scheduling is crucial to the
convergence rate of FL. In this paper, we propose a joint device scheduling and
resource allocation policy to maximize the model accuracy within a given total
training time budget for latency constrained wireless FL. A lower bound on the
reciprocal of the training performance loss, in terms of the number of training
rounds and the number of scheduled devices per round, is derived. Based on the
bound, the accuracy maximization problem is solved by decoupling it into two
sub-problems. First, given the scheduled devices, the optimal bandwidth
allocation suggests allocating more bandwidth to the devices with worse channel
conditions or weaker computation capabilities. Then, a greedy device scheduling
algorithm is introduced, which in each step selects the device consuming the
least updating time obtained by the optimal bandwidth allocation, until the
lower bound begins to increase, meaning that scheduling more devices will
degrade the model accuracy. Experiments show that the proposed policy
outperforms state-of-the-art scheduling policies across a wide range of data
distributions and cell radii.
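To make the two-step structure concrete, below is a minimal sketch in Python of the greedy scheduling idea under simplified, assumed models: per-device latency is computation time plus upload time, bandwidth is split so that all scheduled uploads finish together (so devices with worse channels receive more bandwidth), and a hypothetical bound proxy stands in for the derived convergence bound. The device fields, the bandwidth rule, and the bound function are illustrative assumptions, not the paper's exact formulation.

```python
# Sketch of the two-step policy: (1) split bandwidth among scheduled devices,
# (2) greedily add devices in order of updating time until the bound proxy worsens.
# Latency model, bandwidth rule, and bound function are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Device:
    name: str
    comp_time: float    # local computation time per round (s), assumed known
    rate_per_hz: float  # achievable spectral efficiency (bit/s/Hz), assumed known
    update_bits: float  # size of the local model update (bits)

def split_bandwidth(devices, total_bw_hz):
    """Assumed rule: bandwidth proportional to update_bits / rate_per_hz, so all
    uploads finish at the same time; worse channels get more bandwidth."""
    weights = [d.update_bits / d.rate_per_hz for d in devices]
    total = sum(weights)
    return {d.name: total_bw_hz * w / total for d, w in zip(devices, weights)}

def round_latency(devices, total_bw_hz):
    """Per-round latency of a candidate schedule: slowest device's computation plus upload."""
    bw = split_bandwidth(devices, total_bw_hz)
    return max(d.comp_time + d.update_bits / (bw[d.name] * d.rate_per_hz) for d in devices)

def bound_proxy(num_rounds, num_scheduled, total_devices):
    """Hypothetical stand-in for the derived convergence bound: smaller is better,
    improving with more rounds and with more scheduled devices per round."""
    sampling_penalty = (total_devices - num_scheduled) / (total_devices * num_scheduled)
    return 1.0 / num_rounds + sampling_penalty

def greedy_schedule(devices, total_bw_hz, time_budget_s):
    """Add devices in order of standalone updating time; stop once adding another
    device makes the bound proxy increase (fewer affordable rounds outweigh the gain)."""
    order = sorted(devices, key=lambda d: round_latency([d], total_bw_hz))
    scheduled, best = [], float("inf")
    for d in order:
        candidate = scheduled + [d]
        latency = round_latency(candidate, total_bw_hz)
        if latency > time_budget_s:   # not even one round fits the budget
            break
        rounds = int(time_budget_s // latency)
        value = bound_proxy(rounds, len(candidate), len(devices))
        if value > best:              # proxy starts to increase: scheduling more would hurt
            break
        scheduled, best = candidate, value
    return scheduled

if __name__ == "__main__":
    # Hypothetical devices for illustration only.
    fleet = [Device("d1", 0.8, 2.0, 1e6),
             Device("d2", 1.5, 0.5, 1e6),
             Device("d3", 0.6, 3.0, 1e6)]
    chosen = greedy_schedule(fleet, total_bw_hz=1e6, time_budget_s=300.0)
    print("scheduled:", [d.name for d in chosen])
```

The stopping rule mirrors the abstract: devices are added in order of increasing updating time until the bound proxy begins to increase, at which point scheduling more devices would leave too few training rounds within the time budget.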
Related papers
- FLARE: A New Federated Learning Framework with Adjustable Learning Rates over Resource-Constrained Wireless Networks [20.048146776405005]
Wireless federated learning (WFL) suffers from heterogeneity prevailing in the data distributions, computing powers, and channel conditions.
This paper presents FLARE, a new federated learning framework with adjustable learning rates.
Experiments show that FLARE consistently outperforms the baselines.
arXiv Detail & Related papers (2024-04-23T07:48:17Z)
- Device Scheduling for Relay-assisted Over-the-Air Aggregation in Federated Learning [9.735236606901038]
Federated learning (FL) leverages data distributed at the edge of the network to enable intelligent applications.
In this paper, we propose a relay-assisted FL framework, and investigate the device scheduling problem in relay-assisted FL systems.
arXiv Detail & Related papers (2023-12-15T03:04:39Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider a FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Dynamic Scheduling for Federated Edge Learning with Streaming Data [56.91063444859008]
We consider a Federated Edge Learning (FEEL) system where training data are randomly generated over time at a set of distributed edge devices with long-term energy constraints.
Due to limited communication resources and latency requirements, only a subset of devices is scheduled for participating in the local training process in every iteration.
arXiv Detail & Related papers (2023-05-02T07:41:16Z)
- Federated Learning for Energy-limited Wireless Networks: A Partial Model Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks for federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA).
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets.
arXiv Detail & Related papers (2022-04-20T19:09:52Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Scheduling Policy and Power Allocation for Federated Learning in NOMA Based MEC [21.267954799102874]
Federated learning (FL) is a widely pursued machine learning technique that can train a model centrally while keeping the data distributed.
We propose a new scheduling policy and power allocation scheme using non-orthogonal multiple access (NOMA) settings to maximize the weighted sum data rate.
Simulation results show that the proposed scheduling and power allocation scheme can help achieve a higher FL testing accuracy in NOMA based wireless networks.
arXiv Detail & Related papers (2020-06-21T23:07:41Z)
- Convergence of Update Aware Device Scheduling for Federated Learning at the Wireless Edge [84.55126371346452]
We study federated learning at the wireless edge, where power-limited devices with local datasets collaboratively train a joint model with the help of a remote parameter server (PS).
We design novel scheduling and resource allocation policies that decide on the subset of the devices to transmit at each round.
The results of numerical experiments show that the proposed scheduling policy, based on both the channel conditions and the significance of the local model updates, provides a better long-term performance than scheduling policies based only on either of the two metrics individually.
arXiv Detail & Related papers (2020-01-28T15:15:22Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.