Dynamic Scheduling for Over-the-Air Federated Edge Learning with Energy Constraints
- URL: http://arxiv.org/abs/2106.00490v1
- Date: Mon, 31 May 2021 08:55:02 GMT
- Title: Dynamic Scheduling for Over-the-Air Federated Edge Learning with Energy Constraints
- Authors: Yuxuan Sun, Sheng Zhou, Zhisheng Niu, Deniz Gündüz
- Abstract summary: We consider an over-the-air FEEL system with analog gradient aggregation.
We propose an energy-aware dynamic device scheduling algorithm to optimize the training performance.
Under a highly unbalanced local data distribution, the proposed algorithm can increase the accuracy by 4.9%.
- Score: 44.311278843238675
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Machine learning and wireless communication technologies are jointly
facilitating an intelligent edge, where federated edge learning (FEEL) is a
promising training framework. As wireless devices involved in FEEL are resource-limited in terms of communication bandwidth, computing power, and battery
capacity, it is important to carefully schedule them to optimize the training
performance. In this work, we consider an over-the-air FEEL system with analog
gradient aggregation, and propose an energy-aware dynamic device scheduling
algorithm to optimize the training performance under energy constraints of
devices, where both communication energy for gradient aggregation and
computation energy for local training are included. The consideration of
computation energy makes dynamic scheduling challenging, as devices are
scheduled before local training, but the communication energy for over-the-air aggregation depends on the l2-norm of the local gradient, which is only known after local training. We thus incorporate estimation methods into scheduling to
predict the gradient norm. Taking the estimation error into account, we
characterize the performance gap between the proposed algorithm and its offline
counterpart. Experimental results show that, under a highly unbalanced local
data distribution, the proposed algorithm can increase the accuracy by 4.9% on
the CIFAR-10 dataset compared with the myopic benchmark, while satisfying the
energy constraints.
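As a rough illustration of how such a scheduling step could be implemented (a minimal sketch under stated assumptions, not the authors' exact algorithm), the Python snippet below uses a moving-average estimate of each device's gradient norm in place of the paper's estimation methods, converts it into a predicted over-the-air communication energy with a stylized analog-aggregation model, and schedules a device only if its remaining energy budget covers that prediction plus its computation energy. All function names, the energy model, and the numbers are illustrative assumptions.

```python
import numpy as np


def estimate_grad_norm(history, default=1.0):
    """Predict the next gradient l2-norm from past observed norms.

    A simple moving average stands in for the paper's estimation methods
    (illustrative assumption)."""
    return float(np.mean(history)) if history else default


def schedule_devices(remaining_energy, grad_norm_history, channel_gain,
                     comp_energy, target_amplitude=1.0):
    """Pick the devices to schedule in this round under per-device energy budgets.

    remaining_energy  : energy budget left at each device (illustrative units)
    grad_norm_history : list of past gradient norms per device
    channel_gain      : channel power gain of each device in this round
    comp_energy       : local-training (computation) energy of each device
    target_amplitude  : amplitude-alignment target for analog aggregation
                        (stylized assumption, not the paper's exact model)
    """
    scheduled = []
    for k in range(len(remaining_energy)):
        g_hat = estimate_grad_norm(grad_norm_history[k])
        # In analog over-the-air aggregation the transmit signal scales the local
        # gradient, so the communication energy grows with the squared gradient
        # norm and shrinks with a stronger channel (stylized model).
        comm_energy_est = (target_amplitude * g_hat) ** 2 / channel_gain[k]
        # Schedule the device only if its remaining budget covers the predicted
        # communication energy plus the computation energy of local training.
        if comp_energy[k] + comm_energy_est <= remaining_energy[k]:
            scheduled.append(k)
    return scheduled


# Toy usage with four devices.
rng = np.random.default_rng(0)
budgets = [5.0, 0.5, 3.0, 1.0]
histories = [[1.2, 1.0], [2.5, 2.8], [0.8], []]
gains = rng.uniform(0.2, 1.0, size=4)
comp = [0.3, 0.3, 0.3, 0.3]
print(schedule_devices(budgets, histories, gains, comp))
```

A full implementation would additionally account for the estimation error that the paper analyzes when bounding the gap to the offline counterpart.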
Related papers
- Energy-Efficient Federated Edge Learning with Streaming Data: A Lyapunov Optimization Approach [34.00679567444125]
We develop a dynamic scheduling and resource allocation algorithm to address the inherent randomness in data arrivals and resource availability under long-term energy constraints.
Our proposed algorithm makes adaptive decisions on device scheduling, computational capacity adjustment, and allocation of bandwidth and transmit power in every round.
The effectiveness of our scheme is verified through simulation results, demonstrating improved learning performance and energy efficiency as compared to baseline schemes.
arXiv Detail & Related papers (2024-05-20T14:13:22Z)
- Federated Learning With Energy Harvesting Devices: An MDP Framework [5.852486435612777]
Federated learning (FL) requires edge devices to perform local training and exchange information with a parameter server.
A critical challenge in practical FL systems is the rapid energy depletion of battery-limited edge devices.
We apply energy harvesting techniques in FL systems to extract ambient energy for continuously powering edge devices.
arXiv Detail & Related papers (2024-05-17T03:41:40Z)
- Rethinking Resource Management in Edge Learning: A Joint Pre-training and Fine-tuning Design Paradigm [87.47506806135746]
In some applications, edge learning is shifting its focus from conventional learning from scratch to a new two-stage learning paradigm.
This paper considers the problem of joint communication and computation resource management in a two-stage edge learning system.
It is shown that the proposed joint resource management over the pre-training and fine-tuning stages well balances the system performance trade-off.
arXiv Detail & Related papers (2024-04-01T00:21:11Z)
- Lyapunov-Driven Deep Reinforcement Learning for Edge Inference Empowered by Reconfigurable Intelligent Surfaces [30.1512069754603]
We propose a novel algorithm for energy-efficient, low-latency, accurate inference at the wireless edge.
We consider a scenario where new data are continuously generated/collected by a set of devices and are handled through a dynamic queueing system.
arXiv Detail & Related papers (2023-05-18T12:46:42Z)
- Dynamic Scheduling for Federated Edge Learning with Streaming Data [56.91063444859008]
We consider a Federated Edge Learning (FEEL) system where training data are randomly generated over time at a set of distributed edge devices with long-term energy constraints.
Due to limited communication resources and latency requirements, only a subset of devices is scheduled for participating in the local training process in every iteration.
arXiv Detail & Related papers (2023-05-02T07:41:16Z)
- Federated Learning for Energy-limited Wireless Networks: A Partial Model Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks for federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA).
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets.
arXiv Detail & Related papers (2022-04-20T19:09:52Z)
- Collaborative Learning over Wireless Networks: An Introductory Overview [84.09366153693361]
We will mainly focus on collaborative training across wireless devices.
Many distributed optimization algorithms have been developed over the last decades.
They provide data locality; that is, a joint model can be trained collaboratively while the data available at each participating device remains local.
arXiv Detail & Related papers (2021-12-07T20:15:39Z)
- Wirelessly Powered Federated Edge Learning: Optimal Tradeoffs Between Convergence and Power Transfer [42.30741737568212]
We propose powering devices using wireless power transfer (WPT).
This work aims to derive guidelines for deploying the resultant wirelessly powered FEEL (WP-FEEL) system.
The results provide useful guidelines on WPT provisioning to guarantee learning performance.
arXiv Detail & Related papers (2021-02-24T15:47:34Z)
- To Talk or to Work: Flexible Communication Compression for Energy Efficient Federated Learning over Heterogeneous Mobile Edge Devices [78.38046945665538]
Federated learning (FL) over massive mobile edge devices opens new horizons for numerous intelligent mobile applications.
FL imposes huge communication and computation burdens on participating devices due to periodic global synchronization and continuous local training.
We develop a convergence-guaranteed FL algorithm enabling flexible communication compression.
arXiv Detail & Related papers (2020-12-22T02:54:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.