Convergence of Update Aware Device Scheduling for Federated Learning at the Wireless Edge
- URL: http://arxiv.org/abs/2001.10402v2
- Date: Fri, 8 May 2020 11:18:57 GMT
- Title: Convergence of Update Aware Device Scheduling for Federated Learning at the Wireless Edge
- Authors: Mohammad Mohammadi Amiri, Deniz Gunduz, Sanjeev R. Kulkarni, H. Vincent Poor
- Abstract summary: We study federated learning at the wireless edge, where power-limited devices with local datasets collaboratively train a joint model with the help of a remote parameter server (PS).
We design novel scheduling and resource allocation policies that decide on the subset of the devices to transmit at each round.
The results of numerical experiments show that the proposed scheduling policy, based on both the channel conditions and the significance of the local model updates, provides a better long-term performance than scheduling policies based only on either of the two metrics individually.
- Score: 84.55126371346452
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study federated learning (FL) at the wireless edge, where power-limited
devices with local datasets collaboratively train a joint model with the help
of a remote parameter server (PS). We assume that the devices are connected to
the PS through a bandwidth-limited shared wireless channel. At each iteration
of FL, a subset of the devices are scheduled to transmit their local model
updates to the PS over orthogonal channel resources, while each participating
device must compress its model update to accommodate its link capacity. We
design novel scheduling and resource allocation policies that decide on the
subset of the devices to transmit at each round, and how the resources should
be allocated among the participating devices, not only based on their channel
conditions, but also on the significance of their local model updates. We then
establish convergence of a wireless FL algorithm with device scheduling, where
devices have limited capacity to convey their messages. The results of
numerical experiments show that the proposed scheduling policy, based on both
the channel conditions and the significance of the local model updates,
provides a better long-term performance than scheduling policies based only on
either of the two metrics individually. Furthermore, we observe that when the
data is independent and identically distributed (i.i.d.) across devices,
selecting a single device at each round provides the best performance, whereas
when the data distribution is non-i.i.d., scheduling multiple devices at each
round improves the performance. This observation is verified by the convergence
result, which shows that the number of scheduled devices should increase for a
less diverse and more biased data distribution.
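To make the scheduling idea concrete, the following is a minimal Python sketch of an update-aware scheduler that scores each device by combining its channel quality with the l2-norm of its local model update, the two metrics the paper combines. The product scoring rule, the normalization, and the function and variable names are illustrative assumptions, not the paper's exact policy.

```python
import numpy as np

def schedule_devices(channel_gains, local_updates, num_scheduled):
    """Select which devices transmit in the current round.

    Hypothetical rule: rank devices by the product of their normalized
    channel gain and the normalized l2-norm of their local model update,
    so devices with both a good link and a significant update are
    preferred. The paper uses the same two metrics; this particular
    functional form is an illustrative assumption.
    """
    gains = np.asarray(channel_gains, dtype=float)
    norms = np.array([np.linalg.norm(u) for u in local_updates])
    scores = (gains / gains.max()) * (norms / norms.max())
    # Return the indices of the num_scheduled highest-scoring devices.
    return np.argsort(scores)[-num_scheduled:]

# Toy usage: 10 devices with Rayleigh-faded channels and random updates.
rng = np.random.default_rng(0)
gains = rng.rayleigh(size=10)
updates = [rng.normal(size=5) for _ in range(10)]
print(schedule_devices(gains, updates, num_scheduled=3))
```

Consistent with the convergence result above, num_scheduled would be set to one under i.i.d. data and increased as the data distribution across devices becomes more biased.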
Related papers
- Channel and Gradient-Importance Aware Device Scheduling for Over-the-Air Federated Learning [31.966999085992505]
Federated learning (FL) is a privacy-preserving distributed training scheme.
We propose a device scheduling framework for over-the-air FL, named PO-FL, to mitigate the negative impact of channel noise distortion.
arXiv Detail & Related papers (2023-05-26T12:04:59Z) - Scheduling and Aggregation Design for Asynchronous Federated Learning
over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting (a toy sketch of this idea follows below).
arXiv Detail & Related papers (2022-12-14T17:33:01Z) - Federated Learning for Energy-limited Wireless Networks: A Partial Model
- Federated Learning for Energy-limited Wireless Networks: A Partial Model Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks for federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA).
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets. A toy sketch of the partial-aggregation idea follows below.
arXiv Detail & Related papers (2022-04-20T19:09:52Z) - Parallel Successive Learning for Dynamic Distributed Model Training over
- Parallel Successive Learning for Dynamic Distributed Model Training over Heterogeneous Wireless Networks [50.68446003616802]
Federated learning (FedL) has emerged as a popular technique for distributing model training over a set of wireless devices.
We develop parallel successive learning (PSL), which expands the FedL architecture along three dimensions.
Our analysis sheds light on the notion of cold vs. warmed-up models and model inertia in distributed machine learning.
arXiv Detail & Related papers (2022-02-07T05:11:01Z) - Federated Learning with Downlink Device Selection [92.14944020945846]
We study federated edge learning, where a global model is trained collaboratively using privacy-sensitive data at the edge of a wireless network.
A parameter server (PS) keeps track of the global model and shares it with the wireless edge devices for training using their private local data.
We consider device selection based on downlink channels over which the PS shares the global model with the devices.
arXiv Detail & Related papers (2021-07-07T22:42:39Z) - Data-Aware Device Scheduling for Federated Edge Learning [5.521735057483887]
Federated Edge Learning (FEEL) involves the collaborative training of machine learning models among edge devices.
We propose a new scheduling scheme for non-independent and identically distributed (non-IID) and unbalanced datasets in FEEL.
We show that our proposed FEEL scheduling algorithm can help achieve high accuracy in a few rounds at a reduced cost.
arXiv Detail & Related papers (2021-02-18T17:17:56Z) - Convergence of Federated Learning over a Noisy Downlink [84.55126371346452]
We study federated learning, where power-limited wireless devices utilize their local datasets to collaboratively train a global model with the help of a remote parameter server (PS).
This framework requires downlink transmission from the PS to the devices and uplink transmission from the devices to the PS.
The goal of this study is to investigate the impact of the bandwidth-limited shared wireless medium in both the downlink and uplink on the performance of FL.
arXiv Detail & Related papers (2020-08-25T16:15:05Z) - Joint Device Scheduling and Resource Allocation for Latency Constrained
Wireless Federated Learning [26.813145949399427]
In federated learning (FL), devices upload their local model updates via wireless channels.
We propose a joint device scheduling and resource allocation policy to maximize the model accuracy.
Experiments show that the proposed policy outperforms state-of-the-art scheduling policies.
arXiv Detail & Related papers (2020-07-14T16:46:47Z) - Scheduling Policy and Power Allocation for Federated Learning in NOMA
Based MEC [21.267954799102874]
Federated learning (FL) is a widely pursued machine learning technique that can train a model centrally while keeping data distributed.
We propose a new scheduling policy and power allocation scheme using non-orthogonal multiple access (NOMA) settings to maximize the weighted sum data rate.
Simulation results show that the proposed scheduling and power allocation scheme can help achieve a higher FL testing accuracy in NOMA based wireless networks.
arXiv Detail & Related papers (2020-06-21T23:07:41Z)