Service Delay Minimization for Federated Learning over Mobile Devices
- URL: http://arxiv.org/abs/2205.09868v1
- Date: Thu, 19 May 2022 21:25:02 GMT
- Title: Service Delay Minimization for Federated Learning over Mobile Devices
- Authors: Rui Chen, Dian Shi, Xiaoqi Qin, Dongjie Liu, Miao Pan, and Shuguang
Cui
- Abstract summary: Federated learning over mobile devices has fostered numerous intriguing applications/services.
We propose a service delay efficient FL (SDEFL) scheme over mobile devices.
- Score: 36.027677482303076
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) over mobile devices has fostered numerous intriguing
applications/services, many of which are delay-sensitive. In this paper, we
propose a service delay efficient FL (SDEFL) scheme over mobile devices. Unlike
traditional communication efficient FL, which regards wireless communications
as the bottleneck, we find that under many situations, the local computing
delay is comparable to the communication delay during the FL training process,
given the development of high-speed wireless transmission techniques. Thus, the
service delay in FL is the sum of the computing and communication delays
accumulated over the training rounds. To minimize the service delay of FL,
simply reducing the local
computing/communication delay independently is not enough. The delay trade-off
between local computing and wireless communications must be considered.
Besides, we empirically study the impacts of local computing control and
compression strategies (i.e., the number of local updates, weight quantization,
and gradient quantization) on computing, communication and service delays.
Based on these trade-off observations and empirical studies, we develop an
optimization scheme to minimize the service delay of FL over heterogeneous
devices. We establish testbeds and conduct extensive emulations/experiments to
verify our theoretical analysis. The results show that SDEFL achieves a notable
service delay reduction with only a small accuracy drop compared to peer designs.
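As an illustration of the delay model described in the abstract (service delay as the sum of per-round computing and communication delays over training rounds), here is a minimal sketch. It is not the paper's actual SDEFL formulation: all constants, the quantization effect on payload size, and the round-count convergence model are hypothetical placeholders.

```python
# Hypothetical sketch of the service-delay trade-off: more local updates raise
# per-round computing delay but can cut the round count; coarser quantization
# cuts communication delay but (here, via a placeholder model) raises rounds.

def service_delay(num_rounds: int,
                  local_updates: int,
                  flops_per_update: float,
                  device_flops: float,
                  model_bits: float,
                  quant_bits: int,
                  link_rate_bps: float) -> float:
    """Total service delay = rounds * (computing delay + communication delay)."""
    compute_delay = local_updates * flops_per_update / device_flops
    # Quantization shrinks the uplink payload from 32-bit floats to quant_bits.
    comm_delay = model_bits * (quant_bits / 32.0) / link_rate_bps
    return num_rounds * (compute_delay + comm_delay)

def best_config():
    """Toy grid search over local-update count and quantization bitwidth."""
    best = None
    for local_updates in (1, 2, 4, 8):
        for quant_bits in (4, 8, 16, 32):
            # Placeholder convergence model: rounds shrink with local work,
            # grow mildly as quantization gets coarser. Not from the paper.
            rounds = int(200 / local_updates * (32 / quant_bits) ** 0.25)
            d = service_delay(rounds, local_updates,
                              flops_per_update=1e9, device_flops=5e9,
                              model_bits=32e6, quant_bits=quant_bits,
                              link_rate_bps=20e6)
            if best is None or d < best[0]:
                best = (d, local_updates, quant_bits)
    return best
```

Under these toy numbers, neither the most aggressive quantization nor the largest local-update count alone is optimal; the minimum falls at an interior configuration, which is the trade-off the abstract argues must be optimized jointly.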
Related papers
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for HFL in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves similar learning accuracy compared with the HFL without model pruning and reduces about 50 percent communication cost.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- FLSTRA: Federated Learning in Stratosphere [22.313423693397556]
A high altitude platform station enables a number of terrestrial clients to collaboratively learn a global model without sharing their training data.
We develop a joint client selection and resource allocation algorithm for uplink and downlink to minimize the FL delay.
Second, we propose a communication and resource-aware algorithm to achieve the target FL accuracy while deriving an upper bound for its convergence.
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
arXiv Detail & Related papers (2022-06-02T05:00:37Z)
- Resource Allocation for Compression-aided Federated Learning with High Distortion Rate [3.7530276852356645]
We formulate an optimization-aided FL problem between the distortion rate, number of participating IoT devices, and convergence rate.
By actively controlling participating IoT devices, we can avoid the training divergence of compression-aided FL while maintaining the communication efficiency.
arXiv Detail & Related papers (2022-05-27T23:11:52Z)
- Towards Communication-Learning Trade-off for Federated Learning at the Network Edge [5.267288702335319]
We propose a wireless federated learning (FL) system where network pruning is applied to local users with limited resources.
Although pruning reduces FL latency, it also introduces information loss.
arXiv Detail & Related papers (2022-05-27T23:11:52Z)
- Over-the-Air Federated Learning with Retransmissions (Extended Version) [21.37147806100865]
We study the impact of estimation errors on the convergence of Federated Learning (FL) over resource-constrained wireless networks.
We propose retransmissions as a method to improve FL convergence over resource-constrained wireless networks.
arXiv Detail & Related papers (2021-11-19T15:17:15Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL) as a paradigm of collaborative learning techniques has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Unit-Modulus Wireless Federated Learning Via Penalty Alternating Minimization [64.76619508293966]
Wireless federated learning (FL) is an emerging machine learning paradigm that trains a global parametric model from distributed datasets via wireless communications.
This paper proposes a wireless FL framework, which uploads local model parameters and computes global model parameters via wireless communications.
arXiv Detail & Related papers (2021-08-31T08:19:54Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay computation for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
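The last entry above reports a bisection search for its delay-minimization problem. As a generic, hedged illustration (not that paper's actual formulation), bisection finds the smallest feasible value of a scalar when feasibility is monotone in it; the feasibility predicate and the toy numbers below are purely hypothetical.

```python
def bisect_min(feasible, lo: float, hi: float, tol: float = 1e-6) -> float:
    """Return (approximately) the smallest t in [lo, hi] with feasible(t) True,
    assuming feasibility is monotone: False below some threshold, True above it.
    Requires feasible(hi) to hold at the start."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid   # mid works: the threshold is at or below mid
        else:
            lo = mid   # mid fails: the threshold is above mid
    return hi

# Toy example: the minimum delay budget (seconds) that admits transmitting
# 1 Mbit at 2 Mbit/s is 0.5 s.
t = bisect_min(lambda t: 2e6 * t >= 1e6, 0.0, 10.0)
```

Each iteration halves the search interval, so the loop runs in O(log((hi - lo) / tol)) steps regardless of how expensive the feasibility check is.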
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.