Wireless Quantized Federated Learning: A Joint Computation and
Communication Design
- URL: http://arxiv.org/abs/2203.05878v1
- Date: Fri, 11 Mar 2022 12:30:08 GMT
- Title: Wireless Quantized Federated Learning: A Joint Computation and
Communication Design
- Authors: Pavlos S. Bouzinis, Panagiotis D. Diamantoulakis, and George K.
Karagiannidis
- Abstract summary: In this paper, we aim to minimize the total convergence time of FL by quantizing the local model parameters prior to uplink transmission.
We jointly optimize the computing and communication resources and the number of quantization bits, in order to minimize the convergence time across all global rounds.
- Score: 36.35684767732552
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, federated learning (FL) has sparked widespread attention as a
promising decentralized machine learning approach which provides privacy and
low delay. However, the communication bottleneck remains an issue that must be
resolved for efficient deployment of FL over wireless networks.
In this paper, we aim to minimize the total convergence time of FL by
quantizing the local model parameters prior to uplink transmission. More
specifically, the convergence analysis of the FL algorithm with stochastic
quantization is first presented, revealing the impact of the quantization
error on the convergence rate. Following that, we jointly optimize the
computing and communication resources and the number of quantization bits, in
order to minimize the convergence time across all global rounds, subject to
energy and quantization error requirements that stem from the convergence
analysis. The impact of the quantization error on the convergence time is
evaluated, and the trade-off between model accuracy and timely execution is
revealed. Moreover, the proposed method is shown to result in faster
convergence in comparison with baseline schemes. Finally, useful insights for
the selection of the quantization error tolerance are provided.
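The quantizer below is a minimal sketch of the kind of unbiased stochastic quantization the abstract refers to, in the style of QSGD-like schemes: each coordinate is randomly rounded between adjacent levels so that the quantized vector equals the original in expectation. The function name, the max-norm normalization, and the use of NumPy are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def stochastic_quantize(v: np.ndarray, bits: int) -> np.ndarray:
    """Unbiased stochastic quantization of v onto 2**bits - 1 uniform levels."""
    levels = 2**bits - 1
    scale = np.max(np.abs(v))
    if scale == 0.0:
        return np.zeros_like(v)
    # Map magnitudes into [0, levels] and randomly round up or down, with
    # P(round up) equal to the fractional part, so that E[Q(v)] = v.
    normalized = np.abs(v) / scale * levels
    lower = np.floor(normalized)
    rounded = lower + (np.random.rand(*v.shape) < normalized - lower)
    return np.sign(v) * scale * rounded / levels

# Example: fewer bits give a coarser, higher-variance, but still unbiased output.
w = np.random.randn(5)
print(w)
print(stochastic_quantize(w, bits=4))
```

Increasing the number of bits shrinks the variance of the quantization error but lengthens the uplink transmission, which is exactly the trade-off the joint optimization balances.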
Related papers
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that their transmission in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Asynchronous Federated Learning with Bidirectional Quantized Communications and Buffered Aggregation [39.057968279167966]
Asynchronous Federated Learning with Buffered Aggregation (FedBuff) is a state-of-the-art algorithm known for its efficiency and high scalability.
We present a new algorithm (QAFeL) with a quantization scheme that establishes a shared "hidden" state between the server and clients to avoid the error propagation caused by direct quantization (a minimal sketch of this idea is given after this list).
arXiv Detail & Related papers (2023-08-01T03:50:58Z)
- Optimal Privacy Preserving for Federated Learning in Mobile Edge Computing [35.57643489979182]
Federated Learning (FL) with quantization and deliberately added noise over wireless networks is a promising approach to preserve user differential privacy (DP).
This article aims to jointly optimize the quantization and Binomial mechanism parameters and communication resources to maximize the convergence rate under the constraints of the wireless network and DP requirement.
arXiv Detail & Related papers (2022-11-14T07:54:14Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Green, Quantized Federated Learning over Wireless Networks: An Energy-Efficient Design [68.86220939532373]
The finite precision level is captured through the use of quantized neural networks (QNNs) that quantize weights and activations in fixed-precision format.
The proposed FL framework can reduce energy consumption until convergence by up to 70% compared to a baseline FL algorithm.
arXiv Detail & Related papers (2022-07-19T16:37:24Z)
- Time-triggered Federated Learning over Wireless Networks [48.389824560183776]
We present a time-triggered FL algorithm (TT-Fed) over wireless networks.
Compared to its baseline schemes, our proposed TT-Fed algorithm improves the converged test accuracy by up to 12.5% and 5%, respectively.
arXiv Detail & Related papers (2022-04-26T16:37:29Z)
- Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning [75.45968495410047]
Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning.
Gradient quantization is an effective way of reducing the number of bits required to communicate each model update.
We propose an adaptive quantization strategy called AdaFL that aims to achieve communication efficiency as well as a low error floor.
arXiv Detail & Related papers (2021-02-08T19:14:21Z)
- Design and Analysis of Uplink and Downlink Communications for Federated Learning [18.634770589573733]
Communication has been known to be one of the primary bottlenecks of federated learning (FL).
We focus on the design and analysis of physical layer quantization and transmission methods for wireless FL.
arXiv Detail & Related papers (2020-12-07T21:01:11Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
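As a companion to the last entry above, the following is a minimal sketch of how such a bisection search over the achievable delay could look. The `is_feasible` predicate is a hypothetical placeholder for that paper's feasibility subproblem over computation and transmission resources, not its actual formulation.

```python
def bisect_min_delay(is_feasible, t_low: float, t_high: float,
                     tol: float = 1e-4) -> float:
    """Smallest delay T in [t_low, t_high] with is_feasible(T), up to tol."""
    while t_high - t_low > tol:
        t_mid = 0.5 * (t_low + t_high)
        if is_feasible(t_mid):
            t_high = t_mid   # T is achievable; try a tighter deadline
        else:
            t_low = t_mid    # T is too aggressive; relax the deadline
    return t_high

# Toy usage: pretend any per-round delay of at least 0.37 s is achievable.
print(bisect_min_delay(lambda t: t >= 0.37, t_low=0.0, t_high=10.0))
```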
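For the QAFeL entry in the list above, here is a minimal sketch of the shared-"hidden"-state idea: both sides quantize only the difference from a state they update identically, so the quantization error stays bounded instead of propagating across rounds. The uniform quantizer and its step size are illustrative assumptions, not the paper's exact scheme.

```python
import numpy as np

def quantize(v: np.ndarray, step: float = 0.05) -> np.ndarray:
    """Coarse deterministic uniform quantizer (illustration only)."""
    return step * np.round(v / step)

def send_with_hidden_state(x: np.ndarray, hidden: np.ndarray):
    """Quantize the difference from the shared hidden state and update it.

    The receiver applies the same hidden-state update with the received
    message, so both copies stay identical without extra communication.
    """
    msg = quantize(x - hidden)
    return msg, hidden + msg

# Toy round-trip: the server's reconstruction tracks the client's model to
# within half a quantization step, no matter how many rounds have passed.
hidden_client = np.zeros(3)
hidden_server = np.zeros(3)
for _ in range(100):
    x = np.random.randn(3)                # current local model
    msg, hidden_client = send_with_hidden_state(x, hidden_client)
    hidden_server = hidden_server + msg   # server mirrors the update
print(np.max(np.abs(hidden_server - x)))  # bounded by ~step/2 = 0.025
```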