Design and Analysis of Uplink and Downlink Communications for Federated
Learning
- URL: http://arxiv.org/abs/2012.04057v1
- Date: Mon, 7 Dec 2020 21:01:11 GMT
- Title: Design and Analysis of Uplink and Downlink Communications for Federated
Learning
- Authors: Sihui Zheng, Cong Shen, Xiang Chen
- Abstract summary: Communication has been known to be one of the primary bottlenecks of federated learning (FL).
We focus on the design and analysis of physical layer quantization and transmission methods for wireless FL.
- Score: 18.634770589573733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication has been known to be one of the primary bottlenecks of
federated learning (FL), and yet existing studies have not addressed efficient
communication design, particularly in wireless FL, where both uplink
and downlink communications have to be considered. In this paper, we focus on
the design and analysis of physical layer quantization and transmission methods
for wireless FL. We answer the question of what and how to communicate between
clients and the parameter server and evaluate the impact of the various
quantization and transmission options of the updated model on the learning
performance. We provide new convergence analysis of the well-known FedAvg under
non-i.i.d. dataset distributions, partial client participation, and
finite-precision quantization in uplink and downlink communications. These
analyses reveal that, in order to achieve an O(1/T) convergence rate with
quantization, transmitting the weight requires increasing the quantization
level at a logarithmic rate, while transmitting the weight differential can
keep a constant quantization level. Comprehensive numerical evaluation on
various real-world datasets reveals that the benefit of an FL-tailored uplink
and downlink communication design is enormous: a carefully designed
quantization and transmission scheme achieves more than 98% of the floating-point
baseline accuracy with less than 10% of the baseline bandwidth for the majority
of the experiments on both i.i.d. and non-i.i.d. datasets. In particular, 1-bit
quantization (3.1% of the floating-point baseline bandwidth) achieves 99.8% of
the floating-point baseline accuracy at almost the same convergence rate on
MNIST, representing, to the best of the authors' knowledge, the best reported
bandwidth-accuracy tradeoff.
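
The abstract's key distinction is between quantizing and transmitting the full weight vector (which, per the analysis, requires a quantization level that grows logarithmically to keep the O(1/T) rate) and quantizing the weight differential (which tolerates a constant level). The following is a minimal, hypothetical sketch of the kind of unbiased stochastic uniform quantizer commonly used in such schemes, applied to the uplink weight differential; the function name, toy dimensions, and fixed 1-bit level are illustrative assumptions, not the paper's exact algorithm.

```python
import numpy as np

np.random.seed(0)

def stochastic_uniform_quantize(x, num_levels):
    """Stochastically quantize x onto num_levels uniform levels spanning
    [x.min(), x.max()]; unbiased, i.e. E[Q(x)] = x."""
    lo, hi = x.min(), x.max()
    if hi == lo:                          # constant vector: nothing to quantize
        return x.copy()
    step = (hi - lo) / (num_levels - 1)
    pos = (x - lo) / step                 # position measured in quantization steps
    low = np.floor(pos)
    prob_up = pos - low                   # rounding-up probability keeps the quantizer unbiased
    q = low + (np.random.rand(*x.shape) < prob_up)
    return lo + q * step

# One illustrative round: the client quantizes its weight *differential*
# (locally updated weights minus the global weights it received) rather than
# the full weight vector; per the analysis above, the differential tolerates
# a constant quantization level.
w_global = np.random.randn(1000)                   # broadcast global model
w_local = w_global + 0.01 * np.random.randn(1000)  # after local SGD steps

diff = w_local - w_global
diff_q = stochastic_uniform_quantize(diff, num_levels=2)  # 1-bit payload per entry
w_recovered = w_global + diff_q                    # what the server reconstructs

# 1 bit per entry vs. 32-bit floats is 1/32 = 3.125% of the baseline bandwidth,
# consistent with the ~3.1% figure quoted in the abstract (ignoring the small
# per-vector overhead of sending the min/max of the differential).
print("mean reconstruction error:", np.abs(w_recovered - w_local).mean())
```

On the downlink, the same quantizer could in principle be applied to the aggregated differential before broadcast; the specific uplink/downlink pairings evaluated by the paper are detailed in the full text.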
Related papers
- Rate-Constrained Quantization for Communication-Efficient Federated Learning [5.632231145349047]
We develop a novel quantized FL framework, called rate-constrained federated learning (RC-FED).
We formulate this scheme as a joint optimization in which the quantization distortion is minimized while the rate of the encoded gradients is kept below a target threshold.
We analyze the convergence behavior of RC-FED and show its superior performance against baseline quantized FL schemes on several datasets (see the sketch after this list).
arXiv Detail & Related papers (2024-09-10T08:22:01Z)
- FedAQ: Communication-Efficient Federated Edge Learning via Joint Uplink and Downlink Adaptive Quantization [11.673528138087244]
Federated learning (FL) is a powerful machine learning paradigm which leverages the data as well as the computational resources of clients, while protecting clients' data privacy.
Previous research has primarily focused on the uplink communication, employing either fixed-bit quantization or adaptive quantization methods.
In this work, we introduce a holistic approach by joint uplink and downlink adaptive quantization to reduce the communication overhead.
arXiv Detail & Related papers (2024-06-26T08:14:23Z)
- Federated Quantum Long Short-term Memory (FedQLSTM) [58.50321380769256]
Quantum federated learning (QFL) can facilitate collaborative learning across multiple clients using quantum machine learning (QML) models.
No prior work has focused on developing a QFL framework that utilizes temporal data to approximate functions.
A novel QFL framework that is the first to integrate quantum long short-term memory (QLSTM) models with temporal data is proposed.
arXiv Detail & Related papers (2023-12-21T21:40:47Z)
- Scaling Limits of Quantum Repeater Networks [62.75241407271626]
Quantum networks (QNs) are a promising platform for secure communications, enhanced sensing, and efficient distributed quantum computing.
Due to the fragile nature of quantum states, these networks face significant challenges in terms of scalability.
In this paper, the scaling limits of quantum repeater networks (QRNs) are analyzed.
arXiv Detail & Related papers (2023-05-15T14:57:01Z)
- Green, Quantized Federated Learning over Wireless Networks: An Energy-Efficient Design [68.86220939532373]
The finite precision level is captured through the use of quantized neural networks (QNNs) that quantize weights and activations in fixed-precision format.
The proposed FL framework can reduce energy consumption until convergence by up to 70% compared to a baseline FL algorithm.
arXiv Detail & Related papers (2022-07-19T16:37:24Z)
- Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach [54.311495894129585]
We study the limit of communication cost of model aggregation in distributed learning from a rate-distortion perspective.
It is found that the communication gain by exploiting the correlation between worker nodes is significant for SignSGD.
arXiv Detail & Related papers (2022-06-28T13:10:40Z)
- Wireless Quantized Federated Learning: A Joint Computation and Communication Design [36.35684767732552]
In this paper, we aim to minimize the total convergence time of FL, by quantizing the local model parameters prior to uplink transmission.
We jointly optimize the computing, communication resources and number of quantization bits, in order to guarantee minimized convergence time across all global rounds.
arXiv Detail & Related papers (2022-03-11T12:30:08Z)
- Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning [75.45968495410047]
Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning.
Gradient quantization is an effective way of reducing the number of bits required to communicate each model update.
We propose an adaptive quantization strategy called AdaFL that aims to achieve communication efficiency as well as a low error floor.
arXiv Detail & Related papers (2021-02-08T19:14:21Z)
- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning.
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
arXiv Detail & Related papers (2020-12-15T12:20:28Z)
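
Several of the entries above (RC-FED, AdaFL, CosSGD) revolve around how many bits to spend on each quantized update under a communication budget. The snippet below is a generic, hypothetical illustration of that rate-versus-distortion trade-off, not the algorithm of any listed paper: it assumes unbiased stochastic uniform quantization, uses the per-entry step-size variance bound as a distortion proxy, and picks the lowest-distortion bit-width whose total payload fits the budget.

```python
import numpy as np

def quantization_distortion(x, bits):
    """Worst-case per-entry variance of unbiased stochastic uniform
    quantization of x with 2**bits levels: (step / 2)**2."""
    step = (x.max() - x.min()) / (2 ** bits - 1)
    return (step / 2.0) ** 2

def pick_bits(x, rate_budget_bits, candidates=(1, 2, 4, 8)):
    """Toy rate-constrained rule: among candidate bit-widths whose total
    payload (bits * x.size) fits the budget, return the one with the lowest
    distortion (here, always the largest feasible width)."""
    feasible = [b for b in candidates if b * x.size <= rate_budget_bits]
    if not feasible:
        raise ValueError("rate budget too small for any candidate bit-width")
    return min(feasible, key=lambda b: quantization_distortion(x, b))

update = np.random.randn(10_000)   # a client's model update (toy data)
budget = 4 * update.size           # allow at most 4 bits per entry on average
print("chosen bit-width:", pick_bits(update, budget))   # -> 4 with these candidates
```

In practice, schemes such as RC-FED pose this as a joint optimization over the quantizer itself rather than a discrete pick, and adaptive methods such as AdaFL vary the bit-width across training rounds; the sketch only conveys the shape of the constraint.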
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.