FedAQ: Communication-Efficient Federated Edge Learning via Joint Uplink and Downlink Adaptive Quantization
- URL: http://arxiv.org/abs/2406.18156v1
- Date: Wed, 26 Jun 2024 08:14:23 GMT
- Title: FedAQ: Communication-Efficient Federated Edge Learning via Joint Uplink and Downlink Adaptive Quantization
- Authors: Linping Qu, Shenghui Song, Chi-Ying Tsui
- Abstract summary: Federated learning (FL) is a powerful machine learning paradigm that leverages the data and computational resources of clients while protecting their data privacy.
Previous research has primarily focused on uplink communication, employing either fixed-bit or adaptive quantization methods.
In this work, we introduce a holistic approach that applies joint uplink and downlink adaptive quantization to reduce the communication overhead.
- Score: 11.673528138087244
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a powerful machine learning paradigm that leverages the data and computational resources of clients while protecting their data privacy. However, the substantial model size and frequent aggregation between the server and clients result in significant communication overhead, making it challenging to deploy FL in resource-limited wireless networks. In this work, we aim to mitigate the communication overhead through quantization. Previous research on quantization has primarily focused on uplink communication, employing either fixed-bit or adaptive quantization methods. In this work, we introduce a holistic approach that applies joint uplink and downlink adaptive quantization to reduce the communication overhead. In particular, we optimize the learning convergence by determining the optimal uplink and downlink quantization bit-lengths under a communication energy constraint. Theoretical analysis shows that the optimal quantization levels depend on the range of the model gradients or weights. Based on this insight, we propose a decreasing-trend quantization for the uplink and an increasing-trend quantization for the downlink, which aligns with how the model parameters change during training. Experimental results show that the proposed joint uplink and downlink adaptive quantization strategy can save up to 66.7% energy compared with existing schemes.
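To make the adaptive-quantization idea concrete, the following is a minimal sketch, assuming only numpy, of stochastic uniform quantization with round-dependent bit-widths. The function names and the linear decreasing/increasing bit schedules are illustrative assumptions; the paper's actual bit allocation is derived from its convergence analysis under an energy constraint and is not reproduced here.

```python
# Illustrative sketch (not the authors' implementation) of range-based
# stochastic uniform quantization with a decreasing-trend uplink bit schedule
# and an increasing-trend downlink bit schedule, as described in the abstract.
import numpy as np

def stochastic_uniform_quantize(x: np.ndarray, num_bits: int):
    """Quantize x to 2**num_bits uniform levels over [x.min(), x.max()]."""
    lo, hi = float(x.min()), float(x.max())
    levels = 2 ** num_bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    normalized = (x - lo) / scale                      # values in [0, levels]
    floor = np.floor(normalized)
    # Stochastic rounding keeps the quantizer unbiased in expectation.
    q = floor + (np.random.rand(*x.shape) < (normalized - floor))
    return lo, scale, q.astype(np.uint32)              # payload actually sent

def dequantize(lo: float, scale: float, q: np.ndarray) -> np.ndarray:
    return lo + scale * q.astype(np.float64)

def uplink_bits(round_idx: int, total_rounds: int, b_max: int = 8, b_min: int = 2) -> int:
    """Decreasing-trend uplink schedule (illustrative linear decay)."""
    frac = round_idx / max(total_rounds - 1, 1)
    return int(round(b_max - frac * (b_max - b_min)))

def downlink_bits(round_idx: int, total_rounds: int, b_min: int = 4, b_max: int = 10) -> int:
    """Increasing-trend downlink schedule (illustrative linear ramp)."""
    frac = round_idx / max(total_rounds - 1, 1)
    return int(round(b_min + frac * (b_max - b_min)))

# Example: bit-widths early and late in a 100-round training run.
for r in (0, 90):
    print(f"round {r}: uplink bits={uplink_bits(r, 100)}, downlink bits={downlink_bits(r, 100)}")

grad = np.random.randn(1000) * 0.1                     # simulated client gradient
lo, scale, q = stochastic_uniform_quantize(grad, uplink_bits(0, 100))
print("mean abs quantization error:", np.abs(dequantize(lo, scale, q) - grad).mean())
```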
Related papers
- Clipped Uniform Quantizers for Communication-Efficient Federated Learning [3.38220960870904]
This paper introduces an approach to employ clipped uniform quantization in federated learning settings.
By employing optimal clipping thresholds and adaptive quantization schemes, our method significantly curtails the bit requirements for model weight transmissions.
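As a rough illustration of the clipped uniform quantization described above: the sketch below, assuming numpy, clips at a simple percentile heuristic rather than the optimal thresholds derived in that paper.

```python
# Generic clipped uniform quantization (illustrative only; the referenced paper
# derives optimal clipping thresholds, whereas this clips at a percentile).
import numpy as np

def clipped_uniform_quantize(w: np.ndarray, num_bits: int, clip_pct: float = 99.5):
    c = max(float(np.percentile(np.abs(w), clip_pct)), 1e-12)  # clipping threshold
    clipped = np.clip(w, -c, c)
    levels = 2 ** num_bits - 1
    step = 2 * c / levels
    q = np.round((clipped + c) / step)                 # integer codes in [0, levels]
    return q.astype(np.uint16), c, step

def clipped_dequantize(q: np.ndarray, c: float, step: float) -> np.ndarray:
    return q.astype(np.float64) * step - c

w = np.random.randn(10_000)
q, c, step = clipped_uniform_quantize(w, num_bits=4)
print("max error vs. clipped weights:", np.abs(clipped_dequantize(q, c, step) - np.clip(w, -c, c)).max())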
arXiv Detail & Related papers (2024-05-22T05:48:25Z)
- Communication-Efficient Federated Learning through Adaptive Weight Clustering and Server-Side Distillation [10.541541376305245]
Federated Learning (FL) is a promising technique for the collaborative training of deep neural networks across multiple devices.
FL is hindered by excessive communication costs due to repeated server-client communication during training.
We propose FedCompress, a novel approach that combines dynamic weight clustering and server-side knowledge distillation.
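For illustration, a minimal codebook-style weight clustering sketch follows (plain Lloyd's k-means in numpy; the names are hypothetical, and FedCompress's dynamic clustering and server-side distillation procedure is not reproduced here).

```python
# Illustrative codebook compression of a weight tensor via 1-D k-means.
import numpy as np

def kmeans_1d(values: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Cluster scalar weights into k centroids; returns the codebook."""
    centroids = np.quantile(values, np.linspace(0, 1, k))     # spread-out init
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centroids[None, :]), axis=1)
        for j in range(k):
            members = values[labels == j]
            if members.size:
                centroids[j] = members.mean()
    return centroids

def cluster_quantize(w: np.ndarray, k: int = 16):
    codebook = kmeans_1d(w.ravel(), k)
    codes = np.argmin(np.abs(w.ravel()[:, None] - codebook[None, :]), axis=1)
    # Transmit the small codebook plus one code per weight (k <= 256 fits in uint8).
    return codes.astype(np.uint8).reshape(w.shape), codebook

w = np.random.randn(64, 32)
codes, codebook = cluster_quantize(w, k=16)            # ~4 bits/weight + tiny codebook
print("mean abs error:", np.abs(codebook[codes] - w).mean())
```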
arXiv Detail & Related papers (2024-01-25T14:49:15Z)
- Entangled Pair Resource Allocation under Uncertain Fidelity Requirements [59.83361663430336]
In quantum networks, effective entanglement routing facilitates communication between quantum source and quantum destination nodes.
We propose a resource allocation model for entangled pairs and an entanglement routing model with a fidelity guarantee.
Our proposed model can reduce the total cost by at least 20% compared to the baseline model.
arXiv Detail & Related papers (2023-04-10T07:16:51Z)
- Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach [54.311495894129585]
We study the limit of communication cost of model aggregation in distributed learning from a rate-distortion perspective.
The communication gain obtained by exploiting the correlation between worker nodes is found to be significant for SignSGD.
arXiv Detail & Related papers (2022-06-28T13:10:40Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- FedDQ: Communication-Efficient Federated Learning with Descending Quantization [5.881154276623056]
Federated learning (FL) is an emerging privacy-preserving distributed learning scheme.
FL suffers from critical communication bottleneck due to large model size and frequent model aggregation.
This paper proposes an opposite, descending approach to adaptive quantization.
arXiv Detail & Related papers (2021-10-05T18:56:28Z)
- Entanglement Rate Optimization in Heterogeneous Quantum Communication Networks [79.8886946157912]
Quantum communication networks are emerging as a promising technology that could constitute a key building block in future communication networks in the 6G era and beyond.
Recent advances led to the deployment of small- and large-scale quantum communication networks with real quantum hardware.
In quantum networks, entanglement is a key resource that allows for data transmission between different nodes.
arXiv Detail & Related papers (2021-05-30T11:34:23Z)
- Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning [75.45968495410047]
Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning.
Gradient quantization is an effective way of reducing the number of bits required to communicate each model update.
We propose an adaptive quantization strategy called AdaFL that aims to achieve communication efficiency as well as a low error floor.
arXiv Detail & Related papers (2021-02-08T19:14:21Z)
- CosSGD: Nonlinear Quantization for Communication-efficient Federated Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring their local data to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning.
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
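As a generic illustration of nonlinear quantization, the sketch below uses a mu-law-style logarithmic quantizer in numpy; this is an assumed stand-in for the idea of nonuniform quantization levels and is not the specific CosSGD scheme.

```python
# Illustrative nonlinear (mu-law style logarithmic) quantizer, which assigns
# finer levels to small-magnitude gradient entries; NOT the CosSGD method.
import numpy as np

def mulaw_quantize(g: np.ndarray, num_bits: int, mu: float = 255.0):
    m = float(np.abs(g).max()) or 1.0                  # avoid division by zero
    x = g / m                                          # normalize to [-1, 1]
    compressed = np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)
    levels = 2 ** num_bits - 1
    q = np.round((compressed + 1) / 2 * levels)        # codes in [0, levels]
    return q.astype(np.uint8), m

def mulaw_dequantize(q: np.ndarray, m: float, num_bits: int, mu: float = 255.0):
    levels = 2 ** num_bits - 1
    compressed = q.astype(np.float64) / levels * 2 - 1
    x = np.sign(compressed) * np.expm1(np.abs(compressed) * np.log1p(mu)) / mu
    return x * m

g = np.random.randn(1000) * np.random.rand(1000)       # heavy-tailed-ish gradient
q, m = mulaw_quantize(g, num_bits=4)
print("mean abs error:", np.abs(mulaw_dequantize(q, m, 4) - g).mean())
```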
arXiv Detail & Related papers (2020-12-15T12:20:28Z)
- Design and Analysis of Uplink and Downlink Communications for Federated Learning [18.634770589573733]
Communication is known to be one of the primary bottlenecks of federated learning (FL).
We focus on the design and analysis of physical layer quantization and transmission methods for wireless FL.
arXiv Detail & Related papers (2020-12-07T21:01:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.