On the Tradeoff between Energy, Precision, and Accuracy in Federated
Quantized Neural Networks
- URL: http://arxiv.org/abs/2111.07911v2
- Date: Wed, 17 Nov 2021 16:25:44 GMT
- Title: On the Tradeoff between Energy, Precision, and Accuracy in Federated
Quantized Neural Networks
- Authors: Minsu Kim, Walid Saad, Mohammad Mozaffari, and Merouane Debbah
- Abstract summary: Federated learning (FL) over wireless networks requires balancing between accuracy, energy efficiency, and precision.
We propose a quantized FL framework that represents data with a finite level of precision in both local training and uplink transmission.
Our framework can reduce energy consumption by up to 53% compared to a standard FL model.
- Score: 68.52621234990728
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Deploying federated learning (FL) over wireless networks with
resource-constrained devices requires balancing between accuracy, energy
efficiency, and precision. Prior art on FL often requires devices to train deep
neural networks (DNNs) using a 32-bit precision level for data representation
to improve accuracy. However, such algorithms are impractical for
resource-constrained devices since DNNs could require execution of millions of
operations. Thus, training DNNs with a high precision level incurs a high
energy cost for FL. In this paper, a quantized FL framework that represents
data with a finite level of precision in both local training and uplink
transmission is proposed. Here, the finite level of precision is captured
through the use of quantized neural networks (QNNs) that quantize weights and
activations in fixed-precision format. In the considered FL model, each device
trains its QNN and transmits a quantized training result to the base station.
Energy models for the local training and the transmission with the quantization
are rigorously derived. An energy minimization problem is formulated with
respect to the level of precision while ensuring convergence. To solve the
problem, we first analytically derive the FL convergence rate and use a line
search method. Simulation results show that our FL framework can reduce energy
consumption by up to 53% compared to a standard FL model. The results also shed
light on the tradeoff between precision, energy, and accuracy in FL over
wireless networks.
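The abstract describes quantizing weights and activations in a fixed-precision (fixed-point) format. As a minimal illustrative sketch only (the paper's exact quantizer, bit allocation, and rounding rule are not specified in this abstract, so the signed two's-complement layout and the `n_bits`/`frac_bits` split below are assumptions):

```python
import numpy as np

def quantize_fixed_point(x, n_bits, frac_bits):
    """Quantize an array to a signed fixed-point grid with n_bits total
    bits, frac_bits of them fractional: round to the nearest level, then
    clip to the representable range."""
    scale = 2.0 ** frac_bits
    lo = -(2 ** (n_bits - 1)) / scale        # most negative representable value
    hi = (2 ** (n_bits - 1) - 1) / scale     # most positive representable value
    return np.clip(np.round(x * scale) / scale, lo, hi)

# Example: weights quantized to an assumed 8-bit format with 6 fractional bits
w = np.array([0.7312, -0.1098, 1.9999, -3.5])
w_q = quantize_fixed_point(w, n_bits=8, frac_bits=6)
```

Fewer bits coarsen the grid and widen the clipping error, which is the precision/accuracy side of the energy tradeoff the paper studies.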
Related papers
- Performance Optimization for Variable Bitwidth Federated Learning in
Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Green, Quantized Federated Learning over Wireless Networks: An
Energy-Efficient Design [68.86220939532373]
The finite precision level is captured through the use of quantized neural networks (QNNs) that quantize weights and activations in fixed-precision format.
The proposed FL framework can reduce energy consumption until convergence by up to 70% compared to a baseline FL algorithm.
arXiv Detail & Related papers (2022-07-19T16:37:24Z)
- SlimFL: Federated Learning with Superposition Coding over Slimmable
Neural Networks [56.68149211499535]
Federated learning (FL) is a key enabler for efficient communication and computing leveraging devices' distributed computing capabilities.
This paper proposes a novel learning framework by integrating FL and width-adjustable slimmable neural networks (SNNs).
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2022-03-26T15:06:13Z) - FxP-QNet: A Post-Training Quantizer for the Design of Mixed
Low-Precision DNNs with Dynamic Fixed-Point Representation [2.4149105714758545]
We propose a novel framework referred to as the Fixed-Point Quantizer of deep neural Networks (FxP-QNet).
FxP-QNet adapts the quantization level for each data-structure of each layer based on the trade-off between the network accuracy and the low-precision requirements.
Results show that FxP-QNet-quantized AlexNet, VGG-16, and ResNet-18 reduce the overall memory requirements of their full-precision counterparts by 7.16x, 10.36x, and 6.44x with less than 0.95%, 0.95%, and 1.99% accuracy drop, respectively.
arXiv Detail & Related papers (2022-03-22T23:01:43Z) - Exploring Deep Reinforcement Learning-Assisted Federated Learning for
Online Resource Allocation in EdgeIoT [53.68792408315411]
Federated learning (FL) has been increasingly considered to preserve data training privacy from eavesdropping attacks in mobile edge computing-based Internet of Things (EdgeIoT).
We propose a new federated learning-enabled twin-delayed deep deterministic policy gradient (FL-DLT3) framework to achieve the optimal accuracy and energy balance in a continuous domain.
Numerical results demonstrate that the proposed FL-DLT3 achieves fast convergence (less than 100 iterations) while the FL accuracy-to-energy consumption ratio is improved by 51.8% compared to existing state-of-the-art benchmarks.
arXiv Detail & Related papers (2022-02-15T13:36:15Z)
- MARViN -- Multiple Arithmetic Resolutions Vacillating in Neural Networks [0.0]
We introduce MARViN, a new quantized training strategy using information theory-based intra-epoch precision switching.
We achieve an average speedup of 1.86x compared to a float32 baseline while limiting mean accuracy degradation on AlexNet/ResNet to only -0.075%.
arXiv Detail & Related papers (2021-07-28T16:57:05Z)
- Delay Minimization for Federated Learning Over Wireless Communication
Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
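The last entry above only states that a bisection search yields the optimal solution; the paper's actual delay objective is not given in this summary. As a generic sketch of the technique, assuming a monotonically increasing function whose sign change brackets the optimum:

```python
def bisection(f, lo, hi, tol=1e-6):
    """Find the root of a monotonically increasing function f on [lo, hi]
    by repeatedly halving the bracketing interval."""
    assert f(lo) <= 0.0 <= f(hi), "root must be bracketed by [lo, hi]"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) <= 0.0:
            lo = mid          # root lies in the upper half
        else:
            hi = mid          # root lies in the lower half
    return 0.5 * (lo + hi)

# Example: solve x**3 - 2 = 0, i.e. recover the cube root of 2
root = bisection(lambda x: x**3 - 2.0, 0.0, 2.0)
```

Each iteration halves the interval, so the error after k steps is bounded by (hi - lo) / 2**k, which is what makes bisection attractive for one-dimensional resource-allocation subproblems.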
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.