Green, Quantized Federated Learning over Wireless Networks: An
Energy-Efficient Design
- URL: http://arxiv.org/abs/2207.09387v3
- Date: Tue, 11 Jul 2023 21:52:09 GMT
- Title: Green, Quantized Federated Learning over Wireless Networks: An
Energy-Efficient Design
- Authors: Minsu Kim, Walid Saad, Mohammad Mozaffari, Merouane Debbah
- Abstract summary: The finite precision level is captured through the use of quantized neural networks (QNNs) that quantize weights and activations in fixed-precision format.
The proposed FL framework can reduce energy consumption until convergence by up to 70% compared to a baseline FL algorithm.
- Score: 68.86220939532373
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper, a green-quantized FL framework, which represents data with a
finite precision level in both local training and uplink transmission, is
proposed. Here, the finite precision level is captured through the use of
quantized neural networks (QNNs) that quantize weights and activations in
fixed-precision format. In the considered FL model, each device trains its QNN
and transmits a quantized training result to the base station. Energy models
for the local training and the transmission with quantization are rigorously
derived. To minimize the energy consumption and the number of communication
rounds simultaneously, a multi-objective optimization problem is formulated
with respect to the number of local iterations, the number of selected devices,
and the precision levels for both local training and transmission while
ensuring convergence under a target accuracy constraint. To solve this problem,
the convergence rate of the proposed FL system is analytically derived with
respect to the system control variables. Then, the Pareto boundary of the
problem is characterized to provide efficient solutions using the normal
boundary inspection method. Design insights on balancing the tradeoff between
the two objectives while achieving a target accuracy are drawn from using the
Nash bargaining solution and analyzing the derived convergence rate. Simulation
results show that the proposed FL framework can reduce energy consumption until
convergence by up to 70% compared to a baseline FL algorithm that represents
data with full precision without damaging the convergence rate.
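The fixed-precision representation described in the abstract can be illustrated with a minimal quantizer sketch. This is not the paper's exact scheme; it assumes a symmetric n-bit fixed-point grid on a clipped range, with optional stochastic rounding (which keeps the quantizer unbiased in expectation, a property convergence analyses of quantized FL typically rely on):

```python
import numpy as np

def quantize_fixed_point(x, n_bits, clip=1.0, stochastic=True):
    """Quantize a tensor to symmetric n-bit fixed-precision levels in [-clip, clip].

    Hypothetical illustration of QNN-style quantization, not the paper's
    exact quantizer. Stochastic rounding makes the output unbiased.
    """
    levels = 2 ** (n_bits - 1) - 1              # symmetric levels around zero
    scaled = np.clip(x, -clip, clip) * levels / clip
    if stochastic:
        floor = np.floor(scaled)
        # round up with probability equal to the fractional part
        scaled = floor + (np.random.rand(*scaled.shape) < (scaled - floor))
    else:
        scaled = np.round(scaled)
    return scaled * clip / levels

rng = np.random.default_rng(0)
w = rng.standard_normal(1000) * 0.3
w_q = quantize_fixed_point(w, n_bits=4, stochastic=False)
```

With 4 bits there are at most 2 * 7 + 1 = 15 distinct values, which is what lowers both the local-training and uplink-transmission energy at the cost of quantization error.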
Related papers
- Semi-Federated Learning: Convergence Analysis and Optimization of A
Hybrid Learning Framework [70.83511997272457]
We propose a semi-federated learning (SemiFL) paradigm to leverage both the base station (BS) and devices for a hybrid implementation of centralized learning (CL) and FL.
We propose a two-stage algorithm to solve this intractable problem, in which we provide the closed-form solutions to the beamformers.
arXiv Detail & Related papers (2023-10-04T03:32:39Z) - Scaling Limits of Quantum Repeater Networks [62.75241407271626]
Quantum networks (QNs) are a promising platform for secure communications, enhanced sensing, and efficient distributed quantum computing.
Due to the fragile nature of quantum states, these networks face significant challenges in terms of scalability.
In this paper, the scaling limits of quantum repeater networks (QRNs) are analyzed.
arXiv Detail & Related papers (2023-05-15T14:57:01Z) - Optimal Privacy Preserving for Federated Learning in Mobile Edge
Computing [35.57643489979182]
Federated Learning (FL) with quantization and deliberately added noise over wireless networks is a promising approach to preserve user differential privacy (DP).
This article aims to jointly optimize the quantization and Binomial mechanism parameters and communication resources to maximize the convergence rate under the constraints of the wireless network and DP requirement.
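The Binomial mechanism mentioned above can be sketched as follows. This is an illustrative stand-in, not the article's exact parameterization: values are quantized to an integer grid and centered Binomial noise (a discrete analogue of Gaussian noise, with variance m_trials / 4) is added before transmission:

```python
import numpy as np

def binomial_mechanism(x, n_bits, m_trials, clip=1.0, rng=None):
    """Quantize x to an n_bits integer grid, then add centered Binomial noise.

    Hypothetical sketch: m_trials controls the noise variance (m_trials / 4)
    and hence the differential-privacy level on the integer-valued data.
    """
    rng = rng or np.random.default_rng()
    levels = 2 ** (n_bits - 1) - 1
    q = np.round(np.clip(x, -clip, clip) * levels / clip).astype(int)
    noise = rng.binomial(m_trials, 0.5, size=q.shape) - m_trials // 2
    return (q + noise) * clip / levels

# toy usage: noisy reports of an all-zero vector average back to ~0
out = binomial_mechanism(np.zeros(100_000), n_bits=8, m_trials=100,
                         rng=np.random.default_rng(1))
```

Because the noise is zero-mean, aggregation at the server averages it out, which is why the convergence rate degrades gracefully with the DP requirement.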
arXiv Detail & Related papers (2022-11-14T07:54:14Z) - Performance Optimization for Variable Bitwidth Federated Learning in
Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
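The idea of treating bitwidth selection as a sequential decision problem can be sketched with a minimal bandit-style Q-learning loop. The reward model below is a hypothetical stand-in for the accuracy/energy feedback a real FL system would measure, not the paper's model-based RL method:

```python
import numpy as np

BITWIDTHS = [2, 4, 8, 16]        # candidate precision levels (actions)
N_ROUNDS = 2000
rng = np.random.default_rng(0)

q_values = np.zeros(len(BITWIDTHS))
alpha, eps = 0.1, 0.1            # learning rate, exploration probability

def reward(bits, rng):
    # hypothetical trade-off: accuracy gain saturates with precision,
    # while energy cost grows linearly in the bitwidth
    accuracy_gain = 1.0 - 2.0 ** (-bits)
    energy_cost = 0.04 * bits
    return accuracy_gain - energy_cost + 0.01 * rng.standard_normal()

for _ in range(N_ROUNDS):
    # epsilon-greedy action selection over bitwidths
    a = rng.integers(len(BITWIDTHS)) if rng.random() < eps else int(np.argmax(q_values))
    q_values[a] += alpha * (reward(BITWIDTHS[a], rng) - q_values[a])

best = BITWIDTHS[int(np.argmax(q_values))]
```

Under this toy reward, the learner settles on an intermediate bitwidth: higher precision stops paying for its extra energy once the accuracy gain saturates.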
arXiv Detail & Related papers (2022-09-21T08:52:51Z) - Wireless Quantized Federated Learning: A Joint Computation and
Communication Design [36.35684767732552]
In this paper, we aim to minimize the total convergence time of FL, by quantizing the local model parameters prior to uplink transmission.
We jointly optimize the computing, communication resources and number of quantization bits, in order to guarantee minimized convergence time across all global rounds.
arXiv Detail & Related papers (2022-03-11T12:30:08Z) - On the Tradeoff between Energy, Precision, and Accuracy in Federated
Quantized Neural Networks [68.52621234990728]
Federated learning (FL) over wireless networks requires balancing between accuracy, energy efficiency, and precision.
We propose a quantized FL framework that represents data with a finite level of precision in both local training and uplink transmission.
Our framework can reduce energy consumption by up to 53% compared to a standard FL model.
arXiv Detail & Related papers (2021-11-15T17:00:03Z) - Efficient training of physics-informed neural networks via importance
sampling [2.9005223064604078]
Physics-Informed Neural Networks (PINNs) are a class of deep neural networks trained to solve systems governed by partial differential equations (PDEs).
We show that an importance sampling approach will improve the convergence behavior of PINNs training.
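Residual-based importance sampling of collocation points can be sketched as follows. A real PINN would compute PDE residuals with automatic differentiation; here a hypothetical closed-form residual stands in for illustration:

```python
import numpy as np

def residual(x):
    # pretend the network's PDE residual is large near x = 0.8
    return np.exp(-50.0 * (x - 0.8) ** 2) + 0.05

rng = np.random.default_rng(0)
candidates = rng.uniform(0.0, 1.0, size=10_000)   # uniform candidate pool

# sample training points with probability proportional to the residual,
# concentrating effort where the PDE is violated the most
p = residual(candidates)
p /= p.sum()
batch = rng.choice(candidates, size=512, replace=False, p=p)
```

Each training step then fits the network on `batch` rather than on uniformly drawn points, which is the mechanism behind the improved convergence behavior.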
arXiv Detail & Related papers (2021-04-26T02:45:10Z) - Delay Minimization for Federated Learning Over Wireless Communication
Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
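Bisection search of the kind mentioned above can be sketched generically. In the delay-minimization setting, the function would encode whether a candidate delay budget is achievable; the root-finding shell itself is standard:

```python
def bisect(f, lo, hi, tol=1e-8):
    """Find a root of a monotone function f on [lo, hi] by bisection.

    Bisection halves the interval each step, so it converges in
    O(log((hi - lo) / tol)) evaluations of f.
    """
    assert f(lo) <= 0 <= f(hi), "root must be bracketed"
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) <= 0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# toy usage: solve x**3 - 2 == 0 on [0, 2]
root = bisect(lambda x: x ** 3 - 2.0, 0.0, 2.0)
```

The monotonicity of the feasibility condition in the delay budget is what makes bisection yield the optimal solution rather than just a local one.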
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.