GQFedWAvg: Optimization-Based Quantized Federated Learning in General
Edge Computing Systems
- URL: http://arxiv.org/abs/2306.07497v2
- Date: Sat, 18 Nov 2023 11:43:33 GMT
- Title: GQFedWAvg: Optimization-Based Quantized Federated Learning in General
Edge Computing Systems
- Authors: Yangchen Li, Ying Cui, and Vincent Lau
- Abstract summary: The optimal implementation of federated learning (FL) in practical edge computing systems has been an outstanding problem.
We propose an optimization-based quantized FL algorithm that can appropriately fit a general edge computing system with uniform or nonuniform computing and communication resources at the workers.
- Score: 11.177402054314674
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The optimal implementation of federated learning (FL) in practical edge
computing systems has been an outstanding problem. In this paper, we propose an
optimization-based quantized FL algorithm, which can appropriately fit a
general edge computing system with uniform or nonuniform computing and
communication resources at the workers. Specifically, we first present a new
random quantization scheme and analyze its properties. Then, we propose a
general quantized FL algorithm, namely GQFedWAvg. Specifically, GQFedWAvg
applies the proposed quantization scheme to quantize wisely chosen model
update-related vectors and adopts a generalized mini-batch stochastic gradient
descent (SGD) method with the weighted average of local model updates in global
model aggregation. Besides, GQFedWAvg has several adjustable algorithm
parameters to flexibly adapt to the computing and communication resources at
the server and workers. We also analyze the convergence of GQFedWAvg. Next, we
optimize the algorithm parameters of GQFedWAvg to minimize the convergence
error under the time and energy constraints. We successfully tackle the
challenging non-convex problem using general inner approximation (GIA) and
multiple delicate tricks. Finally, we interpret GQFedWAvg's function principle
and show its considerable gains over existing FL algorithms using numerical
results.
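As a rough illustration of two ingredients described in the abstract, namely an unbiased random quantizer applied to model-update-related vectors and a weighted average of local updates in global aggregation, the following minimal Python sketch puts them together. The specific quantizer, the weighting rule, and all names below are illustrative assumptions, not the paper's actual scheme or parameter choices.

import numpy as np

def random_quantize(v, num_levels=16):
    # Unbiased stochastic quantization of a vector onto num_levels levels,
    # scaled by its largest magnitude. A common construction used here only
    # for illustration; the paper's scheme and tunable parameters may differ.
    scale = np.max(np.abs(v))
    if scale == 0.0:
        return v.copy()
    normalized = np.abs(v) / scale * (num_levels - 1)
    lower = np.floor(normalized)
    # Round up with probability equal to the fractional part, so that the
    # quantized vector equals v in expectation (unbiasedness).
    levels = lower + (np.random.rand(*v.shape) < normalized - lower)
    return np.sign(v) * levels / (num_levels - 1) * scale

def aggregate(global_model, local_updates, weights):
    # Weighted average of (quantized) local model updates, e.g. with weights
    # reflecting each worker's data size or local computation.
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return global_model + sum(w * u for w, u in zip(weights, local_updates))

# Toy usage: three workers send quantized updates of a 5-dimensional model.
rng = np.random.default_rng(0)
model = np.zeros(5)
updates = [random_quantize(rng.normal(size=5)) for _ in range(3)]
model = aggregate(model, updates, weights=[3.0, 1.0, 2.0])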
Related papers
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and AML.
This paper proposes federated conditional stochastic optimization algorithms for distributed federated learning.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance reduction technique in cross-silo FL.
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- AskewSGD: An Annealed interval-constrained Optimisation method to train Quantized Neural Networks [12.229154524476405]
We develop a new algorithm, Annealed Skewed SGD - AskewSGD - for training deep neural networks (DNNs) with quantized weights.
Unlike algorithms with active sets and feasible directions, AskewSGD avoids projections or optimization under the entire feasible set.
Experimental results show that the AskewSGD algorithm performs better than or on par with state-of-the-art methods on classical benchmarks.
arXiv Detail & Related papers (2022-11-07T18:13:44Z)
- On the Convergence to a Global Solution of Shuffling-Type Gradient Algorithms [18.663264755108703]
Stochastic gradient descent (SGD) is the method of choice in many machine learning tasks.
In this paper, we show that shuffling-type SGD achieves the desired computational complexity as in the general convex setting.
arXiv Detail & Related papers (2022-06-13T01:25:59Z)
- An Optimization Framework for Federated Edge Learning [11.007444733506714]
This paper considers an edge computing system where the server and workers have possibly different computing and communication capabilities.
We first present a general FL algorithm, namely GenQSGD, parameterized by the numbers of global and local iterations, the mini-batch size, and the step-size sequence (a toy sketch of this parameterization appears after this list).
arXiv Detail & Related papers (2021-11-26T14:47:32Z)
- Optimization-Based GenQSGD for Federated Edge Learning [12.371264770814097]
We present a generalized parallel mini-batch stochastic gradient descent (SGD) algorithm for federated learning (FL).
We optimize the algorithm parameters to minimize the energy cost under time and convergence error constraints.
Results demonstrate significant gains over existing FL algorithms.
arXiv Detail & Related papers (2021-10-25T14:25:11Z)
- AsySQN: Faster Vertical Federated Learning Algorithms with Better Computation Resource Utilization [159.75564904944707]
We propose an asynchronous stochastic quasi-Newton (AsySQN) framework for vertical federated learning (VFL).
The proposed algorithms make descent steps scaled by approximate Hessian information without calculating the inverse Hessian matrix explicitly.
We show that the adopted asynchronous computation can make better use of the computation resources.
arXiv Detail & Related papers (2021-09-26T07:56:10Z)
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
- FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as the Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model.
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence guarantees for non-PL objectives while simultaneously outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)
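As referenced in the GenQSGD-related entries above, several of these algorithms are parameterized by the numbers of global and local iterations, the mini-batch size, and a step-size sequence. The toy sketch below only illustrates that parameterization on a synthetic least-squares problem; the loop structure, the unweighted aggregation, and all names are illustrative assumptions rather than any paper's exact algorithm.

import numpy as np

def local_sgd(model, data_x, data_y, num_local_iters, batch_size, step_sizes, rng):
    # Run mini-batch SGD locally on a least-squares objective and return the
    # accumulated local model update to be sent to the server.
    w = model.copy()
    for t in range(num_local_iters):
        idx = rng.choice(len(data_x), size=batch_size, replace=False)
        grad = data_x[idx].T @ (data_x[idx] @ w - data_y[idx]) / batch_size
        w -= step_sizes[t] * grad
    return w - model

rng = np.random.default_rng(1)
dim, num_workers = 4, 3
num_global_iters, num_local_iters, batch_size = 20, 5, 8
step_sizes = [0.1 / (1 + t) for t in range(num_local_iters)]  # a decaying sequence

# Each worker holds its own synthetic local dataset.
w_true = rng.normal(size=dim)
datasets = []
for _ in range(num_workers):
    X = rng.normal(size=(50, dim))
    datasets.append((X, X @ w_true + 0.01 * rng.normal(size=50)))

model = np.zeros(dim)
for k in range(num_global_iters):
    updates = [local_sgd(model, X, y, num_local_iters, batch_size, step_sizes, rng)
               for X, y in datasets]
    model += np.mean(updates, axis=0)  # simple unweighted aggregation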