An Optimization Framework for Federated Edge Learning
- URL: http://arxiv.org/abs/2111.13526v1
- Date: Fri, 26 Nov 2021 14:47:32 GMT
- Title: An Optimization Framework for Federated Edge Learning
- Authors: Yangchen Li, Ying Cui, and Vincent Lau
- Abstract summary: This paper considers an edge computing system where the server and workers have possibly different computing and communication capabilities.
We first present a general FL algorithm, namely GenQSGD, parameterized by the numbers of global and local iterations, mini-batch size, and step size sequence.
- Score: 11.007444733506714
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The optimal design of federated learning (FL) algorithms for solving general
machine learning (ML) problems in practical edge computing systems with
quantized message passing remains an open problem. This paper considers an edge
computing system where the server and workers have possibly different computing
and communication capabilities and employ quantization before transmitting
messages. To explore the full potential of FL in such an edge computing system,
we first present a general FL algorithm, namely GenQSGD, parameterized by the
numbers of global and local iterations, mini-batch size, and step size
sequence. Then, we analyze its convergence for an arbitrary step size sequence
and specify the convergence results under three commonly adopted step size
rules, namely the constant, exponential, and diminishing step size rules. Next,
we optimize the algorithm parameters to minimize the energy cost under the time
constraint and convergence error constraint, with a focus on the overall
implementation process of FL. Specifically, for any given step size sequence
under each considered step size rule, we optimize the numbers of global and
local iterations and mini-batch size to optimally implement FL for applications
with preset step size sequences. We also optimize the step size sequence along
with these algorithm parameters to explore the full potential of FL. The
resulting optimization problems are challenging non-convex problems with
non-differentiable constraint functions. We propose iterative algorithms to
obtain KKT points using general inner approximation (GIA) and tricks for
solving complementary geometric programming (CGP). Finally, we numerically
demonstrate the remarkable gains of GenQSGD with optimized algorithm parameters
over existing FL algorithms and reveal the significance of optimally designing
general FL algorithms.
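As a rough illustration of the algorithm template described in the abstract, below is a minimal Python sketch of a GenQSGD-style loop with quantized message passing and the three step size rules (constant, exponential, diminishing). The quantizer, the local gradient oracle, and all default values are illustrative assumptions, not the paper's exact specification.

```python
# Hypothetical sketch of a GenQSGD-style loop: K0 global iterations, K local SGD
# steps per worker, mini-batch size B, a chosen step size rule, and stochastic
# quantization of the messages exchanged with the server. All names and defaults
# are illustrative assumptions, not the paper's exact algorithm.
import numpy as np

def quantize(v, levels=16):
    """Stochastic uniform quantization of a vector onto `levels` levels."""
    scale = np.max(np.abs(v)) + 1e-12
    u = np.abs(v) / scale * (levels - 1)             # map magnitudes to [0, levels-1]
    low = np.floor(u)
    round_up = np.random.rand(*v.shape) < (u - low)  # unbiased randomized rounding
    return np.sign(v) * (low + round_up) * scale / (levels - 1)

def step_size(k, rule="diminishing", gamma0=0.1, rho=0.95):
    """The three commonly adopted rules mentioned in the abstract."""
    if rule == "constant":
        return gamma0
    if rule == "exponential":
        return gamma0 * rho ** k
    return gamma0 / (k + 1)                          # diminishing

def genqsgd_sketch(local_gradient, theta0, n_workers=4, K0=50, K=5, B=32,
                   rule="diminishing"):
    """local_gradient(worker, theta, B) -> mini-batch gradient (user supplied)."""
    theta = np.array(theta0, dtype=float)
    for k in range(K0):                              # global iterations
        updates = []
        for w in range(n_workers):                   # in practice: run in parallel
            local = theta.copy()
            for _ in range(K):                       # local iterations
                local -= step_size(k, rule) * local_gradient(w, local, B)
            updates.append(quantize(local - theta))  # quantized message to server
        theta += quantize(np.mean(updates, axis=0))  # quantized aggregate broadcast
    return theta
```

The numbers of global and local iterations (K0, K), the mini-batch size B, and the step size rule are exactly the algorithm parameters that the paper optimizes under the time and convergence error constraints.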
Related papers
- Federated Conditional Stochastic Optimization [110.513884892319]
Conditional stochastic optimization has found applications in a wide range of machine learning tasks, such as invariant learning, AUPRC maximization, and MAML.
This paper proposes algorithms for federated conditional stochastic optimization.
arXiv Detail & Related papers (2023-10-04T01:47:37Z)
- Accelerating Cutting-Plane Algorithms via Reinforcement Learning Surrogates [49.84541884653309]
A current standard approach to solving convex discrete optimization problems is the use of cutting-plane algorithms.
Despite the existence of a number of general-purpose cut-generating algorithms, large-scale discrete optimization problems continue to suffer from intractability.
We propose a method for accelerating cutting-plane algorithms via reinforcement learning.
arXiv Detail & Related papers (2023-07-17T20:11:56Z)
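The following skeleton is a loose illustration of the idea in the entry above: a generic cutting-plane loop in which a scoring function, standing in for the learned reinforcement-learning surrogate, decides which violated cut to add next. All callables are hypothetical placeholders rather than the paper's method.

```python
# Generic cutting-plane loop with a learned cut scorer (illustrative only).
# solve_relaxation, candidate_cuts, is_feasible and score_cut are assumed to be
# user-supplied callables for the problem at hand.
def cutting_planes_with_learned_scorer(solve_relaxation, candidate_cuts,
                                       is_feasible, score_cut, max_rounds=100):
    cuts = []                                        # accumulated valid inequalities
    x = None
    for _ in range(max_rounds):
        x = solve_relaxation(cuts)                   # solve relaxation with current cuts
        if is_feasible(x):
            break                                    # relaxed solution already feasible
        pool = candidate_cuts(x)                     # separation: cuts violated by x
        if not pool:
            break                                    # no separating cut found
        best = max(pool, key=lambda cut: score_cut(x, cut))  # surrogate-guided choice
        cuts.append(best)
    return x, cuts
```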
- GQFedWAvg: Optimization-Based Quantized Federated Learning in General Edge Computing Systems [11.177402054314674]
The optimal implementation of federated learning (FL) in practical edge computing has been an outstanding problem.
We propose an optimization-based quantized FL algorithm that can appropriately fit a general edge computing system with uniform or nonuniform computing and communication resources.
arXiv Detail & Related papers (2023-06-13T02:18:24Z)
- Iterative-Free Quantum Approximate Optimization Algorithm Using Neural Networks [20.051757447006043]
We propose a practical method that uses a simple, fully connected neural network to find better parameters tailored to a new given problem instance.
Our method is consistently the fastest to converge while also achieving the best final result.
arXiv Detail & Related papers (2022-08-21T14:05:11Z)
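To illustrate the mechanism described in the entry above (a small fully connected network that maps a problem instance to circuit parameters), here is a minimal NumPy sketch; the feature encoding, layer sizes, and output convention are assumptions made for the example.

```python
# Toy fully connected network mapping instance features to QAOA angles
# (gamma, beta) for p layers. Sizes and encoding are illustrative assumptions.
import numpy as np

def init_mlp(n_features, hidden=32, p_layers=3, seed=0):
    rng = np.random.default_rng(seed)
    return {"W1": rng.normal(0.0, 0.1, (n_features, hidden)), "b1": np.zeros(hidden),
            "W2": rng.normal(0.0, 0.1, (hidden, 2 * p_layers)), "b2": np.zeros(2 * p_layers)}

def predict_qaoa_angles(params, features):
    h = np.tanh(features @ params["W1"] + params["b1"])  # hidden layer
    out = h @ params["W2"] + params["b2"]                # 2*p raw outputs
    gammas, betas = np.split(out, 2)                     # one (gamma, beta) pair per layer
    return gammas, betas

# Usage with hypothetical instance features (e.g. graph statistics):
# params = init_mlp(n_features=8); g, b = predict_qaoa_angles(params, np.random.rand(8))
```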
- Optimization-Based GenQSGD for Federated Edge Learning [12.371264770814097]
We present a generalized parallel mini-batch stochastic gradient descent (SGD) algorithm for federated learning (FL).
We optimize the algorithm parameters to minimize the energy cost under time and convergence error constraints.
Numerical results demonstrate significant gains over existing FL algorithms.
arXiv Detail & Related papers (2021-10-25T14:25:11Z)
- DESTRESS: Computation-Optimal and Communication-Efficient Decentralized Nonconvex Finite-Sum Optimization [43.31016937305845]
Internet-of-things, networked sensing, autonomous systems, and federated learning call for decentralized algorithms for finite-sum optimization.
We develop DEcentralized STochastic REcurSive methods (DESTRESS) for nonconvex finite-sum optimization.
Detailed theoretical and numerical comparisons show that DESTRESS improves upon prior decentralized algorithms.
arXiv Detail & Related papers (2021-10-04T03:17:41Z)
- Quantum constraint learning for quantum approximate optimization algorithm [0.0]
This paper introduces a quantum machine learning approach to learn the mixer Hamiltonian required to hard constrain the search subspace.
One can directly plug the learnt unitary into the QAOA framework using an adaptable ansatz.
We also develop an intuitive metric that uses Wasserstein distance to assess the performance of general approximate optimization algorithms with/without constraints.
arXiv Detail & Related papers (2021-05-14T11:31:14Z)
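As a small illustration of the Wasserstein-based assessment mentioned in the entry above, the sketch below scores an approximate optimizer by the 1-D Wasserstein distance between its sampled objective values and the optimum; the exact metric in the paper may be defined differently.

```python
# Score an approximate optimizer by how far the distribution of its sampled
# objective values is from the ideal distribution concentrated at the optimum.
# This is an illustrative stand-in, not the paper's exact metric.
import numpy as np
from scipy.stats import wasserstein_distance

def wasserstein_quality(sampled_costs, optimal_cost):
    """Smaller is better: 1-D Wasserstein distance to a point mass at the optimum."""
    sampled_costs = np.asarray(sampled_costs, dtype=float)
    return wasserstein_distance(sampled_costs, [optimal_cost])

# Usage with hypothetical data: costs measured over 1000 shots of the optimizer
# score = wasserstein_quality(measured_costs, optimal_cost=-12.0)
```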
- Adaptive pruning-based optimization of parameterized quantum circuits [62.997667081978825]
Variational hybrid quantum-classical algorithms are powerful tools to maximize the use of Noisy Intermediate-Scale Quantum devices.
We propose a strategy for the ansätze used in variational quantum algorithms, which we call Parameter-Efficient Circuit Training (PECT).
Instead of optimizing all of the ansatz parameters at once, PECT launches a sequence of variational algorithms.
arXiv Detail & Related papers (2020-10-01T18:14:11Z)
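The entry above describes optimizing the ansatz parameters in stages rather than all at once; the sketch below shows one way such a block-wise schedule could look. The objective, blocking scheme, and optimizer are assumptions for illustration rather than the PECT procedure itself.

```python
# Illustrative block-wise parameter optimization: optimize small subsets of the
# parameter vector in sequence, keeping the rest fixed. Not the actual PECT scheme.
import numpy as np
from scipy.optimize import minimize

def blockwise_optimize(objective, theta0, block_size=4, sweeps=3):
    theta = np.array(theta0, dtype=float)
    for _ in range(sweeps):                                  # repeated passes over blocks
        for start in range(0, theta.size, block_size):
            idx = slice(start, min(start + block_size, theta.size))

            def sub_objective(block, idx=idx):               # objective of one block only
                full = theta.copy()
                full[idx] = block
                return objective(full)

            theta[idx] = minimize(sub_objective, theta[idx], method="COBYLA").x
    return theta

# Usage with a hypothetical cost function over 12 ansatz parameters:
# theta_opt = blockwise_optimize(lambda t: np.sum(np.sin(t) ** 2), np.zeros(12))
```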
- Iterative Algorithm Induced Deep-Unfolding Neural Networks: Precoding Design for Multiuser MIMO Systems [59.804810122136345]
We propose a framework for deep-unfolding, where a general form of iterative algorithm induced deep-unfolding neural network (IAIDNN) is developed.
An efficient IAIDNN based on the structure of the classic weighted minimum mean-square error (WMMSE) iterative algorithm is developed.
We show that the proposed IAIDNN efficiently achieves the performance of the iterative WMMSE algorithm with reduced computational complexity.
arXiv Detail & Related papers (2020-06-15T02:57:57Z)
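To give a flavor of deep unfolding as described in the entry above, the toy sketch below unrolls a fixed iterative update into a small number of "layers", each with its own trainable parameter; it illustrates the unfolding idea only and is not the IAIDNN/WMMSE architecture.

```python
# Toy deep-unfolding forward pass: L unrolled iterations of a classic update,
# with one learnable parameter (here a step size) per "layer". Illustrative only.
import numpy as np

def unfolded_forward(x0, grad_fn, layer_steps):
    """layer_steps[l] plays the role of the l-th layer's trainable weight."""
    x = np.array(x0, dtype=float)
    for step in layer_steps:            # one layer per unrolled iteration
        x = x - step * grad_fn(x)       # classic update with a learnable step size
    return x

# Usage: unfold 5 gradient steps of f(x) = ||x||^2 (hypothetical example)
# out = unfolded_forward(np.ones(8), lambda x: 2.0 * x, [0.3, 0.25, 0.2, 0.15, 0.1])
```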
- Convergence of adaptive algorithms for weakly convex constrained optimization [59.36386973876765]
We prove the $\tilde{\mathcal{O}}(t^{-1/4})$ rate of convergence for the norm of the gradient of the Moreau envelope.
Our analysis works with mini-batch size of $1$, constant first and second order moment parameters, and possibly smooth optimization domains.
arXiv Detail & Related papers (2020-06-11T17:43:19Z)
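For readers unfamiliar with the stationarity measure used in the entry above, the LaTeX snippet below states the standard definition of the Moreau envelope and the form of the reported rate; constants and exact assumptions are as in the cited paper.

```latex
% Standard Moreau-envelope stationarity measure for a weakly convex objective f
% (constants and precise assumptions omitted; see the cited paper).
\begin{align*}
  f_{\lambda}(x) &= \min_{y}\Big\{ f(y) + \tfrac{1}{2\lambda}\lVert y - x\rVert^{2} \Big\}
  && \text{(Moreau envelope)} \\
  \min_{1 \le k \le t} \mathbb{E}\big\lVert \nabla f_{\lambda}(x_k) \big\rVert
  &= \tilde{\mathcal{O}}\big(t^{-1/4}\big)
  && \text{(reported convergence rate)}
\end{align*}
```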
- Adaptivity of Stochastic Gradient Methods for Nonconvex Optimization [71.03797261151605]
Adaptivity is an important yet under-studied property in modern optimization theory.
Our algorithm is proved to achieve the best-available convergence for non-PL objectives simultaneously while outperforming existing algorithms for PL objectives.
arXiv Detail & Related papers (2020-02-13T05:42:27Z)