Communication-Efficient Zeroth-Order Distributed Online Optimization:
Algorithm, Theory, and Applications
- URL: http://arxiv.org/abs/2306.05655v1
- Date: Fri, 9 Jun 2023 03:51:45 GMT
- Title: Communication-Efficient Zeroth-Order Distributed Online Optimization:
Algorithm, Theory, and Applications
- Authors: Ege C. Kaya, M. Berk Sahin and Abolfazl Hashemi
- Abstract summary: This paper focuses on a multi-agent zeroth-order online optimization problem in a federated learning setting for target tracking.
The proposed solution is further analyzed numerically in terms of tracking errors and collisions between agents in two relevant applications.
- Score: 9.045332526072828
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This paper focuses on a multi-agent zeroth-order online optimization problem
in a federated learning setting for target tracking. The agents only sense
their current distances to their targets and aim to maintain a minimum safe
distance from each other to prevent collisions. The coordination among the
agents and dissemination of collision-prevention information is managed by a
central server using the federated learning paradigm. The proposed formulation
leads to an instance of a distributed online nonconvex optimization problem that
is solved via a group of communication-constrained agents. To deal with the
communication limitations of the agents, an error feedback-based compression
scheme is utilized for agent-to-server communication. The proposed algorithm is
analyzed theoretically for the general class of distributed online nonconvex
optimization problems. We provide non-asymptotic convergence rates that show
the dominant term is independent of the characteristics of the compression
scheme. Our theoretical results feature a new approach that employs
significantly more relaxed assumptions in comparison to standard literature.
The performance of the proposed solution is further analyzed numerically in
terms of tracking errors and collisions between agents in two relevant
applications.
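As a rough illustration of the main ingredients described in the abstract, the following Python sketch combines a two-point zeroth-order gradient estimate (agents only evaluate their loss, e.g. a sensed distance) with a top-k error-feedback compressor for agent-to-server communication and simple server-side averaging. All names (`zo_gradient`, `compress_topk`, `Agent`, `server_round`) and the choice of top-k compression are illustrative assumptions for a generic error-feedback scheme, not the paper's actual algorithm or code.

```python
import numpy as np

def zo_gradient(loss, x, mu=1e-3, rng=None):
    """Two-point zeroth-order gradient estimate of `loss` at `x`.
    Uses only function evaluations, mimicking agents that can merely
    sense their current distance (loss), not its gradient."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (loss(x + mu * u) - loss(x - mu * u)) / (2.0 * mu) * u

def compress_topk(v, k):
    """Keep the k largest-magnitude entries of v, zero out the rest."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

class Agent:
    """A communication-constrained agent with an error-feedback memory."""
    def __init__(self, dim):
        self.err = np.zeros(dim)  # accumulated compression residual

    def local_update(self, loss, x, step, k):
        g = zo_gradient(loss, x)                     # zeroth-order descent direction
        msg = compress_topk(step * g + self.err, k)  # compress update plus carried error
        self.err = (step * g + self.err) - msg       # error feedback: remember what was not sent
        return msg                                   # transmitted to the server

def server_round(x, agents, losses, step=0.1, k=5):
    """Server averages the compressed updates and broadcasts the new iterate."""
    updates = [a.local_update(f, x, step, k) for a, f in zip(agents, losses)]
    return x - np.mean(updates, axis=0)
```

A complete treatment would also encode the collision-avoidance (minimum safe distance) terms in each agent's loss and use the step-size and smoothing schedules required by the paper's non-asymptotic analysis.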
Related papers
- Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration [66.43954501171292]
We introduce Catalyst Acceleration and propose an accelerated Decentralized Federated Learning algorithm called DFedCata.
DFedCata consists of two main components: the Moreau envelope function, which addresses parameter inconsistencies, and Nesterov's extrapolation step, which accelerates the aggregation phase.
Empirically, we demonstrate the advantages of the proposed algorithm in both convergence speed and generalization performance on CIFAR10/100 with various non-iid data distributions.
arXiv Detail & Related papers (2024-10-09T06:17:16Z) - Estimation Network Design framework for efficient distributed optimization [3.3148826359547514]
This paper introduces Estimation Network Design (END), a graph theoretical language for the analysis and design of distributed iterations.
END algorithms can be tuned to exploit the sparsity of specific problem instances, reducing communication overhead and minimizing redundancy.
In particular, we study sparsity-aware versions of many established methods, including ADMM, AugDGM and Push-Sum DGD.
arXiv Detail & Related papers (2024-04-23T17:59:09Z) - Decentralized Learning Strategies for Estimation Error Minimization with Graph Neural Networks [94.2860766709971]
We address the challenge of sampling and remote estimation for autoregressive Markovian processes in a wireless network with statistically-identical agents.
Our goal is to minimize time-average estimation error and/or age of information with decentralized scalable sampling and transmission policies.
arXiv Detail & Related papers (2024-04-04T06:24:11Z) - Compressed Regression over Adaptive Networks [58.79251288443156]
We derive the performance achievable by a network of distributed agents that solve, adaptively and in the presence of communication constraints, a regression problem.
We devise an optimized allocation strategy where the parameters necessary for the optimization can be learned online by the agents.
arXiv Detail & Related papers (2023-04-07T13:41:08Z) - Distributed Adaptive Learning Under Communication Constraints [54.22472738551687]
This work examines adaptive distributed learning strategies designed to operate under communication constraints.
We consider a network of agents that must solve an online optimization problem from continual observation of streaming data.
arXiv Detail & Related papers (2021-12-03T19:23:48Z) - Acceleration in Distributed Optimization Under Similarity [72.54787082152278]
We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes.
An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\tfrac{\beta/\mu}{1-\rho}}\log\tfrac{1}{\varepsilon}\big)$ communication steps.
This rate matches, for the first time and up to poly-log factors, the lower communication complexity bounds of distributed gossip algorithms applied to the class of problems of interest.
arXiv Detail & Related papers (2021-10-24T04:03:00Z) - DESTRESS: Computation-Optimal and Communication-Efficient Decentralized
Nonconvex Finite-Sum Optimization [43.31016937305845]
Internet-of-things, networked sensing, autonomous systems and federated learning call for decentralized algorithms for finite-sum optimizations.
We develop DEcentralized STochastic REcurSive gradient methodS (DESTRESS) for nonconvex finite-sum optimization.
Detailed theoretical and numerical comparisons show that DESTRESS improves upon prior decentralized algorithms.
arXiv Detail & Related papers (2021-10-04T03:17:41Z) - On Accelerating Distributed Convex Optimizations [0.0]
This paper studies a distributed multi-agent convex optimization problem.
We show that the proposed algorithm converges linearly with an improved rate of convergence compared to traditional and adaptive gradient-descent methods.
We demonstrate our algorithm's superior performance compared to prominent distributed algorithms on real-world logistic regression problems.
arXiv Detail & Related papers (2021-08-19T13:19:54Z) - A Linearly Convergent Algorithm for Decentralized Optimization: Sending
Less Bits for Free! [72.31332210635524]
Decentralized optimization methods enable on-device training of machine learning models without a central coordinator.
We propose a new randomized first-order method which tackles the communication bottleneck by applying randomized compression operators (a generic example of such an operator is sketched after this list).
We prove that our method can solve the problem without any increase in the number of communications compared to the baseline.
arXiv Detail & Related papers (2020-11-03T13:35:53Z)
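For the randomized compression operators mentioned in the last entry above, a common generic example is unbiased rand-k sparsification, sketched below in Python. This is an illustration of the idea only, assuming the standard d/k rescaling; it is not claimed to be the specific operator used in that paper.

```python
import numpy as np

def rand_k(v, k, rng=None):
    """Unbiased rand-k sparsifier: keep k uniformly chosen coordinates of v,
    rescaled by d/k so that E[rand_k(v)] = v."""
    rng = np.random.default_rng() if rng is None else rng
    d = v.size
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * (d / k)
    return out
```

Each node then transmits only k coordinates (plus their indices) per round, which is where the communication savings come from.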