Decentralized Multi-Task Online Convex Optimization Under Random Link
Failures
- URL: http://arxiv.org/abs/2401.02011v1
- Date: Thu, 4 Jan 2024 00:57:33 GMT
- Title: Decentralized Multi-Task Online Convex Optimization Under Random Link
Failures
- Authors: Wenjing Yan and Xuanyu Cao
- Abstract summary: We develop a decentralized saddle-point algorithm that is robust to random link failures with heterogeneous probabilities.
We extend our algorithm and analysis to the two-point bandit feedback scenario.
- Score: 5.513958040574729
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Decentralized optimization methods often entail information exchange between
neighbors. Transmission failures can happen due to network congestion,
hardware/software issues, communication outages, and other factors. In this
paper, we investigate the random link failure problem in decentralized
multi-task online convex optimization, where agents have individual decisions
that are coupled with each other via pairwise constraints. Although widely used
in constrained optimization, conventional saddle-point algorithms are not
directly applicable here because of random packet dropping. To address this
issue, we develop a decentralized saddle-point algorithm that is robust to
random link failures with heterogeneous probabilities, replacing the missing
decisions of neighbors with their latest received values. Then, by judiciously
bounding the accumulated deviation stemming from this replacement, we first
establish that our algorithm achieves $\mathcal{O}(\sqrt{T})$ regret and
$\mathcal{O}(T^\frac{3}{4})$ constraint violations for the full information
scenario, where the complete information on the local cost function is revealed
to each agent at the end of each time slot. These two bounds match, in order
sense, the performance bounds of algorithms with perfect communications.
Further, we extend our algorithm and analysis to the two-point bandit feedback
scenario, where only the values of the local cost function at two random points
are disclosed to each agent sequentially. Performance bounds of the same orders
as the full information case are derived. Finally, we corroborate the efficacy
of the proposed algorithms and the analytical results through numerical
simulations.
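To make the replacement mechanism concrete, the following is a minimal sketch, not the paper's exact algorithm: each agent runs a primal-dual saddle-point update in which a dropped packet from a neighbor is silently replaced by the latest received copy of that neighbor's decision. The cost function, pairwise coupling constraint, step sizes, and failure probabilities below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, T = 4, 3, 200          # agents, decision dimension, horizon (assumed)
eta, gamma = 0.05, 0.05      # primal / dual step sizes (assumed)

# Heterogeneous link failure probabilities (assumed values).
p_fail = rng.uniform(0.1, 0.4, size=(n, n))

x = rng.standard_normal((n, d))            # local decisions x_i
recv = np.tile(x[None, :, :], (n, 1, 1))   # recv[i, j]: i's latest copy of x_j
lam = np.zeros((n, n))                     # duals for pairwise constraints

def cost_grad(i, xi, t):
    """Gradient of a toy time-varying cost f_{i,t}(x) = 0.5*||x - a_{i,t}||^2."""
    return xi - np.sin(t / 10.0 + i)

def g_pair(xi, xj):
    """Toy pairwise coupling g_ij(x_i, x_j) = ||x_i - x_j||^2 - 1 <= 0."""
    return np.sum((xi - xj) ** 2) - 1.0

for t in range(T):
    # Communication: link (j -> i) delivers with prob. 1 - p_fail[i, j];
    # on failure, recv[i, j] keeps the stale value -- the key repair idea.
    ok = rng.random((n, n)) > p_fail
    for i in range(n):
        for j in range(n):
            if i != j and ok[i, j]:
                recv[i, j] = x[j]

    # Saddle-point step: primal descent and projected dual ascent, both
    # evaluated with possibly stale neighbor copies.
    x_new = x.copy()
    for i in range(n):
        grad = cost_grad(i, x[i], t)
        for j in range(n):
            if j != i:
                grad += lam[i, j] * 2.0 * (x[i] - recv[i, j])  # d g_ij / d x_i
        x_new[i] = x[i] - eta * grad
        for j in range(n):
            if j != i:
                lam[i, j] = max(0.0, lam[i, j] + gamma * g_pair(x[i], recv[i, j]))
    x = x_new

viol = max(g_pair(x[i], x[j]) for i in range(n) for j in range(n) if i != j)
print(f"final max pairwise violation: {viol:.3f}")
```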
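For the bandit extension, the classical two-point gradient estimator conveys the feedback model: each agent observes only two values of its local cost per slot and builds a gradient surrogate from them. The sketch below shows the standard estimator; the cost `f` and smoothing radius `delta` are illustrative, not the paper's exact construction.

```python
import numpy as np

def two_point_gradient(f, x, delta, rng):
    """Classical two-point bandit gradient estimator: sample u uniformly on
    the unit sphere and return (d / (2*delta)) * (f(x + delta*u) - f(x - delta*u)) * u.
    Only two function evaluations per round are required, matching the
    two-point feedback model in the abstract."""
    d = x.size
    u = rng.standard_normal(d)
    u /= np.linalg.norm(u)
    return (d / (2.0 * delta)) * (f(x + delta * u) - f(x - delta * u)) * u

# Toy usage: the estimate approximates the true gradient x - 1 of a quadratic.
rng = np.random.default_rng(1)
f = lambda z: 0.5 * np.sum((z - 1.0) ** 2)
print(two_point_gradient(f, np.zeros(3), delta=0.05, rng=rng))
```

In expectation this estimator equals the gradient of a smoothed version of the cost, which is what lets the bandit analysis inherit the full-information bounds up to the smoothing error.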
Related papers
- Lower Bounds and Optimal Algorithms for Non-Smooth Convex Decentralized Optimization over Time-Varying Networks [57.24087627267086]
We consider the task of minimizing the sum of convex functions stored in a decentralized manner across the nodes of a communication network.
Lower bounds on the number of decentralized communications and (sub)gradient computations required to solve the problem have been established.
We develop the first optimal algorithm that matches these lower bounds and offers substantially improved theoretical performance compared to the existing state of the art.
arXiv Detail & Related papers (2024-05-28T10:28:45Z) - Adversarially Robust Distributed Count Tracking via Partial Differential
Privacy [17.43748116766233]
We study the distributed tracking model, also known as distributed functional monitoring.
This model involves $k$ sites each receiving a stream of items and communicating with the central server.
For count tracking, it is known that there is a $\sqrt{k}$ gap in communication between deterministic and randomized algorithms.
arXiv Detail & Related papers (2023-11-01T07:42:13Z) - Best of Both Worlds Guarantees for Smoothed Online Quadratic Optimization [9.449153668916098]
We study the smoothed online quadratic optimization (SOQO) problem where, at each round $t$, a player plays an action $x_t$ in response to a quadratic hitting cost and an additional squared $\ell_2$-norm cost for switching actions.
This problem class has strong connections to a wide range of application domains including smart grid management, adaptive control, and data center management.
We present a best-of-both-worlds algorithm that obtains robust adversarial performance while simultaneously achieving near-optimal performance.
arXiv Detail & Related papers (2023-10-31T22:59:23Z) - Distributed Online Private Learning of Convex Nondecomposable Objectives [7.5585719185840485]
We deal with a general distributed constrained online learning problem with privacy over time-varying networks.
We propose two algorithms, named DPSDA-C and DPSDA-PS, under this framework.
Theoretical results show that both algorithms attain an expected regret upper bound of $\mathcal{O}(\sqrt{T})$ when the objective function is convex.
arXiv Detail & Related papers (2022-06-16T06:29:51Z) - Acceleration in Distributed Optimization Under Similarity [72.54787082152278]
We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes.
An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\frac{\beta/\mu}{1-\rho}}\log\frac{1}{\varepsilon}\big)$ communication steps.
This rate matches, for the first time and up to poly-log factors, the lower communication complexity bounds of distributed gossip algorithms applied to the class of problems of interest.
arXiv Detail & Related papers (2021-10-24T04:03:00Z) - Lower Bounds and Optimal Algorithms for Smooth and Strongly Convex
Decentralized Optimization Over Time-Varying Networks [79.16773494166644]
We consider the task of minimizing the sum of smooth and strongly convex functions stored in a decentralized manner across the nodes of a communication network.
We design two optimal algorithms that attain these lower bounds.
We corroborate the theoretical efficiency of these algorithms by performing an experimental comparison with existing state-of-the-art methods.
arXiv Detail & Related papers (2021-06-08T15:54:44Z) - Navigating to the Best Policy in Markov Decision Processes [68.8204255655161]
We investigate the active pure exploration problem in Markov Decision Processes.
The agent sequentially selects actions and, from the resulting system trajectory, aims at identifying the best policy as fast as possible.
arXiv Detail & Related papers (2021-06-05T09:16:28Z) - Byzantine-Resilient Non-Convex Stochastic Gradient Descent [61.6382287971982]
We study adversary-resilient distributed optimization, in which machines can independently compute gradients and cooperate to jointly optimize.
Our algorithm is based on a new concentration technique, and its sample complexity matches the best known bounds for this setting.
It is very practical: it improves upon the performance of all prior methods when no Byzantine machines are present.
arXiv Detail & Related papers (2020-12-28T17:19:32Z) - An Efficient Algorithm for Cooperative Semi-Bandits [0.0]
We introduce Coop-FTPL, a cooperative version of the well-known Follow The Perturbed Leader algorithm.
We show that the expected regret of our algorithm after $T$ time steps is of order $\sqrt{QT\log(k)}\,(k\alpha_1/Q + m)$, where $Q$ is the total activation probability mass.
arXiv Detail & Related papers (2020-10-05T07:08:26Z) - A Multi-Agent Primal-Dual Strategy for Composite Optimization over
Distributed Features [52.856801164425086]
We study multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) coupling function.
arXiv Detail & Related papers (2020-06-15T19:40:24Z)