Decentralized Optimization Over the Stiefel Manifold by an Approximate
Augmented Lagrangian Function
- URL: http://arxiv.org/abs/2112.14949v1
- Date: Thu, 30 Dec 2021 07:19:59 GMT
- Title: Decentralized Optimization Over the Stiefel Manifold by an Approximate
Augmented Lagrangian Function
- Authors: Lei Wang, Xin Liu
- Abstract summary: This paper focuses on the decentralized optimization problem over the Stiefel manifold.
Agents can only communicate with their neighbors in a collaborative effort to solve this problem.
In existing methods, multiple rounds of communication are required to guarantee convergence.
This paper proposes a decentralized algorithm, called DESTINY, which only invokes a single round of communications per iteration.
- Score: 7.768673476492471
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we focus on the decentralized optimization problem over the
Stiefel manifold, which is defined on a connected network of $d$ agents. The
objective is an average of $d$ local functions, and each function is privately
held by an agent and encodes its data. The agents can only communicate with
their neighbors in a collaborative effort to solve this problem. In existing
methods, multiple rounds of communications are required to guarantee
convergence, giving rise to high communication costs. In contrast, this paper
proposes a decentralized algorithm, called DESTINY, which only invokes a single
round of communications per iteration. DESTINY combines gradient tracking
techniques with a novel approximate augmented Lagrangian function. The global
convergence to stationary points is rigorously established. Comprehensive
numerical experiments demonstrate that DESTINY has a strong potential to
deliver cutting-edge performance in solving a variety of test problems.
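To make the single-communication-round structure concrete, below is a minimal Python/NumPy sketch of a gradient-tracking update in which a simple quadratic orthogonality penalty stands in for the approximate augmented Lagrangian. It is an illustration only: the surrogate $f_i(X) + \frac{\beta}{4}\|X^\top X - I\|_F^2$, the DIGing-style tracker update, and the names penalized_grad, destiny_like_step, alpha, and beta are assumptions for this sketch, not the exact DESTINY construction from the paper.

```python
import numpy as np

def penalized_grad(X, grad_f, beta):
    """Euclidean gradient of the illustrative surrogate
    f(X) + (beta / 4) * ||X^T X - I||_F^2,
    a simple stand-in for the paper's approximate augmented Lagrangian."""
    p = X.shape[1]
    return grad_f(X) + beta * X @ (X.T @ X - np.eye(p))

def destiny_like_step(X, S, grads_prev, grad_fns, W, alpha, beta):
    """One iteration for d agents; X[i] is agent i's n x p variable,
    S[i] its gradient tracker, W a doubly stochastic mixing matrix.
    The two mixing sums below constitute the single round of neighbor
    communication per iteration (both quantities travel together)."""
    d = len(X)
    X_mix = [sum(W[i, j] * X[j] for j in range(d)) for i in range(d)]
    S_mix = [sum(W[i, j] * S[j] for j in range(d)) for i in range(d)]
    # Descend along the tracked direction, then refresh the tracker
    # with the change in local (penalized) gradients.
    X_new = [X_mix[i] - alpha * S[i] for i in range(d)]
    grads_new = [penalized_grad(X_new[i], grad_fns[i], beta) for i in range(d)]
    S_new = [S_mix[i] + grads_new[i] - grads_prev[i] for i in range(d)]
    return X_new, S_new, grads_new
```

A driver loop would initialize every X[i] with the same orthonormal columns, set S[i] and grads_prev[i] to the initial penalized gradients, and iterate the step above. In this sketch feasibility is only encouraged by the penalty term; the paper's actual update rule and feasibility guarantees should be taken from the paper itself.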
Related papers
- Communication Efficient Decentralization for Smoothed Online Convex Optimization [9.449153668916098]
We study the multi-agent Smoothed Online Convex Optimization (SOCO) problem, where $N$ agents interact through a communication graph.
In each round, each agent $i$ receives a strongly convex hitting cost function $f^i_t$ in an online fashion.
Our results hold even when the communication graph changes arbitrarily and adaptively over time.
arXiv Detail & Related papers (2024-11-13T05:59:04Z)
- Self-Localized Collaborative Perception [49.86110931859302]
We propose $\mathtt{CoBEVGlue}$, a novel self-localized collaborative perception system.
$\mathtt{CoBEVGlue}$ is a novel spatial alignment module, which provides the relative poses between agents.
$\mathtt{CoBEVGlue}$ achieves state-of-the-art detection performance under arbitrary localization noises and attacks.
arXiv Detail & Related papers (2024-06-18T15:26:54Z)
- Decentralized Multi-Task Online Convex Optimization Under Random Link Failures [5.513958040574729]
We develop a robust decentralized saddle-point algorithm against random link failures with heterogeneous probabilities.
We extend our algorithm and analysis to the two-point bandit feedback scenario.
arXiv Detail & Related papers (2024-01-04T00:57:33Z) - Communication-Efficient Decentralized Federated Learning via One-Bit
Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM).
arXiv Detail & Related papers (2023-08-31T12:22:40Z)
- Decentralized Riemannian Conjugate Gradient Method on the Stiefel Manifold [59.73080197971106]
This paper presents a first-order conjugate gradient method that converges faster than the steepest descent method.
It aims to achieve global convergence over the Stiefel manifold.
arXiv Detail & Related papers (2023-08-21T08:02:16Z)
- Cooperative Multi-Agent Reinforcement Learning: Asynchronous Communication and Linear Function Approximation [77.09836892653176]
We study multi-agent reinforcement learning in the setting of episodic Markov decision processes.
We propose a provably efficient algorithm based on value iteration that enables asynchronous communication.
We show that a minimal $\Omega(dM)$ communication complexity is required to improve the performance through collaboration.
arXiv Detail & Related papers (2023-05-10T20:29:29Z)
- Multi-Agent congestion cost minimization with linear function approximation [0.0]
This work considers multiple agents traversing a network from a source node to the goal node.
Agents' objective is to find a path to the goal node with minimum overall cost in a decentralized way.
We propose a novel multi-agent congestion cost minimization algorithm.
arXiv Detail & Related papers (2023-01-26T08:45:44Z)
- Communication-Efficient Topologies for Decentralized Learning with $O(1)$ Consensus Rate [35.698182247676414]
Decentralized optimization is an emerging paradigm in distributed learning in which agents achieve network-wide solutions by peer-to-peer communication without the central server.
We show that the total number of iterations to reach a network-wide solution is affected by the speed at which the agents' information is "mixed" by communication.
We propose a new family of topologies, EquiTopo, which has an (almost) constant degree and a network-size-independent consensus rate (see the illustrative gossip sketch after this list).
arXiv Detail & Related papers (2022-10-14T15:02:01Z)
- Acceleration in Distributed Optimization Under Similarity [72.54787082152278]
We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes.
An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\frac{\beta/\mu}{1-\rho}}\log 1/\varepsilon\big)$ communication steps.
This rate matches, for the first time and up to poly-log factors, the lower communication complexity bounds of distributed gossip algorithms applied to the class of problems of interest.
arXiv Detail & Related papers (2021-10-24T04:03:00Z)
- Communication-efficient Decentralized Local SGD over Undirected Networks [2.3572498744567123]
We consider the distributed learning problem where a network of $n$ agents seeks to minimize a global function $F$.
We analyze the trade-off between the number of communication rounds and the computational effort of each agent.
Our results show that by using only $R=\Omega(n)$ communication rounds, one can achieve an error that scales as $O(1/nT)$.
arXiv Detail & Related papers (2020-11-06T09:34:00Z)
- A Multi-Agent Primal-Dual Strategy for Composite Optimization over Distributed Features [52.856801164425086]
We study multi-agent sharing optimization problems with the objective function being the sum of smooth local functions plus a convex (possibly non-smooth) coupling function.
arXiv Detail & Related papers (2020-06-15T19:40:24Z)
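As referenced in the EquiTopo entry above, the sketch below illustrates what a topology's consensus rate means operationally: one gossip round multiplies the stacked agent values by a doubly stochastic mixing matrix $W$, and the distance to the network-wide average contracts roughly by the second-largest singular value of $W$ per round. The ring topology, the function names, and the parameters here are illustrative assumptions, not the EquiTopo construction itself.

```python
import numpy as np

def ring_mixing_matrix(n):
    """Doubly stochastic mixing matrix for a ring, where each agent
    averages equally with itself and its two neighbors (illustrative
    topology, not the EquiTopo construction)."""
    W = np.zeros((n, n))
    for i in range(n):
        W[i, i] = W[i, (i - 1) % n] = W[i, (i + 1) % n] = 1.0 / 3.0
    return W

def consensus_errors(W, x, rounds):
    """Run gossip averaging x <- W x and record the distance to the
    network-wide average; the per-round contraction factor is governed
    by the second-largest singular value of W (its consensus rate)."""
    avg = x.mean()
    errors = []
    for _ in range(rounds):
        x = W @ x
        errors.append(np.linalg.norm(x - avg))
    return errors

n = 16
W = ring_mixing_matrix(n)
sigma2 = np.linalg.svd(W, compute_uv=False)[1]  # second-largest singular value
print(f"consensus rate of the ring: {sigma2:.3f}")
print(consensus_errors(W, np.random.randn(n), rounds=5))
# A ring's consensus rate tends to 1 as n grows, so reaching consensus takes
# many rounds; EquiTopo-style topologies target a rate independent of n.
```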
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.