On Arbitrary Compression for Decentralized Consensus and Stochastic
Optimization over Directed Networks
- URL: http://arxiv.org/abs/2204.08160v1
- Date: Mon, 18 Apr 2022 04:41:56 GMT
- Title: On Arbitrary Compression for Decentralized Consensus and Stochastic
Optimization over Directed Networks
- Authors: Mohammad Taha Toghani, César A. Uribe
- Abstract summary: We propose an iterative gradient-based algorithm that compresses messages according to a desired compression ratio.
Contrary to the existing literature, we allow for arbitrary compression ratios.
We show explicit convergence rates for decentralized stochastic optimization problems on smooth functions.
- Score: 0.6526824510982799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the decentralized consensus and stochastic optimization problems
with compressed communications over static directed graphs. We propose an
iterative gradient-based algorithm that compresses messages according to a
desired compression ratio. The proposed method provably reduces the
communication overhead on the network at every communication round. Contrary to
existing literature, we allow for arbitrary compression ratios in the
communicated messages. We show a linear convergence rate for the proposed
method on the consensus problem. Moreover, we provide explicit convergence
rates for decentralized stochastic optimization problems on smooth functions
that are either (i) strongly convex, (ii) convex, or (iii) non-convex. Finally,
we provide numerical experiments to illustrate convergence under arbitrary
compression ratios and the communication efficiency of our algorithm.
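To make the compression model concrete, here is a minimal sketch (my own illustration, not the authors' implementation) of sparsifying compression operators whose output size is governed by an arbitrary compression ratio: the number of transmitted coordinates scales directly with the chosen ratio, which is what reduces the per-round communication overhead.

```python
import numpy as np

def topk_compress(v: np.ndarray, ratio: float) -> np.ndarray:
    """Keep only the ceil(ratio * d) largest-magnitude entries of v (rest set to 0).

    `ratio` in (0, 1] acts as an arbitrary compression ratio: only
    k = ceil(ratio * d) value/index pairs need to be transmitted per message.
    """
    d = v.size
    k = max(1, int(np.ceil(ratio * d)))
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), d - k)[d - k:]   # indices of the k largest |v_i|
    out[idx] = v[idx]
    return out

def randk_compress(v: np.ndarray, ratio: float, rng: np.random.Generator) -> np.ndarray:
    """Keep a uniformly random subset of ceil(ratio * d) entries, rescaled by d/k
    so the operator is unbiased: E[randk_compress(v)] = v."""
    d = v.size
    k = max(1, int(np.ceil(ratio * d)))
    out = np.zeros_like(v)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = v[idx] * (d / k)
    return out

# Toy usage: a 1000-dimensional message compressed to 5% of its coordinates.
rng = np.random.default_rng(0)
g = rng.standard_normal(1000)
q = topk_compress(g, ratio=0.05)
print(np.count_nonzero(q), "of", g.size, "entries transmitted")   # 50 of 1000
```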
Related papers
- Flattened one-bit stochastic gradient descent: compressed distributed optimization with controlled variance [55.01966743652196]
We propose a novel algorithm for distributed stochastic gradient descent (SGD) with compressed gradient communication in the parameter-server framework.
Our gradient compression technique, named flattened one-bit stochastic gradient descent (FO-SGD), relies on two simple algorithmic ideas.
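For context, a generic scaled one-bit (sign) compressor looks like the sketch below; this only illustrates one-bit gradient compression in general, not the specific FO-SGD construction, whose two algorithmic ideas are not spelled out in this summary.

```python
import numpy as np

def one_bit_compress(v: np.ndarray) -> np.ndarray:
    """Generic scaled sign compressor: transmit one sign bit per coordinate plus a
    single scale (the mean absolute value), i.e. roughly d bits + one float."""
    scale = np.mean(np.abs(v))
    return scale * np.sign(v)

g = np.random.default_rng(1).standard_normal(8)
print(one_bit_compress(g))   # entries are +/- mean(|g|)
```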
arXiv Detail & Related papers (2024-05-17T21:17:27Z)
- Improving the Worst-Case Bidirectional Communication Complexity for Nonconvex Distributed Optimization under Function Similarity [92.1840862558718]
We introduce MARINA-P, a novel method for downlink compression, employing a collection of correlated compressors.
We show that MARINA-P with permutation compressors can achieve a server-to-worker communication complexity that improves with the number of workers.
We introduce M3, a method combining MARINA-P with uplink compression and a momentum step, achieving bidirectional compression with provable improvements in total communication complexity as the number of workers increases.
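The "collection of correlated compressors" can be pictured with a permutation-style sketch (an assumption on my part, in the spirit of permutation compressors, not MARINA-P's exact construction): the server splits the coordinates of its broadcast vector across the workers via a shared random permutation, each worker receives only its rescaled block, and averaging the differently compressed copies recovers the original vector.

```python
import numpy as np

def permutation_compressors(x: np.ndarray, n_workers: int, rng: np.random.Generator):
    """Split the coordinates of x among n_workers via a shared random permutation.

    Worker i receives only its own block, scaled by n_workers; the blocks are
    correlated (disjoint), so the average over workers reconstructs x exactly
    when the dimension is divisible by n_workers.
    """
    d = x.size
    perm = rng.permutation(d)
    blocks = np.array_split(perm, n_workers)
    compressed = []
    for block in blocks:
        c = np.zeros_like(x)
        c[block] = n_workers * x[block]   # each worker gets ~d/n coordinates
        compressed.append(c)
    return compressed

rng = np.random.default_rng(2)
x = rng.standard_normal(12)
copies = permutation_compressors(x, n_workers=4, rng=rng)
print(np.allclose(np.mean(copies, axis=0), x))   # True: the average recovers x
```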
arXiv Detail & Related papers (2024-02-09T13:58:33Z)
- Lower Bounds and Accelerated Algorithms in Distributed Stochastic Optimization with Communication Compression [31.107056382542417]
Communication compression is an essential strategy for alleviating communication overhead.
We propose NEOLITHIC, a nearly optimal algorithm for compression under mild conditions.
arXiv Detail & Related papers (2023-05-12T17:02:43Z)
- Acceleration in Distributed Optimization Under Similarity [72.54787082152278]
We study distributed (strongly convex) optimization problems over a network of agents, with no centralized nodes.
An $\varepsilon$-solution is achieved in $\tilde{\mathcal{O}}\big(\sqrt{\tfrac{\beta/\mu}{1-\rho}}\,\log(1/\varepsilon)\big)$ communication steps.
This rate matches, for the first time (up to poly-log factors), the lower communication complexity bounds of distributed gossip algorithms applied to the class of problems of interest.
arXiv Detail & Related papers (2021-10-24T04:03:00Z)
- Decentralized Composite Optimization with Compression [36.75785129001134]
We study the decentralized composite optimization problem with a potentially non-smooth component.
A convergent decentralized algorithm with compression, Prox-LEAD, is proposed.
Our theorems indicate that Prox-LEAD works with arbitrary compression precision, and it tremendously reduces the communication cost almost for free.
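As a reminder of what the composite (non-smooth) part entails, the standard proximal building block for an l1 regularizer is soft-thresholding, sketched below; Prox-LEAD's actual update couples such a proximal step with compressed communication, which is not reproduced here.

```python
import numpy as np

def soft_threshold(v: np.ndarray, tau: float) -> np.ndarray:
    """prox of tau * ||.||_1: the usual handler for the non-smooth term
    in composite optimization problems."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

print(soft_threshold(np.array([-2.0, -0.3, 0.1, 1.5]), tau=0.5))
# -> [-1.5, -0.0, 0.0, 1.0]
```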
arXiv Detail & Related papers (2021-08-10T04:54:52Z)
- Compressed Gradient Tracking for Decentralized Optimization Over General Directed Networks [17.49477125920901]
We propose two communication efficient decentralized optimization algorithms over a general directed multi-agent network.
The first algorithm, Compressed Push-Pull (CPP), combines the gradient tracking Push-Pull method with communication compression.
The second algorithm is a broadcast-like version of CPP (B-CPP) that also achieves a linear convergence rate under the same conditions on the objective functions.
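For orientation, the uncompressed Push-Pull gradient tracking skeleton that such methods build on looks roughly like the sketch below (a simplified illustration of mine; CPP additionally compresses the transmitted quantities): decision variables are mixed with a row-stochastic matrix R while the gradient tracker is mixed with a column-stochastic matrix C, which is what makes the scheme suitable for directed graphs.

```python
import numpy as np

def push_pull(grad, x0, R, C, alpha=0.05, num_iters=500):
    """Uncompressed Push-Pull / gradient tracking skeleton over a directed graph.

    grad : callable, grad(X) returns the (n, d) stack of local gradients
    R    : (n, n) row-stochastic matrix    (pull: mixing of decision variables)
    C    : (n, n) column-stochastic matrix (push: mixing of gradient trackers)
    """
    x = x0.copy()
    y = grad(x)                      # tracker initialized with the local gradients
    for _ in range(num_iters):
        g_old = grad(x)
        x = R @ (x - alpha * y)      # pull step on the decision variables
        y = C @ y + grad(x) - g_old  # push step plus gradient tracking correction
    return x

# Toy problem: each of 3 agents holds f_i(x) = 0.5 * ||x - b_i||^2; optimum is mean(b_i).
b = np.array([[1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
grad = lambda X: X - b
R = np.array([[0.5, 0.5, 0.0], [0.0, 0.5, 0.5], [0.5, 0.0, 0.5]])  # rows sum to 1
C = R.T                                                            # columns sum to 1
x = push_pull(grad, np.zeros((3, 2)), R, C)
print(x)   # each row should be close to b.mean(axis=0) = [4/3, 1.0]
```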
arXiv Detail & Related papers (2021-06-14T08:53:30Z)
- A Linearly Convergent Algorithm for Decentralized Optimization: Sending Less Bits for Free! [72.31332210635524]
Decentralized optimization methods enable on-device training of machine learning models without a central coordinator.
We propose a new randomized first-order method which tackles the communication bottleneck by applying randomized compression operators.
We prove that our method can solve the problems without any increase in the number of communications compared to the baseline.
arXiv Detail & Related papers (2020-11-03T13:35:53Z)
- Linear Convergent Decentralized Optimization with Compression [50.44269451541387]
Existing decentralized algorithms with compression mainly focus on compressing DGD-type algorithms.
Motivated by primal-dual algorithms, this paper proposes the first LinEAr convergent Decentralized algorithm with compression, LEAD.
arXiv Detail & Related papers (2020-07-01T04:35:00Z)
- Quantized Decentralized Stochastic Learning over Directed Graphs [52.94011236627326]
We consider a decentralized learning problem where data points are distributed among computing nodes communicating over a directed graph.
As the model size gets large, decentralized learning faces a major bottleneck: the communication load due to each node transmitting messages (model updates) to its neighbors.
We propose a quantized decentralized learning algorithm over directed graphs, based on the push-sum algorithm from decentralized consensus optimization.
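For intuition, push-sum ratio consensus over a directed graph, with an unbiased stochastic quantizer applied to the transmitted values, can be sketched as below (an illustrative assumption of mine; the paper's algorithm and guarantees are more involved). Each node forwards quantized values and plain weights along its out-edges and reads off the ratio as its estimate of the network average.

```python
import numpy as np

def stochastic_quantize(v, levels=128, rng=None):
    """Unbiased stochastic rounding of v onto `levels` uniform levels of [-max|v|, max|v|]."""
    if rng is None:
        rng = np.random.default_rng()
    scale = np.max(np.abs(v)) + 1e-12
    u = v / scale * (levels / 2)                  # map to [-levels/2, levels/2]
    low = np.floor(u)
    q = low + (rng.random(v.shape) < (u - low))   # round up with prob. = fractional part
    return q * scale / (levels / 2)

def quantized_push_sum(x0, A, num_iters=300, rng=None):
    """Push-sum consensus over a directed graph (A column-stochastic) with quantized messages.

    Each node mixes quantized values x and plain weights w; the ratio x/w estimates the average."""
    if rng is None:
        rng = np.random.default_rng(0)
    x = x0.astype(float).copy()
    w = np.ones_like(x)
    for _ in range(num_iters):
        x = A @ stochastic_quantize(x, rng=rng)
        w = A @ w
    return x / w

# Toy usage: 4 nodes, directed (non-doubly-stochastic) mixing; true average of x0 is 2.5.
A = np.zeros((4, 4))
A[[0, 1, 2], 0] = 1 / 3     # node 0 keeps 1/3, pushes 1/3 each to nodes 1 and 2
A[[1, 2], 1] = 0.5          # node 1 keeps half, pushes half to node 2
A[[2, 3], 2] = 0.5          # node 2 -> node 3
A[[3, 0], 3] = 0.5          # node 3 -> node 0
z = quantized_push_sum(np.array([1.0, 2.0, 3.0, 4.0]), A)
print(z)   # all entries near 2.5, up to residual quantization noise
```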
arXiv Detail & Related papers (2020-02-23T18:25:39Z)