On Biased Compression for Distributed Learning
- URL: http://arxiv.org/abs/2002.12410v4
- Date: Sun, 14 Jan 2024 16:36:09 GMT
- Title: On Biased Compression for Distributed Learning
- Authors: Aleksandr Beznosikov, Samuel Horváth, Peter Richtárik, and Mher Safaryan
- Abstract summary: We show for the first time that biased compressors can lead to linear convergence rates both in the single node and distributed settings.
We propose several new biased compressors with promising theoretical guarantees and practical performance.
- Score: 55.89300593805943
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In the last few years, various communication compression techniques have
emerged as an indispensable tool helping to alleviate the communication
bottleneck in distributed learning. However, despite the fact that biased
compressors often show superior performance in practice when compared to the
much more studied and understood unbiased compressors, very little is known
about them. In this work we study three classes of biased compression
operators, two of which are new, and their performance when applied to
(stochastic) gradient descent and distributed (stochastic) gradient descent. We
show for the first time that biased compressors can lead to linear convergence
rates both in the single node and distributed settings. We prove that the
distributed compressed SGD method, employed with an error feedback mechanism,
enjoys the ergodic rate $O\left( \delta L \exp\left[-\frac{\mu K}{\delta L}\right] + \frac{(C + \delta D)}{K\mu}\right)$, where $\delta\ge 1$ is a
compression parameter which grows when more compression is applied, $L$ and
$\mu$ are the smoothness and strong convexity constants, $C$ captures
stochastic gradient noise ($C=0$ if full gradients are computed on each node)
and $D$ captures the variance of the gradients at the optimum ($D=0$ for
over-parameterized models). Further, via a theoretical study of several
synthetic and empirical distributions of communicated gradients, we shed light
on why and by how much biased compressors outperform their unbiased variants.
Finally, we propose several new biased compressors with promising theoretical
guarantees and practical performance.
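To ground the abstract's setup, here is a minimal NumPy sketch (not the authors' code) of one canonical biased compressor, Top-K, plugged into distributed gradient descent with the error feedback mechanism; the quadratic local objectives, node count, step size, and sparsity level are illustrative assumptions. Because the nodes are heterogeneous, the variance-at-the-optimum term $D$ is nonzero, so a constant step size only reaches a neighbourhood of the optimum, mirroring the $(C + \delta D)/(K\mu)$ term in the rate above.

```python
import numpy as np

def top_k(x, k):
    """Biased Top-K compressor: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

def distributed_ef_sgd(grads, x0, lr=0.1, k=2, iters=300):
    """Distributed gradient descent with Top-K compression and error feedback.

    grads: list of callables; grads[i](x) is node i's local (full) gradient,
    so the stochastic-noise term C of the rate is zero in this toy setting.
    """
    n = len(grads)
    x = x0.copy()
    err = [np.zeros_like(x0) for _ in range(n)]    # per-node error memories
    for _ in range(iters):
        msgs = []
        for i in range(n):
            corrected = err[i] + lr * grads[i](x)  # error-corrected step
            c = top_k(corrected, k)                # transmit only k coordinates
            err[i] = corrected - c                 # keep the untransmitted residual
            msgs.append(c)
        x = x - np.mean(msgs, axis=0)              # server averages compressed steps
    return x

# Toy heterogeneous problem: node i holds f_i(x) = 0.5 * ||x - b_i||^2,
# so the global optimum is the mean of the b_i (illustrative assumption).
rng = np.random.default_rng(0)
bs = [rng.normal(size=10) for _ in range(4)]
grads = [lambda x, b=b: x - b for b in bs]
x_final = distributed_ef_sgd(grads, x0=np.zeros(10))
print("distance to optimum:", np.linalg.norm(x_final - np.mean(bs, axis=0)))
```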
Related papers
- Flattened one-bit stochastic gradient descent: compressed distributed optimization with controlled variance [55.01966743652196]
We propose a novel algorithm for distributed stochastic gradient descent (SGD) with compressed gradient communication in the parameter-server framework.
Our gradient compression technique, named flattened one-bit stochastic gradient descent (FO-SGD), relies on two simple algorithmic ideas.
arXiv Detail & Related papers (2024-05-17T21:17:27Z)
- DiffRate: Differentiable Compression Rate for Efficient Vision Transformers [98.33906104846386]
Token compression aims to speed up large-scale vision transformers (e.g. ViTs) by pruning (dropping) or merging tokens.
DiffRate is a novel token compression method that has several appealing properties that prior work lacks.
arXiv Detail & Related papers (2023-05-29T10:15:19Z)
- EF-BV: A Unified Theory of Error Feedback and Variance Reduction Mechanisms for Biased and Unbiased Compression in Distributed Optimization [7.691755449724637]
In distributed or federated optimization and learning, communication between the different computing units is often the bottleneck.
There are two classes of compression operators (unbiased and biased/contractive) and separate algorithms making use of them; a sketch contrasting the two classes appears after this list.
We propose a new algorithm, recovering DIANA and EF21 as particular cases.
arXiv Detail & Related papers (2022-05-09T10:44:23Z)
- Unified Multivariate Gaussian Mixture for Efficient Neural Image Compression [151.3826781154146]
Modeling latent variables with priors and hyperpriors is an essential problem in variational image compression.
We find that inter-correlations and intra-correlations exist when observing latent variables from a vectorized perspective.
Our model has better rate-distortion performance and an impressive $3.18\times$ compression speed up.
arXiv Detail & Related papers (2022-03-21T11:44:17Z)
- Distributed Methods with Absolute Compression and Error Compensation [1.52292571922932]
Communication compression is a powerful approach to alleviating the communication bottleneck in distributed training.
In this paper, we generalize the analysis of EC-SGD with absolute compression to the arbitrary sampling strategy.
Our rates improve upon the previously known ones in this setting.
arXiv Detail & Related papers (2022-03-04T15:41:14Z)
- Permutation Compressors for Provably Faster Distributed Nonconvex Optimization [68.8204255655161]
We show that the MARINA method of Gorbunov et al. (2021) can be considered a state-of-the-art method in terms of theoretical communication complexity.
We extend the theory of MARINA to support potentially correlated compressors, taking the method beyond the classical independent compressors setting.
arXiv Detail & Related papers (2021-10-07T09:38:15Z)
- Escaping Saddle Points with Compressed SGD [8.014396597444296]
Stochastic gradient descent (SGD) is a prevalent optimization technique for large-scale distributed machine learning.
We show that SGD augmented with gradient compression converges to an $\varepsilon$-first-order stationary point.
When the gradient is not Lipschitz, SGD with the RandomK compressor converges to an $\varepsilon$-second-order stationary point ($\varepsilon$-SOSP) with the same number of iterations as SGD.
arXiv Detail & Related papers (2021-05-21T01:56:43Z)
- An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems [77.88178159830905]
Sparsity-Inducing Distribution-based Compression (SIDCo) is a threshold-based sparsification scheme that enjoys similar threshold estimation quality to deep gradient compression (DGC).
Our evaluation shows SIDCo speeds up training by up to 41.7%, 7.6%, and 1.9% compared to the no-compression baseline, Topk, and DGC compressors, respectively.
arXiv Detail & Related papers (2021-01-26T13:06:00Z)
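The entries above repeatedly contrast the two compressor classes mentioned in the EF-BV summary: unbiased operators such as the RandomK sparsifier, and biased but contractive operators such as Top-K. Below is a minimal NumPy sketch (illustrative only; the dimension, sparsity level, and sample count are arbitrary assumptions) that checks the defining properties of each class numerically.

```python
import numpy as np

rng = np.random.default_rng(1)

def rand_k(x, k, rng):
    """Unbiased RandK sparsifier: keep k random coordinates, rescale by d/k
    so that E[rand_k(x)] = x."""
    d = x.size
    out = np.zeros_like(x)
    idx = rng.choice(d, size=k, replace=False)
    out[idx] = x[idx] * (d / k)
    return out

def top_k(x, k):
    """Biased (contractive) Top-K: keep the k largest-magnitude coordinates."""
    out = np.zeros_like(x)
    idx = np.argpartition(np.abs(x), -k)[-k:]
    out[idx] = x[idx]
    return out

d, k = 20, 4
x = rng.normal(size=d)

# Unbiasedness of RandK: the empirical mean of many compressions approaches x.
mean_rk = np.mean([rand_k(x, k, rng) for _ in range(20000)], axis=0)
print("RandK bias (should be close to 0):", np.linalg.norm(mean_rk - x))

# Contractive property of Top-K: ||C(x) - x||^2 <= (1 - k/d) ||x||^2.
lhs = np.linalg.norm(top_k(x, k) - x) ** 2
rhs = (1 - k / d) * np.linalg.norm(x) ** 2
print("Top-K contraction holds:", lhs <= rhs)
```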