Compressed Communication for Distributed Training: Adaptive Methods and
System
- URL: http://arxiv.org/abs/2105.07829v1
- Date: Mon, 17 May 2021 13:41:47 GMT
- Title: Compressed Communication for Distributed Training: Adaptive Methods and
System
- Authors: Yuchen Zhong, Cong Xie, Shuai Zheng, Haibin Lin
- Abstract summary: Communication overhead severely hinders the scalability of distributed machine learning systems.
Recently, there has been a growing interest in using gradient compression to reduce the communication overhead.
In this paper, we first introduce a novel adaptive gradient method with gradient compression.
- Score: 13.244482588437972
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication overhead severely hinders the scalability of distributed
machine learning systems. Recently, there has been a growing interest in using
gradient compression to reduce the communication overhead of the distributed
training. However, there is little understanding of applying gradient
compression to adaptive gradient methods. Moreover, its performance benefits
are often limited by the non-negligible compression overhead. In this paper, we
first introduce a novel adaptive gradient method with gradient compression. We
show that the proposed method has a convergence rate of
$\mathcal{O}(1/\sqrt{T})$ for non-convex problems. In addition, we develop a
scalable system called BytePS-Compress for two-way compression, where the
gradients are compressed in both directions between workers and parameter
servers. BytePS-Compress pipelines the compression and decompression on CPUs
and achieves a high degree of parallelism. Empirical evaluations show that we
improve the training time of ResNet50, VGG16, and BERT-base by 5.0%, 58.1%, and
23.3%, respectively, without any accuracy loss over 25 Gb/s networking.
Furthermore, for training the BERT models, we achieve a compression rate of
333x compared to mixed-precision training.
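The abstract outlines the overall recipe (an adaptive optimizer driven by compressed gradients, with compression in both directions) without spelling out the algorithm. Purely as an illustration of that recipe, the minimal sketch below combines 1-bit (sign) compression with error feedback and an Adam-style update; the class name, hyperparameters, and the choice of compressor are assumptions, not the method proposed in the paper.

```python
# Illustrative sketch only (assumed names/hyperparameters): an Adam-style update
# driven by 1-bit (sign) compressed gradients with error feedback.
import numpy as np

class CompressedAdaptiveWorker:
    def __init__(self, dim, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
        self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
        self.m = np.zeros(dim)      # first moment estimate
        self.v = np.zeros(dim)      # second moment estimate
        self.error = np.zeros(dim)  # error-feedback memory for the compressor
        self.t = 0

    def compress(self, grad):
        """Scaled sign compression; the residual is kept locally and re-added
        on the next call (error feedback)."""
        corrected = grad + self.error
        scale = np.mean(np.abs(corrected))
        compressed = scale * np.sign(corrected)  # what would be communicated
        self.error = corrected - compressed      # remember what compression lost
        return compressed

    def step(self, params, grad):
        self.t += 1
        g = self.compress(grad)  # in a real system this travels worker <-> server
        self.m = self.beta1 * self.m + (1 - self.beta1) * g
        self.v = self.beta2 * self.v + (1 - self.beta2) * g * g
        m_hat = self.m / (1 - self.beta1 ** self.t)
        v_hat = self.v / (1 - self.beta2 ** self.t)
        return params - self.lr * m_hat / (np.sqrt(v_hat) + self.eps)

# Toy usage: one step on a random "gradient".
w = np.zeros(10)
worker = CompressedAdaptiveWorker(dim=10)
w = worker.step(w, np.random.randn(10))
```

Error feedback stores whatever the compressor discards and re-injects it at the next step, which is the standard trick that keeps aggressive compressors from hurting convergence.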
Related papers
- Accelerating Large Language Model Training with Hybrid GPU-based Compression [3.204387803072905]
MPI libraries have been shown to reduce message size significantly and to leverage interconnect bandwidth.
We investigate the efficacy of compression-assisted MPI collectives in the context of distributed Large Language Model (LLM) training.
arXiv Detail & Related papers (2024-09-04T04:05:30Z)
- Beyond Throughput and Compression Ratios: Towards High End-to-end Utility of Gradient Compression [13.255861297820326]
Gradient compression can reduce the volume of gradient data communicated during training.
In practice, gradient compression schemes do not accelerate the training process while also preserving accuracy.
We identify common issues in previous gradient compression systems and evaluation methodologies.
arXiv Detail & Related papers (2024-07-01T15:32:28Z)
- Communication-Efficient Federated Learning via Quantized Compressed Sensing [82.10695943017907]
The presented framework consists of gradient compression for wireless devices and gradient reconstruction for a parameter server.
Thanks to gradient sparsification and quantization, our strategy can achieve a higher compression ratio than one-bit gradient compression.
We demonstrate that the framework achieves performance almost identical to the no-compression case; a minimal sketch of the underlying sparsify-then-quantize idea follows this entry.
arXiv Detail & Related papers (2021-11-30T02:13:54Z)
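The entry above points to a sparsify-then-quantize pipeline on the device side with reconstruction at the parameter server. The sketch below shows only the general sparsify-then-quantize idea (top-k selection followed by uniform scalar quantization); it is not the paper's quantized compressed sensing scheme, and the function names and bit width are assumptions.

```python
# Hedged illustration of sparsify-then-quantize gradient compression
# (top-k + uniform 4-bit quantization); names and bit width are assumptions.
import numpy as np

def sparsify_topk(grad, k):
    """Keep the k largest-magnitude entries; return their indices and values."""
    idx = np.argpartition(np.abs(grad), -k)[-k:]
    return idx, grad[idx]

def quantize_uniform(values, bits=4):
    """Uniform scalar quantization of the surviving values to `bits` bits."""
    levels = 2 ** bits - 1
    vmin, vmax = values.min(), values.max()
    scale = (vmax - vmin) / levels if vmax > vmin else 1.0
    codes = np.round((values - vmin) / scale).astype(np.uint8)
    return codes, vmin, scale

def dequantize(codes, vmin, scale):
    return vmin + codes.astype(np.float64) * scale

# Compress a toy gradient to 100 indices plus 4-bit codes, then reconstruct.
g = np.random.randn(10_000)
idx, vals = sparsify_topk(g, k=100)
codes, vmin, scale = quantize_uniform(vals)
g_hat = np.zeros_like(g)
g_hat[idx] = dequantize(codes, vmin, scale)  # server-side reconstruction
```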
- Quantization for Distributed Optimization [0.0]
We present a set of all-reduce-compatible gradient compression schemes that significantly reduce the communication overhead while maintaining the performance of vanilla SGD.
Our compression methods perform better than the built-in methods currently offered by deep learning frameworks.
arXiv Detail & Related papers (2021-09-26T05:16:12Z)
- CD-SGD: Distributed Stochastic Gradient Descent with Compression and Delay Compensation [3.0786359925181315]
Communication overhead is the key challenge for distributed training.
The gradient compression technique can greatly alleviate the impact of communication overhead.
However, gradient compression introduces extra cost, which delays the next training iteration.
arXiv Detail & Related papers (2021-06-21T01:15:12Z)
- Towards Compact CNNs via Collaborative Compression [166.86915086497433]
We propose a Collaborative Compression scheme, which jointly applies channel pruning and tensor decomposition to compress CNN models; a generic sketch of combining the two follows this entry.
We achieve a 52.9% FLOPs reduction by removing 48.4% of the parameters of ResNet-50, with only a 0.56% Top-1 accuracy drop on ImageNet 2012.
arXiv Detail & Related papers (2021-05-24T12:07:38Z)
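As a rough illustration of combining the two compression axes mentioned in the entry above (channel pruning and low-rank decomposition), the sketch below prunes output channels by filter norm and then factorizes the surviving weights with a truncated SVD. It is a generic example under assumed thresholds, not the paper's Collaborative Compression procedure.

```python
# Generic sketch of channel pruning followed by low-rank factorization; the
# keep ratio and rank are assumptions, not the paper's jointly optimized values.
import numpy as np

def prune_channels(weight, keep_ratio=0.5):
    """weight: (out_ch, in_ch, kh, kw). Keep the output channels with the
    largest L1 norms."""
    out_ch = weight.shape[0]
    norms = np.abs(weight).reshape(out_ch, -1).sum(axis=1)
    keep = np.sort(np.argsort(norms)[-int(out_ch * keep_ratio):])
    return weight[keep], keep

def low_rank_factorize(weight, rank):
    """Flatten the kernel to 2-D and keep the top `rank` singular components."""
    out_ch = weight.shape[0]
    mat = weight.reshape(out_ch, -1)
    u, s, vt = np.linalg.svd(mat, full_matrices=False)
    a = u[:, :rank] * s[:rank]   # (out_ch, rank)
    b = vt[:rank]                # (rank, in_ch * kh * kw)
    return a, b                  # a @ b approximates mat

w = np.random.randn(64, 32, 3, 3)
w_pruned, kept = prune_channels(w, keep_ratio=0.5)
a, b = low_rank_factorize(w_pruned, rank=8)
```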
- ScaleCom: Scalable Sparsified Gradient Compression for Communication-Efficient Distributed Training [74.43625662170284]
Large-scale distributed training of Deep Neural Networks (DNNs) on state-of-the-art platforms is expected to be severely communication constrained.
We propose a new compression technique that leverages similarity in the gradient distribution amongst learners to provide significantly improved scalability.
We experimentally demonstrate that ScaleCom has small overheads, directly reduces gradient traffic and provides high compression rates (65-400X) and excellent scalability (up to 64 learners and 8-12X larger batch sizes over standard training) without significant accuracy loss.
arXiv Detail & Related papers (2021-04-21T02:22:10Z)
- An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems [77.88178159830905]
Sparsity-Inducing Distribution-based Compression (SIDCo) is a threshold-based sparsification scheme with threshold-estimation quality similar to deep gradient compression (DGC); a sketch of the threshold-estimation idea follows this entry.
Our evaluation shows that SIDCo speeds up training by up to 41.7%, 7.6%, and 1.9% compared to the no-compression baseline, Top-k, and DGC compressors, respectively.
arXiv Detail & Related papers (2021-01-26T13:06:00Z)
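The entry above describes estimating a sparsification threshold from the gradient's statistics instead of computing an exact top-k. The sketch below illustrates that general idea by fitting an exponential model to the gradient magnitudes and inverting its tail; the choice of distribution and the target density are assumptions, not SIDCo's exact estimator.

```python
# Hedged sketch of statistical threshold-based sparsification: model |g| with a
# simple exponential fit and invert its tail to hit a target density. The
# distribution choice is an assumption, not SIDCo's estimator.
import numpy as np

def statistical_threshold(grad, target_density=0.01):
    """Assume |g| ~ Exponential(lam); return t with P(|g| > t) = target_density,
    i.e. t = -ln(target_density) / lam."""
    mags = np.abs(grad)
    lam = 1.0 / (mags.mean() + 1e-12)  # maximum-likelihood rate estimate
    return -np.log(target_density) / lam

def sparsify_by_threshold(grad, target_density=0.01):
    t = statistical_threshold(grad, target_density)
    mask = np.abs(grad) > t
    return np.nonzero(mask)[0], grad[mask]  # indices and surviving values

g = np.random.laplace(size=100_000)   # toy "gradient"
idx, vals = sparsify_by_threshold(g, 0.01)
print(len(idx) / g.size)              # close to 0.01 when the model fits
```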
- Sparse Communication for Training Deep Networks [56.441077560085475]
Synchronous stochastic gradient descent (SGD) is the most common method used for distributed training of deep learning models.
In this algorithm, each worker shares its local gradients with the others and updates the parameters using the average of all workers' gradients.
We study several compression schemes and identify how three key parameters affect performance.
arXiv Detail & Related papers (2020-09-19T17:28:11Z)
- PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning [62.440827696638664]
We introduce a simple algorithm that directly compresses the model differences between neighboring workers.
Inspired by PowerSGD for centralized deep learning, this algorithm uses power iteration steps to maximize the information transferred per bit; a minimal sketch of a single power step follows this entry.
arXiv Detail & Related papers (2020-08-04T09:14:52Z)
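To make the power-iteration idea concrete, here is a minimal sketch that compresses a two-dimensional model difference to rank one with a single power step, in the spirit of PowerSGD/PowerGossip. The rank-1 choice and the warm-started right factor are simplifications, not the paper's full gossip protocol.

```python
# Minimal sketch: rank-1 compression of a model difference via one power step,
# in the spirit of PowerSGD/PowerGossip. Warm-starting q and using rank 1 are
# simplifications; the real protocol exchanges these factors between neighbors.
import numpy as np

def power_compress(delta, q_prev=None):
    """delta: 2-D difference between neighboring workers' parameters.
    Returns factors (p, q) such that np.outer(p, q) approximates delta."""
    n, m = delta.shape
    q = q_prev if q_prev is not None else np.random.randn(m)
    p = delta @ q                   # one power-iteration step
    p /= np.linalg.norm(p) + 1e-12  # normalize the left factor
    q = delta.T @ p                 # refit the right factor
    return p, q                     # two vectors instead of an n*m matrix

delta = np.random.randn(512, 256)
p, q = power_compress(delta)
approx = np.outer(p, q)             # reconstruction on the receiving worker
```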
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.