Near-Optimal Sparse Allreduce for Distributed Deep Learning
- URL: http://arxiv.org/abs/2201.07598v1
- Date: Wed, 19 Jan 2022 13:56:57 GMT
- Title: Near-Optimal Sparse Allreduce for Distributed Deep Learning
- Authors: Shigang Li, Torsten Hoefler
- Abstract summary: Communication overhead is one of the major obstacles to training large deep learning models at scale.
This paper proposes O$k$-Top$k$, a scheme for distributed training with sparse gradients.
- Score: 18.99898181586806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication overhead is one of the major obstacles to training large deep
learning models at scale. Gradient sparsification is a promising technique to
reduce the communication volume. However, it is very challenging to obtain real
performance improvement because of (1) the difficulty of achieving a scalable
and efficient sparse allreduce algorithm and (2) the sparsification overhead.
This paper proposes O$k$-Top$k$, a scheme for distributed training with sparse
gradients. O$k$-Top$k$ integrates a novel sparse allreduce algorithm (less than
6$k$ communication volume which is asymptotically optimal) with the
decentralized parallel Stochastic Gradient Descent (SGD) optimizer, and its
convergence is proved. To reduce the sparsification overhead, O$k$-Top$k$
efficiently selects the top-$k$ gradient values according to an estimated
threshold. Evaluations are conducted on the Piz Daint supercomputer with neural
network models from different deep learning domains. Empirical results show
that O$k$-Top$k$ achieves similar model accuracy to dense allreduce. Compared
with the optimized dense and the state-of-the-art sparse allreduces,
O$k$-Top$k$ is more scalable and significantly improves training throughput
(e.g., 3.29x-12.95x improvement for BERT on 256 GPUs).
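To make the threshold-based top-$k$ idea concrete, the sketch below is a minimal, single-process illustration (not the authors' O$k$-Top$k$ implementation): each worker keeps only the gradient entries whose magnitude exceeds a threshold estimated from a random sample, which approximates top-$k$ selection without a full sort, and the resulting sparse contributions are summed and averaged as a stand-in for the sparse allreduce. The function names and constants here are illustrative assumptions.

```python
import numpy as np

def sparsify_by_estimated_threshold(grad, k, sample_size=1000, rng=None):
    """Keep roughly the k largest-magnitude entries of `grad`.

    The magnitude cutoff is estimated from a random sample of the gradient,
    a stand-in for the threshold-estimation scheme described in the paper,
    so an exact top-k sort over the full gradient is avoided.
    """
    rng = rng or np.random.default_rng()
    sample = rng.choice(np.abs(grad), size=min(sample_size, grad.size), replace=False)
    threshold = np.quantile(sample, 1.0 - k / grad.size)
    indices = np.nonzero(np.abs(grad) >= threshold)[0]
    return indices, grad[indices]

def simulated_sparse_allreduce(worker_grads, k, rng=None):
    """Sum the sparse top-k contributions of all workers, then average
    (a single-process stand-in for a distributed sparse allreduce)."""
    aggregate = np.zeros_like(worker_grads[0])
    for grad in worker_grads:
        idx, vals = sparsify_by_estimated_threshold(grad, k, rng=rng)
        aggregate[idx] += vals
    return aggregate / len(worker_grads)

# Example: 4 simulated workers, ~1% gradient density.
rng = np.random.default_rng(0)
grads = [rng.standard_normal(10_000) for _ in range(4)]
avg_sparse_grad = simulated_sparse_allreduce(grads, k=100, rng=rng)
```

In the actual scheme, only the selected indices and values are exchanged, and the paper's sparse allreduce algorithm keeps the per-worker communication volume below 6$k$.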
Related papers
- Differential Private Stochastic Optimization with Heavy-tailed Data: Towards Optimal Rates [15.27596975662702]
We explore algorithms achieving optimal rates of DP optimization with heavy-tailed gradients.
Our results match the minimax lower bound of [Kamath et al., 2022], indicating that the theoretical limit of convex optimization under DP is achievable.
arXiv Detail & Related papers (2024-08-19T11:07:05Z) - Memory-Efficient Gradient Unrolling for Large-Scale Bi-level Optimization [71.35604981129838]
Traditional gradient-based bi-level optimization algorithms are ill-suited to meet the demands of large-scale applications.
We introduce $(\text{FG})^2\text{U}$, which achieves an unbiased approximation of the meta gradient for bi-level optimization.
$(\text{FG})^2\text{U}$ is inherently designed to support parallel computing, enabling it to effectively leverage large-scale distributed computing systems.
arXiv Detail & Related papers (2024-06-20T08:21:52Z) - Stochastic Constrained Decentralized Optimization for Machine Learning with Fewer Data Oracles: a Gradient Sliding Approach [32.36073823372713]
In machine-learning settings, the algorithm has to communicate with the data center and sample data to compute its gradient.
This gives rise to the need for a decentralized optimization algorithm that is communication-efficient and minimizes the number of gradient computations.
We propose a primal-dual sliding with conditional gradient sliding framework, which is communication-efficient and achieves an $\varepsilon$-approximate solution.
arXiv Detail & Related papers (2024-04-03T06:55:59Z) - Adaptive Federated Minimax Optimization with Lower Complexities [82.51223883622552]
We propose an efficient adaptive minimax optimization algorithm (i.e., AdaFGDA) to solve these minimax problems.
It builds on momentum-based variance-reduced and local-SGD techniques, and it flexibly incorporates various adaptive learning rates.
arXiv Detail & Related papers (2022-11-14T12:32:18Z) - Communication-Efficient Adam-Type Algorithms for Distributed Data Mining [93.50424502011626]
We propose a class of novel distributed Adam-type algorithms (i.e., SketchedAMSGrad) utilizing sketching.
Our new algorithm achieves a fast convergence rate of $O(\frac{1}{\sqrt{nT}} + \frac{1}{(k/d)^2 T})$ with a communication cost of $O(k \log(d))$ at each iteration.
arXiv Detail & Related papers (2022-10-14T01:42:05Z) - Rethinking gradient sparsification as total error minimization [0.0]
Gradient compression is a widely-established remedy to tackle the communication bottleneck in distributed training of deep neural networks (DNNs).
We argue that realizing the benefits of gradient sparsification, especially for DNNs, requires a shift in perspective -- one that moves from per-iteration optimality to optimality over the entire training.
arXiv Detail & Related papers (2021-08-02T14:52:42Z) - Effective Model Sparsification by Scheduled Grow-and-Prune Methods [73.03533268740605]
We propose a novel scheduled grow-and-prune (GaP) methodology without pre-training the dense models.
Experiments have shown that such models can match or beat the quality of highly optimized dense models at 80% sparsity on a variety of tasks.
arXiv Detail & Related papers (2021-06-18T01:03:13Z) - On the Benefits of Multiple Gossip Steps in Communication-Constrained Decentralized Optimization [29.42301299741866]
We show that $O(\log\frac{1}{\epsilon})$ gradient iterations with constant step size, interleaved with $O(\log\frac{1}{\epsilon})$ gossip steps, enable convergence to within $\epsilon$ of the optimal value for smooth non-convex objectives.
To our knowledge, this is the first work that derives convergence results for non-convex optimization under compressed communication.
arXiv Detail & Related papers (2020-11-20T21:17:32Z) - Sparse Communication for Training Deep Networks [56.441077560085475]
Synchronous stochastic gradient descent (SGD) is the most common method used for distributed training of deep learning models.
In this algorithm, each worker shares its local gradients with the others and updates the parameters using the average gradient of all workers (a minimal sketch of this dense averaging step appears after this list).
We study several compression schemes and identify how three key parameters affect the performance.
arXiv Detail & Related papers (2020-09-19T17:28:11Z) - Gradient Free Minimax Optimization: Variance Reduction and Faster Convergence [120.9336529957224]
In this paper, we study gradient-free (zeroth-order) minimax optimization in the nonconvex-strongly-concave setting.
We show that a novel zeroth-order variance reduced descent algorithm achieves the best known query complexity.
arXiv Detail & Related papers (2020-06-16T17:55:46Z)
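Several of the entries above compare against the dense baseline in which every worker averages the full gradients of all workers. The following is a minimal, hypothetical sketch of that dense averaging step using mpi4py; the library choice, tensor size, and launch setup are assumptions, not taken from any of the listed papers.

```python
# Minimal sketch of dense gradient averaging via allreduce. Assumes mpi4py is
# installed and the script is launched with mpirun/srun; illustrative only.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
world_size = comm.Get_size()

local_grad = np.random.standard_normal(10_000)  # this worker's local gradient
avg_grad = np.empty_like(local_grad)

# Sum the local gradients of all workers, then divide by the worker count.
comm.Allreduce(local_grad, avg_grad, op=MPI.SUM)
avg_grad /= world_size
```

Sparse schemes such as O$k$-Top$k$ aim to replace this full-gradient exchange with a much smaller, top-$k$-selected payload while preserving model accuracy.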
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.