Efficient Distributed Auto-Differentiation
- URL: http://arxiv.org/abs/2102.09631v1
- Date: Thu, 18 Feb 2021 21:46:27 GMT
- Title: Efficient Distributed Auto-Differentiation
- Authors: Bradley T. Baker, Vince D. Calhoun, Barak Pearlmutter, Sergey M. Plis
- Abstract summary: Gradient-based algorithms for training large deep neural networks (DNNs) are communication-heavy.
We introduce a surprisingly simple statistic for training distributed DNNs that is more communication-friendly than the gradient.
The process provides the flexibility of averaging gradients during backpropagation, enabling novel flexible training schemas.
- Score: 22.192220404846267
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Although distributed machine learning has opened up numerous frontiers of
research, the separation of large models across different devices, nodes, and
sites can invite significant communication overhead, making reliable training
difficult.
The focus on gradients as the primary shared statistic during training has
led to a number of intuitive algorithms for distributed deep learning; however,
gradient-based algorithms for training large deep neural networks (DNNs) are
communication-heavy, often requiring additional modifications via sparsity
constraints, compression, quantization, and other similar approaches, to lower
bandwidth.
We introduce a surprisingly simple statistic for training distributed DNNs
that is more communication-friendly than the gradient. The error
backpropagation process can be modified to share these smaller intermediate
values instead of the gradient, reducing communication overhead with no impact
on accuracy. The process provides the flexibility of averaging gradients during
backpropagation, enabling novel flexible training schemas while leaving room
for further bandwidth reduction via existing gradient compression methods.
Finally, consideration of the matrices used to compute the gradient inspires a
new approach to compression via structured power iterations, which can not only
reduce bandwidth but also enable introspection into distributed training
dynamics, without significant performance loss.
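A rough reading of the idea above, for a single fully connected layer: reverse-mode auto-differentiation already produces the layer inputs and the backpropagated deltas, and the gradient is just their matrix product, so workers can share those two smaller factors instead of the gradient and still rebuild the exact averaged gradient. The numpy sketch below only illustrates that arithmetic, with made-up shapes and names; it is not the authors' implementation.

    import numpy as np

    rng = np.random.default_rng(0)

    # Illustrative sizes: per-worker batch is much smaller than the layer
    # dimensions, so the AD factors are far smaller than the full gradient.
    batch, d_in, d_out, n_workers = 8, 1024, 1024, 4

    # Each worker's reverse-mode AD already yields the layer inputs A_i
    # (batch x d_in) and the backpropagated deltas D_i (batch x d_out).
    A = [rng.standard_normal((batch, d_in)) for _ in range(n_workers)]
    D = [rng.standard_normal((batch, d_out)) for _ in range(n_workers)]

    # Baseline: every worker ships its full gradient, d_in * d_out values.
    avg_grad = sum(A_i.T @ D_i for A_i, D_i in zip(A, D)) / (n_workers * batch)

    # Factor sharing: ship A_i and D_i instead, batch * (d_in + d_out) values,
    # and rebuild exactly the same averaged gradient at the aggregator.
    A_all = np.concatenate(A)            # (n_workers * batch, d_in)
    D_all = np.concatenate(D)            # (n_workers * batch, d_out)
    avg_grad_from_factors = (A_all.T @ D_all) / (n_workers * batch)

    assert np.allclose(avg_grad, avg_grad_from_factors)
    print("values per worker, gradient:", d_in * d_out)
    print("values per worker, factors :", batch * (d_in + d_out))

With these illustrative sizes the factors cost 16,384 values per worker versus 1,048,576 for the full gradient, which is the kind of bandwidth gap the abstract describes.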
Related papers
- FLARE: Detection and Mitigation of Concept Drift for Federated Learning based IoT Deployments [2.7776688429637466]
FLARE is a lightweight dual-scheduler FL framework that conditionally transfers training data and deploys models between edge and sensor endpoints.
We show that FLARE can significantly reduce the amount of data exchanged between edge and sensor nodes compared to fixed-interval scheduling methods.
It can successfully detect concept drift reactively with at least a 16x reduction in latency.
arXiv Detail & Related papers (2023-05-15T10:09:07Z)
- Scaling Forward Gradient With Local Losses [117.22685584919756]
Forward learning is a biologically plausible alternative to backprop for learning deep neural networks.
We show that it is possible to substantially reduce the variance of the forward gradient by applying perturbations to activations rather than weights.
Our approach matches backprop on MNIST and CIFAR-10 and significantly outperforms previously proposed backprop-free algorithms on ImageNet (see the activation-perturbation sketch after this list).
arXiv Detail & Related papers (2022-10-07T03:52:27Z)
- Scaling Private Deep Learning with Low-Rank and Sparse Gradients [5.14780936727027]
We propose a framework that exploits the low-rank and sparse structure of neural networks to reduce the dimension of gradient updates.
A novel strategy is utilized to sparsify the gradients, resulting in low-dimensional, less noisy updates.
Empirical evaluation on natural language processing and computer vision tasks shows that our method outperforms other state-of-the-art baselines.
arXiv Detail & Related papers (2022-07-06T14:09:47Z)
- Distribution Mismatch Correction for Improved Robustness in Deep Neural Networks [86.42889611784855]
Normalization methods increase vulnerability to noise and input corruptions.
We propose an unsupervised non-parametric distribution correction method that adapts the activation distribution of each layer.
In our experiments, we empirically show that the proposed method effectively reduces the impact of intense image corruptions.
arXiv Detail & Related papers (2021-10-05T11:36:25Z)
- Cogradient Descent for Dependable Learning [64.02052988844301]
We propose a dependable learning method based on the Cogradient Descent (CoGD) algorithm to address the bilinear optimization problem.
CoGD is introduced to solve bilinear problems when one variable has a sparsity constraint.
It can also be used to decompose the association of features and weights, which further generalizes our method to better train convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-06-20T04:28:20Z)
- Sparse-Push: Communication- & Energy-Efficient Decentralized Distributed Learning over Directed & Time-Varying Graphs with non-IID Datasets [2.518955020930418]
We propose Sparse-Push, a communication efficient decentralized distributed training algorithm.
The proposed algorithm enables a 466x reduction in communication with only a 1% degradation in performance.
We demonstrate how communication compression can lead to significant performance degradation in the case of non-IID datasets.
arXiv Detail & Related papers (2021-02-10T19:41:11Z)
- An Efficient Statistical-based Gradient Compression Technique for Distributed Training Systems [77.88178159830905]
Sparsity-Inducing Distribution-based Compression (SIDCo) is a threshold-based sparsification scheme that enjoys similar threshold estimation quality to deep gradient compression (DGC).
Our evaluation shows SIDCo speeds up training by up to 41.7%, 7.6%, and 1.9% compared to the no-compression baseline, Topk, and DGC compressors, respectively (a threshold-based sparsification sketch appears after this list).
arXiv Detail & Related papers (2021-01-26T13:06:00Z)
- Sparse Communication for Training Deep Networks [56.441077560085475]
Synchronous stochastic gradient descent (SGD) is the most common method used for distributed training of deep learning models.
In this algorithm, each worker shares its local gradients with others and updates the parameters using the average gradients of all workers.
We study several compression schemes and identify how three key parameters affect the performance.
arXiv Detail & Related papers (2020-09-19T17:28:11Z)
- PowerGossip: Practical Low-Rank Communication Compression in Decentralized Deep Learning [62.440827696638664]
We introduce a simple algorithm that directly compresses the model differences between neighboring workers.
Inspired by the PowerSGD algorithm for centralized deep learning, this algorithm uses power iteration steps to maximize the information transferred per bit (see the power-iteration sketch after this list).
arXiv Detail & Related papers (2020-08-04T09:14:52Z)
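On the Scaling Forward Gradient entry above: the claimed variance reduction comes from perturbing activations (a few dimensions per sample) instead of weights (one dimension per parameter). The numpy sketch below illustrates that effect on a single linear layer with a quadratic loss, using closed-form directional derivatives; it is only a hedged illustration, not the paper's local-loss algorithm, and all sizes and names are made up.

    import numpy as np

    rng = np.random.default_rng(0)
    batch, d_in, d_out = 64, 256, 10

    # Toy layer and quadratic loss L = 0.5 * ||x @ W - t||^2, so the true
    # gradient and all directional derivatives are available in closed form.
    x = rng.standard_normal((batch, d_in))
    W = rng.standard_normal((d_in, d_out)) * 0.1
    t = rng.standard_normal((batch, d_out))
    err = x @ W - t                      # dL/dy
    true_grad = x.T @ err                # dL/dW

    def weight_perturbed():
        # Perturb the weights: one shared direction over d_in * d_out dims.
        V = rng.standard_normal(W.shape)
        # Directional derivative <dL/dW, V>; in practice this comes from a
        # forward-mode JVP, not from the true gradient as shortcut here.
        dLdV = np.sum(true_grad * V)
        return dLdV * V                  # unbiased but very noisy estimate

    def activity_perturbed():
        # Perturb the activations: independent d_out-dim noise per sample.
        U = rng.standard_normal(err.shape)
        s = np.sum(err * U, axis=1, keepdims=True)  # per-sample directional deriv.
        return x.T @ (s * U)             # unbiased estimate of dL/dW

    def rel_error(estimator, trials=200):
        errs = [np.linalg.norm(estimator() - true_grad) for _ in range(trials)]
        return np.mean(errs) / np.linalg.norm(true_grad)

    print("weight perturbation  :", rel_error(weight_perturbed))
    print("activity perturbation:", rel_error(activity_perturbed))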
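On the SIDCo and Sparse Communication entries above: both send only a small subset of gradient entries per round. The sketch below contrasts exact top-k selection with a threshold estimated from a fitted exponential model of the gradient magnitudes, assuming Laplace-distributed values so the fit is exact; SIDCo's actual multi-stage estimators and any error feedback are omitted, and all names are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    n, keep = 100_000, 1_000                  # send roughly 1% of the entries

    # Stand-in gradient; real gradients are heavy-tailed, which is what
    # distribution-fitting threshold schemes rely on.
    grad = rng.laplace(scale=1.0, size=n)

    # Exact top-k selection (partition/sort based, as in Topk compressors).
    topk_idx = np.argpartition(np.abs(grad), -keep)[-keep:]

    # Threshold estimated from a fitted exponential model of |grad|:
    # choose t so that the tail mass exp(-t / mean) equals keep / n.
    mean_abs = np.abs(grad).mean()
    threshold = mean_abs * np.log(n / keep)
    thresh_idx = np.nonzero(np.abs(grad) > threshold)[0]
    print("kept by top-k     :", topk_idx.size)
    print("kept by threshold :", thresh_idx.size)

    # Each worker would then transmit only (indices, values); averaging the
    # scattered sparse updates approximates the dense averaged gradient.
    sparse = np.zeros(n)
    sparse[topk_idx] = grad[topk_idx]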
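On the PowerGossip entry above, and the abstract's mention of structured power iterations: the compressed message is a pair of low-rank factors refined by power iteration. Below is a bare-bones numpy sketch of one such compressor, in the spirit of PowerSGD-style methods; warm starts across rounds, error feedback, and the decentralized gossip averaging are all omitted, and every name is illustrative.

    import numpy as np

    def power_compress(M, Q, iters=2):
        """Approximate M (n x m) by rank-r factors P (n x r) and Q (m x r).

        Communicating P and Q costs r * (n + m) values instead of n * m,
        which is the point of power-iteration style compressors.
        """
        for _ in range(iters):
            P, _ = np.linalg.qr(M @ Q)   # orthonormalize the left factor
            Q = M.T @ P                  # one power step refines the right factor
        return P, Q                      # M is approximated by P @ Q.T

    rng = np.random.default_rng(0)
    n, m, r = 512, 512, 4

    # A nearly rank-r matrix standing in for a gradient or model difference.
    M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))
    M += 0.01 * rng.standard_normal((n, m))

    Q0 = rng.standard_normal((m, r))     # in practice warm-started across rounds
    P, Q = power_compress(M, Q0)

    print("relative error :", np.linalg.norm(M - P @ Q.T) / np.linalg.norm(M))
    print("values, full   :", n * m)
    print("values, factors:", r * (n + m))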
This list is automatically generated from the titles and abstracts of the papers on this site.