Domain-specific Communication Optimization for Distributed DNN Training
- URL: http://arxiv.org/abs/2008.08445v1
- Date: Sun, 16 Aug 2020 09:53:21 GMT
- Title: Domain-specific Communication Optimization for Distributed DNN Training
- Authors: Hao Wang, Jingrong Chen, Xinchen Wan, Han Tian, Jiacheng Xia, Gaoxiong
Zeng, Weiyan Wang, Kai Chen, Wei Bai, Junchen Jiang
- Abstract summary: We present DLCP, a novel solution that exploits the domain-specific properties of deep learning to optimize the communication overhead of DNN training in a fine-grained manner.
It exploits the bounded loss tolerance of SGD-based training to improve tail communication latency, which cannot be avoided purely through gradient compression.
It then performs fine-grained packet-level prioritization and dropping, as opposed to flow-level scheduling, based on the layers and magnitudes of gradients to further speed up model convergence without affecting accuracy.
- Score: 10.781867496460837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Communication overhead poses an important obstacle to distributed DNN
training and has drawn increasing attention in recent years. Despite continuous
efforts, prior solutions such as gradient compression/reduction,
compute/communication overlapping, and layer-wise flow scheduling remain
coarse-grained and insufficient for efficient distributed training, especially
when the network is under pressure. We present DLCP, a novel solution that
exploits the domain-specific properties of deep learning to optimize the
communication overhead of DNN training in a fine-grained manner. At its heart,
DLCP comprises several key innovations beyond prior work: it exploits the
bounded loss tolerance of SGD-based training to improve tail communication
latency, which cannot be avoided purely through gradient compression. It then
performs fine-grained packet-level prioritization and dropping, as opposed to
flow-level scheduling, based on the layers and magnitudes of gradients to
further speed up model convergence without affecting accuracy. In addition, it
leverages inter-packet order-independency to perform per-packet load balancing
without causing classical re-ordering issues. DLCP works with both Parameter
Server and collective communication routines. We have implemented DLCP with
commodity switches, integrated it with various training frameworks including
TensorFlow, MXNet and PyTorch, and deployed it in our small-scale testbed with
10 Nvidia V100 GPUs. Our testbed experiments and large-scale simulations show
that DLCP delivers up to 84.3% additional training acceleration over the best
existing solutions. A simplified sketch of these packet-level mechanisms follows.
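The Python sketch below is a minimal, hypothetical illustration of the three mechanisms named in the abstract: layer- and magnitude-based packet prioritization, bounded-loss packet dropping, and order-independent aggregation. The GradientPacket structure, priority thresholds, and drop budget are assumptions made for exposition; this is not DLCP's actual switch-based implementation.

```python
# Illustrative sketch only. GradientPacket, the priority thresholds, and the
# drop budget are assumed for exposition, not taken from the DLCP paper.
from dataclasses import dataclass

import numpy as np


@dataclass
class GradientPacket:
    layer_id: int       # index of the DNN layer this gradient slice belongs to
    offset: int         # start index of the slice within the layer's flat gradient
    values: np.ndarray  # gradient values carried by this packet


def packet_priority(pkt: GradientPacket, num_layers: int) -> int:
    """Map a packet to one of four priority classes (0 = highest).

    Assumed heuristic: packets carrying larger-magnitude gradients, and packets
    from layers nearer the input (whose parameters are needed first by the next
    forward pass), score higher and receive a better priority class.
    """
    magnitude = float(np.mean(np.abs(pkt.values)))
    layer_weight = 1.0 - pkt.layer_id / max(num_layers, 1)
    score = magnitude * (0.5 + 0.5 * layer_weight)
    thresholds = [1e-2, 1e-3, 1e-4]  # assumed class boundaries
    for cls, thr in enumerate(thresholds):
        if score > thr:
            return cls
    return len(thresholds)


def may_drop(pkt: GradientPacket, num_layers: int,
             dropped_fraction: float, budget: float = 0.01) -> bool:
    """Bounded loss tolerance: only the lowest-priority packets may be dropped,
    and only while the cumulative dropped fraction stays under a small budget
    (the 1% default is an assumed value, not a number from the paper)."""
    lowest_class = 3
    return (dropped_fraction < budget and
            packet_priority(pkt, num_layers) == lowest_class)


def aggregate(packets: list, layer_sizes: dict) -> dict:
    """Order-independent aggregation: each packet is applied at
    (layer_id, offset), so packets can take different paths (per-packet load
    balancing) and arrive in any order without a re-ordering buffer.
    Slices from dropped packets simply remain zero."""
    grads = {lid: np.zeros(size, dtype=np.float32)
             for lid, size in layer_sizes.items()}
    for pkt in packets:
        grads[pkt.layer_id][pkt.offset:pkt.offset + len(pkt.values)] += pkt.values
    return grads
```

In this toy model, a receiver would call aggregate on whatever packets arrive within a deadline and hand the (partially zero) gradients to the optimizer, relying on SGD's tolerance to the bounded loss.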
Related papers
- FusionLLM: A Decentralized LLM Training System on Geo-distributed GPUs with Adaptive Compression [55.992528247880685]
Decentralized training faces significant challenges regarding system design and efficiency.
We present FusionLLM, a decentralized training system designed and implemented for training large deep neural networks (DNNs).
We show that our system and method achieve a 1.45-9.39x speedup over baseline methods while ensuring convergence.
arXiv Detail & Related papers (2024-10-16T16:13:19Z)
- Structure-Preserving Network Compression Via Low-Rank Induced Training Through Linear Layers Composition [11.399520888150468]
We present a theoretically justified technique termed Low-Rank Induced Training (LoRITa).
LoRITa promotes low-rankness through the composition of linear layers and compresses by using singular value truncation.
We demonstrate the effectiveness of our approach using MNIST on Fully Connected Networks, CIFAR10 on Vision Transformers, and CIFAR10/100 and ImageNet on Convolutional Neural Networks.
arXiv Detail & Related papers (2024-05-06T00:58:23Z)
- Accelerating Distributed Deep Learning using Lossless Homomorphic Compression [17.654138014999326]
We introduce a novel compression algorithm that effectively merges worker-level compression with in-network aggregation.
We show up to a 6.33x improvement in aggregation throughput and a 3.74x increase in per-iteration training speed.
arXiv Detail & Related papers (2024-02-12T09:57:47Z)
- Boosting Distributed Full-graph GNN Training with Asynchronous One-bit Communication [23.883543151975136]
Training Graph Neural Networks (GNNs) on large graphs is challenging due to the conflict between the high memory demand and limited GPU memory.
We propose Sylvie, an efficient distributed GNN training framework that employs a one-bit quantization technique in GNNs.
Specifically, Sylvie provides a lightweight Low-bit Module to quantize the data sent and dequantize the received data back to full-precision values in each layer.
arXiv Detail & Related papers (2023-03-02T14:02:39Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Federated Dynamic Sparse Training: Computing Less, Communicating Less, Yet Learning Better [88.28293442298015]
Federated learning (FL) enables distribution of machine learning workloads from the cloud to resource-limited edge devices.
We develop, implement, and experimentally validate a novel FL framework termed Federated Dynamic Sparse Training (FedDST)
FedDST is a dynamic process that extracts and trains sparse sub-networks from the target full network.
arXiv Detail & Related papers (2021-12-18T02:26:38Z)
- ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training [65.68511423300812]
We propose ProgFed, a progressive training framework for efficient and effective federated learning.
ProgFed inherently reduces computation and two-way communication costs while maintaining the strong performance of the final models.
Our results show that ProgFed converges at the same rate as standard training on full models.
arXiv Detail & Related papers (2021-10-11T14:45:00Z)
- Training Recommender Systems at Scale: Communication-Efficient Model and Data Parallelism [56.78673028601739]
We propose a compression framework called Dynamic Communication Thresholding (DCT) for communication-efficient hybrid training.
DCT reduces communication by at least 100x and 20x during data parallelism (DP) and model parallelism (MP), respectively.
It improves end-to-end training time for a state-of-the-art industrial recommender model by 37%, without any loss in performance.
arXiv Detail & Related papers (2020-10-18T01:44:42Z)
- Sparse Communication for Training Deep Networks [56.441077560085475]
Synchronous stochastic gradient descent (SGD) is the most common method used for distributed training of deep learning models.
In this algorithm, each worker shares its local gradients with others and updates the parameters using the average gradients of all workers.
We study several compression schemes and identify how three key parameters affect the performance.
arXiv Detail & Related papers (2020-09-19T17:28:11Z)
- Is Network the Bottleneck of Distributed Training? [36.925680383195356]
We take a first-principles approach to measure and analyze the network performance of distributed training.
We find that the network is running at low utilization and that if the network can be fully utilized, distributed training can achieve a scaling factor of close to one.
arXiv Detail & Related papers (2020-06-17T19:00:31Z)
- Caramel: Accelerating Decentralized Distributed Deep Learning with Computation Scheduling [1.5785002371773138]
Caramel is a system that accelerates distributed deep learning through model-aware scheduling and communication optimizations for AllReduce.
Caramel maintains the correctness of the dataflow model, is hardware-independent, and does not require any user-level or framework-level changes.
arXiv Detail & Related papers (2020-04-29T08:32:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.