Distributed Deep Learning using Stochastic Gradient Staleness
- URL: http://arxiv.org/abs/2509.05679v1
- Date: Sat, 06 Sep 2025 11:05:40 GMT
- Title: Distributed Deep Learning using Stochastic Gradient Staleness
- Authors: Viet Hoang Pham, Hyo-Sung Ahn
- Abstract summary: High-performing deep neural networks (DNNs) tend to become increasingly deep and require extensive training datasets. This paper introduces a distributed training method that integrates data parallelism with a fully decoupled parallel backpropagation algorithm.
- Score: 4.254099382808598
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the notable success of deep neural networks (DNNs) in solving complex tasks, the training process still presents considerable challenges. A primary obstacle is the substantial time required for training, particularly as high-performing DNNs tend to become increasingly deep (characterized by a larger number of hidden layers) and require extensive training datasets. To address these challenges, this paper introduces a distributed training method that integrates two prominent strategies for accelerating deep learning: data parallelism and a fully decoupled parallel backpropagation algorithm. By utilizing multiple computational units operating in parallel, the proposed approach increases the amount of training data processed in each iteration while mitigating the locking issues commonly associated with the backpropagation algorithm. These features collectively contribute to significant improvements in training efficiency. The proposed distributed training method is rigorously proven to converge to critical points under certain conditions. Its effectiveness is further demonstrated through empirical evaluations, wherein a DNN is trained to perform classification tasks on the CIFAR-10 dataset.
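The abstract combines two mechanisms: data parallelism and a decoupled backward pass in which earlier layers update from stale gradients instead of waiting for the full backpropagation to finish. Below is a minimal PyTorch sketch of the staleness idea alone, assuming a two-stage split and a staleness of one step; the stage sizes, learning rates, and toy data are illustrative, and the data-parallel side (several such workers averaging gradients over data shards) is omitted. This is a sketch of the general technique, not the authors' implementation.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy data: 10-dimensional inputs with a linearly separable 2-class label.
X = torch.randn(512, 10)
y = (X.sum(dim=1) > 0).long()

# The network is split into two stages that could, in principle, live on
# different devices and update concurrently.
stage1 = nn.Sequential(nn.Linear(10, 32), nn.ReLU())
stage2 = nn.Linear(32, 2)
opt1 = torch.optim.SGD(stage1.parameters(), lr=0.1)
opt2 = torch.optim.SGD(stage2.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

stale_grad = None   # dL/d(stage1 output) from the previous iteration
stale_input = None  # the minibatch that produced that gradient

for step in range(200):
    idx = torch.randint(0, X.size(0), (32,))
    xb, yb = X[idx], y[idx]

    # Forward through stage1; detaching stops stage2's backward pass here,
    # which is what decouples the two stages.
    h = stage1(xb).detach().requires_grad_(True)

    # Stage2 computes the loss and updates immediately.
    loss = loss_fn(stage2(h), yb)
    opt2.zero_grad()
    loss.backward()
    opt2.step()

    # Stage1 updates using the activation gradient from the *previous*
    # iteration (staleness = 1): it never waits for stage2's current
    # backward pass, so in a real deployment both stages run in parallel.
    if stale_grad is not None:
        opt1.zero_grad()
        stage1(stale_input).backward(stale_grad)
        opt1.step()

    stale_grad, stale_input = h.grad, xb

    if step % 50 == 0:
        print(f"step {step:3d}  loss {loss.item():.3f}")
```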
Related papers
- Prior-Fitted Networks Scale to Larger Datasets When Treated as Weak Learners [82.72552644267724]
BoostPFN can outperform standard PFNs trained on the same number of samples in large datasets. High performance is maintained for up to 50x the pre-training size of PFNs.
arXiv Detail & Related papers (2025-03-03T07:31:40Z) - Efficient Training of Deep Neural Operator Networks via Randomized Sampling [0.0]
We introduce a random sampling technique to be adopted in the training of DeepONet. We demonstrate substantial reductions in training time while achieving comparable or lower overall test errors relative to the traditional training approach. Our results indicate that incorporating randomization in the trunk network inputs during training enhances the efficiency and robustness of DeepONet.
arXiv Detail & Related papers (2024-09-20T07:18:31Z) - Denoising Pre-Training and Customized Prompt Learning for Efficient Multi-Behavior Sequential Recommendation [69.60321475454843]
We propose DPCPL, the first pre-training and prompt-tuning paradigm tailored for Multi-Behavior Sequential Recommendation.
In the pre-training stage, we propose a novel Efficient Behavior Miner (EBM) to filter out the noise at multiple time scales.
Subsequently, we propose to tune the pre-trained model in a highly efficient manner with the proposed Customized Prompt Learning (CPL) module.
arXiv Detail & Related papers (2024-08-21T06:48:38Z) - Efficient Deep Learning with Decorrelated Backpropagation [1.9731499060686393]
We show for the first time that much more efficient training of deep convolutional neural networks is feasible by embracing decorrelated backpropagation as a mechanism for learning. We achieve a more than two-fold speed-up and higher test accuracy compared to backpropagation when training several deep networks, up to a 50-layer ResNet model.
arXiv Detail & Related papers (2024-05-03T17:21:13Z) - Stochastic Unrolled Federated Learning [85.6993263983062]
We introduce Stochastic UnRolled Federated learning (SURF), a method that expands algorithm unrolling to federated learning.
Our proposed method tackles two challenges of this expansion, namely the need to feed whole datasets to the unrolled optimizers and the decentralized nature of federated learning.
arXiv Detail & Related papers (2023-05-24T17:26:22Z) - Efficient Training of Multi-task Neural Solver for Combinatorial Optimization [23.694457372640912]
We propose a general and efficient training paradigm to deliver a unified multi-task neural solver. Our method significantly enhances overall performance, whether or not the training budget is constrained, and achieves the best results compared to single-task and multi-task learning approaches.
arXiv Detail & Related papers (2023-05-10T14:20:34Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - DBS: Dynamic Batch Size For Distributed Deep Neural Network Training [19.766163856388694]
We propose the Dynamic Batch Size (DBS) strategy for the distributed training of Deep Neural Networks (DNNs).
Specifically, the performance of each worker is first evaluated based on the previous epoch, and then the batch size and dataset partition are dynamically adjusted (a schematic sketch of this rebalancing step appears after this list).
The experimental results indicate that the proposed strategy can fully utilize the performance of the cluster, reduce the training time, and remain robust to disturbance from irrelevant tasks.
arXiv Detail & Related papers (2020-07-23T07:31:55Z) - Understanding the Effects of Data Parallelism and Sparsity on Neural Network Training [126.49572353148262]
We study two factors in neural network training: data parallelism and sparsity.
Despite their promising benefits, a clear understanding of their effects on neural network training remains elusive.
arXiv Detail & Related papers (2020-03-25T10:49:22Z) - Subset Sampling For Progressive Neural Network Learning [106.12874293597754]
Progressive Neural Network Learning is a class of algorithms that incrementally construct the network's topology and optimize its parameters based on the training data.
We propose to speed up this process by exploiting subsets of training data at each incremental training step.
Experimental results in object, scene and face recognition problems demonstrate that the proposed approach speeds up the optimization procedure considerably.
arXiv Detail & Related papers (2020-02-17T18:57:33Z)
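The DBS entry above describes a concrete mechanism: measure each worker's performance in the last epoch, then reassign batch sizes and data partitions accordingly. The snippet below is a minimal, hypothetical sketch of one such rebalancing rule, assuming throughput (samples per second) is the performance signal and the global batch size stays fixed; the function name and the proportional-allocation rule are illustrative assumptions, not the paper's exact procedure.

```python
# Hypothetical DBS-style rebalancing: after each epoch, reassign per-worker
# batch sizes in proportion to measured throughput, keeping the global
# batch size fixed so faster workers receive larger slices.
def rebalance_batch_sizes(throughputs, global_batch_size):
    """throughputs: samples/sec measured for each worker in the last epoch."""
    total = sum(throughputs)
    sizes = [max(1, round(global_batch_size * t / total)) for t in throughputs]
    # Absorb rounding drift in the last worker so sizes sum exactly.
    sizes[-1] += global_batch_size - sum(sizes)
    return sizes

# A worker that is twice as fast gets roughly twice the batch:
print(rebalance_batch_sizes([120.0, 60.0, 20.0], 256))  # -> [154, 77, 25]
```

The same proportions could drive the dataset repartition, so that every worker finishes its epoch slice in roughly the same wall-clock time.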