Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse
Training
- URL: http://arxiv.org/abs/2306.12230v2
- Date: Thu, 30 Nov 2023 03:44:26 GMT
- Title: Fantastic Weights and How to Find Them: Where to Prune in Dynamic Sparse
Training
- Authors: Aleksandra I. Nowak, Bram Grooten, Decebal Constantin Mocanu, Jacek
Tabor
- Abstract summary: We study the influence of pruning criteria on Dynamic Sparse Training (DST) performance.
We find that most of the studied methods yield similar results.
The best performance is predominantly given by the simplest technique: magnitude-based pruning.
- Score: 58.47622737624532
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Dynamic Sparse Training (DST) is a rapidly evolving area of research that
seeks to optimize the sparse initialization of a neural network by adapting its
topology during training. It has been shown that under specific conditions, DST
is able to outperform dense models. The key components of this framework are
the pruning and growing criteria, which are repeatedly applied during the
training process to adjust the network's sparse connectivity. While the growing
criterion's impact on DST performance is relatively well studied, the influence
of the pruning criterion remains overlooked. To address this issue, we design
and perform an extensive empirical analysis of various pruning criteria to
better understand their impact on the dynamics of DST solutions. Surprisingly,
we find that most of the studied methods yield similar results. The differences
become more significant in the low-density regime, where the best performance
is predominantly given by the simplest technique: magnitude-based pruning. The
code is provided at https://github.com/alooow/fantastic_weights_paper
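To make the prune-and-grow cycle concrete, below is a minimal PyTorch-style sketch of one topology update for a single layer: the smallest-magnitude active weights are dropped (magnitude-based pruning, the criterion found to work best above) and the same number of inactive connections are regrown. The function name, the fixed update fraction, and the random regrowth rule are illustrative assumptions, not the authors' implementation (see the linked repository for that).

```python
import torch

def prune_and_grow(weight: torch.Tensor, mask: torch.Tensor, update_frac: float = 0.3):
    """One DST topology update for a single layer (illustrative sketch, not the paper's code).

    weight      -- dense weight tensor of the layer
    mask        -- binary tensor of the same shape; 1 marks an active connection
    update_frac -- fraction of currently active weights to prune and regrow
    """
    active = mask.bool()
    n_update = max(1, int(update_frac * int(active.sum())))

    # Pruning criterion: drop the active weights with the smallest magnitude.
    magnitudes = weight.abs().masked_fill(~active, float("inf"))
    drop_idx = torch.topk(magnitudes.flatten(), n_update, largest=False).indices

    new_mask = mask.clone().flatten()
    new_mask[drop_idx] = 0

    # Growing criterion (placeholder): regrow the same number of connections at random
    # among the currently inactive positions.
    inactive_idx = (new_mask == 0).nonzero(as_tuple=True)[0]
    grow_idx = inactive_idx[torch.randperm(inactive_idx.numel())[:n_update]]
    new_mask[grow_idx] = 1
    new_mask = new_mask.view(mask.shape)

    # Pruned weights are zeroed; newly grown connections therefore start from zero.
    weight.data.mul_(new_mask)
    return new_mask
```

In a full DST loop this update would be applied periodically to every sparse layer between regular training steps, keeping the overall density constant while the connectivity pattern evolves.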
Related papers
- Continual Learning with Dynamic Sparse Training: Exploring Algorithms
for Effective Model Updates [13.983410740333788]
Continual learning (CL) refers to the ability of an intelligent system to sequentially acquire and retain knowledge from a stream of data with as little computational overhead as possible.
Dynamic Sparse Training (DST) is a prominent way to find such sparse subnetworks and isolate them for each task.
This paper is the first empirical study investigating the effect of different DST components under the CL paradigm.
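As a loose illustration of the one-sparse-subnetwork-per-task idea, the hedged sketch below keeps a binary mask per task over a shared weight matrix; the class `TaskMaskedLinear`, the random mask initialization, and the fixed density are hypothetical and not taken from the paper.

```python
import torch
import torch.nn as nn

class TaskMaskedLinear(nn.Module):
    """Linear layer sharing one weight tensor across tasks, with a binary mask per task
    (hypothetical sketch of the 'one sparse subnetwork per task' idea)."""

    def __init__(self, in_features: int, out_features: int, density: float = 0.1):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.density = density
        self.masks = {}  # task_id -> binary mask over the weight matrix

    def init_task(self, task_id: int):
        # Random sparse mask at the target density; DST would then adapt it during training.
        self.masks[task_id] = (torch.rand_like(self.linear.weight) < self.density).float()

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        masked_weight = self.linear.weight * self.masks[task_id]
        return nn.functional.linear(x, masked_weight, self.linear.bias)
```

Because the forward pass multiplies by the task's mask, gradients only flow to that task's active weights; keeping earlier tasks' subnetworks intact additionally requires the masks to be disjoint or the shared weights to be protected, a detail omitted here.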
arXiv Detail & Related papers (2023-08-28T18:31:09Z)
- Accurate Neural Network Pruning Requires Rethinking Sparse Optimization [87.90654868505518]
We show the impact of high sparsity on model training using the standard computer vision and natural language processing sparsity benchmarks.
We provide new approaches for mitigating this issue for both sparse pre-training of vision models and sparse fine-tuning of language models.
arXiv Detail & Related papers (2023-08-03T21:49:14Z)
- What to Prune and What Not to Prune at Initialization [0.0]
Post-training dropout-based approaches achieve high sparsity.
Initialization pruning is more efficacious when it comes to scaling the computational cost of the network.
The goal is to achieve higher sparsity while preserving performance.
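As one concrete example of pruning at initialization (not necessarily the criterion studied in this paper), the sketch below scores connections with a SNIP-style saliency |gradient x weight| on a single batch and keeps only the top fraction; the function name and the density argument are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def snip_style_prune_at_init(model: nn.Module, x, y, density: float = 0.1):
    """Pruning at initialization with a SNIP-style saliency |grad * weight| (illustrative)."""
    loss = F.cross_entropy(model(x), y)
    params = [p for p in model.parameters() if p.dim() > 1]  # weight matrices/kernels only
    grads = torch.autograd.grad(loss, params)

    saliency = torch.cat([(g * p).abs().flatten() for g, p in zip(grads, params)])
    k = max(1, int(density * saliency.numel()))
    threshold = torch.topk(saliency, k).values.min()

    masks = []
    for g, p in zip(grads, params):
        mask = ((g * p).abs() >= threshold).float()
        p.data.mul_(mask)  # zero out pruned connections
        masks.append(mask)
    return masks
```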
arXiv Detail & Related papers (2022-09-06T03:48:10Z)
- Training Discrete Deep Generative Models via Gapped Straight-Through Estimator [72.71398034617607]
We propose a Gapped Straight-Through (GST) estimator to reduce the variance without incurring resampling overhead.
This estimator is inspired by the essential properties of Straight-Through Gumbel-Softmax.
Experiments demonstrate that the proposed GST estimator enjoys better performance compared to strong baselines on two discrete deep generative modeling tasks.
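GST itself is not reproduced here, but the Straight-Through Gumbel-Softmax it is inspired by is standard and easy to sketch: a hard one-hot sample in the forward pass with the soft relaxation's gradient in the backward pass. PyTorch also ships this as torch.nn.functional.gumbel_softmax(..., hard=True); GST changes how the soft surrogate is constructed so that variance is reduced without resampling.

```python
import torch
import torch.nn.functional as F

def straight_through_gumbel_softmax(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Standard ST Gumbel-Softmax (the inspiration for GST, not GST itself)."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    soft = F.softmax((logits + gumbel) / tau, dim=-1)
    hard = F.one_hot(soft.argmax(dim=-1), num_classes=logits.shape[-1]).to(soft.dtype)
    # Straight-through trick: hard one-hot in the forward pass, soft gradient in the backward pass.
    return hard + (soft - soft.detach())
```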
arXiv Detail & Related papers (2022-06-15T01:46:05Z)
- Accelerating Deep Learning with Dynamic Data Pruning [0.0]
Deep learning has become prohibitively costly, requiring access to powerful computing systems to train state-of-the-art networks.
Previous work, such as forget scores and GraNd/EL2N scores, identifies important samples within a full dataset and prunes the remaining ones, thereby reducing the number of iterations per epoch.
We propose two algorithms, based on reinforcement learning techniques, to dynamically prune samples and achieve even higher accuracy than the random dynamic method.
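For context, static score-based data pruning with EL2N-style scores can be sketched as below (the paper's own method is dynamic and reinforcement-learning based, and is not shown); the helper name and the kept fraction in the usage comment are illustrative.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def el2n_scores(model, loader, num_classes: int, device="cpu"):
    """EL2N-style importance scores: L2 norm of (softmax prediction - one-hot label)."""
    model.eval()
    scores = []
    for x, y in loader:
        probs = F.softmax(model(x.to(device)), dim=-1)
        onehot = F.one_hot(y.to(device), num_classes).float()
        scores.append((probs - onehot).norm(dim=-1).cpu())
    return torch.cat(scores)

# Usage sketch: keep only the hardest (highest-score) 70% of the training set.
# keep_idx = el2n_scores(model, loader, 10).argsort(descending=True)[: int(0.7 * n_samples)]
```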
arXiv Detail & Related papers (2021-11-24T16:47:34Z)
- Back to Basics: Efficient Network Compression via IMP [22.586474627159287]
Iterative Magnitude Pruning (IMP) is one of the most established approaches for network pruning.
It is often argued that IMP reaches suboptimal states because it does not incorporate sparsification into the training phase.
We find that IMP with SLR for retraining can outperform state-of-the-art pruning-during-training approaches.
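IMP itself is straightforward to sketch: train, globally prune the smallest-magnitude remaining weights, retrain, repeat. In the sketch below, `train_fn(model, masks)` is an assumed helper that runs the retraining schedule (e.g. SLR) while keeping masked weights at zero; it is illustrative, not the paper's code.

```python
import torch
import torch.nn as nn

def global_magnitude_masks(model: nn.Module, masks: dict, fraction: float) -> dict:
    """Prune `fraction` of the currently remaining weights, smallest magnitude first (global)."""
    weights = {n: p for n, p in model.named_parameters() if p.dim() > 1}
    remaining = torch.cat([p.detach().abs().flatten()[masks[n].flatten().bool()]
                           for n, p in weights.items()])
    k = max(1, int(fraction * remaining.numel()))
    threshold = torch.kthvalue(remaining, k).values
    return {n: masks[n] * (p.detach().abs() > threshold).float() for n, p in weights.items()}

def iterative_magnitude_pruning(model, train_fn, rounds: int = 5, per_round: float = 0.2):
    """IMP sketch: train, globally prune, retrain, repeat."""
    masks = {n: torch.ones_like(p) for n, p in model.named_parameters() if p.dim() > 1}
    train_fn(model, masks)                      # initial dense training
    for _ in range(rounds):
        masks = global_magnitude_masks(model, masks, per_round)
        for n, p in model.named_parameters():   # zero out newly pruned weights
            if n in masks:
                p.data.mul_(masks[n])
        train_fn(model, masks)                  # retrain the surviving weights
    return model, masks
```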
arXiv Detail & Related papers (2021-11-01T11:23:44Z)
- Sparse Training via Boosting Pruning Plasticity with Neuroregeneration [79.78184026678659]
We study the effect of pruning throughout training from the perspective of pruning plasticity.
We design a novel gradual magnitude pruning (GMP) method, named gradual pruning with zero-cost neuroregeneration (GraNet), and its dynamic sparse training (DST) variant (GraNet-ST).
Perhaps most impressively, the latter boosts sparse-to-sparse training performance over various dense-to-sparse methods by a large margin, for the first time, with ResNet-50 on ImageNet.
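GraNet builds on gradual magnitude pruning; the widely used cubic sparsity schedule underlying GMP is easy to sketch (the zero-cost neuroregeneration step that distinguishes GraNet is not shown here).

```python
def gradual_sparsity(step: int, s_init: float, s_final: float,
                     start_step: int, end_step: int) -> float:
    """Cubic sparsity schedule commonly used for gradual magnitude pruning:
    sparsity ramps from s_init to s_final between start_step and end_step.
    At each pruning step, the network is pruned by magnitude to the returned sparsity."""
    if step < start_step:
        return s_init
    if step >= end_step:
        return s_final
    progress = (step - start_step) / (end_step - start_step)
    return s_final + (s_init - s_final) * (1.0 - progress) ** 3
```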
arXiv Detail & Related papers (2021-06-19T02:09:25Z)
- A Partial Regularization Method for Network Compression [0.0]
We propose partial regularization, which penalizes only a subset of the parameters rather than all of them (full regularization), in order to perform model compression at higher speed.
Experimental results show that, as expected, computational complexity is reduced, with lower running times observed in almost all situations.
Surprisingly, it also improves important metrics such as regression fit and classification accuracy, in both the training and test phases, on multiple datasets.
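The core idea, penalizing only a subset of parameters, can be sketched as a partial L2 penalty; which parameters to include (here, selected by a name substring) is an illustrative assumption rather than the paper's selection rule.

```python
import torch
import torch.nn as nn

def partial_l2_penalty(model: nn.Module, include=("classifier", "fc"), lam: float = 1e-4):
    """L2 penalty over a subset of parameters only (partial-regularization sketch)."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, p in model.named_parameters():
        if any(tag in name for tag in include):
            penalty = penalty + p.pow(2).sum()
    return lam * penalty

# Training-loop usage (sketch):
# loss = criterion(model(x), y) + partial_l2_penalty(model)
```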
arXiv Detail & Related papers (2020-09-03T00:38:27Z)
- AdaS: Adaptive Scheduling of Stochastic Gradients [50.80697760166045]
We introduce the notions of "knowledge gain" and "mapping condition" and propose a new algorithm called Adaptive Scheduling (AdaS).
Experimentation reveals that, using the derived metrics, AdaS exhibits: (a) faster convergence and superior generalization over existing adaptive learning methods; and (b) lack of dependence on a validation set to determine when to stop training.
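Only the general mechanism, per-group learning rates driven by a per-group metric, is sketched below; AdaS's actual "knowledge gain" and "mapping condition" computations are not reproduced, and the helper name is illustrative.

```python
import torch

def apply_adaptive_lrs(optimizer: torch.optim.Optimizer, metrics: dict, base_lr: float = 0.1):
    """Generic per-group learning-rate scheduling driven by a per-group metric in [0, 1]
    (sketch of the general idea only, not AdaS's actual metric)."""
    for i, group in enumerate(optimizer.param_groups):
        group["lr"] = base_lr * float(metrics.get(i, 1.0))
```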
arXiv Detail & Related papers (2020-06-11T16:36:31Z)
- Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
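Tracking a Hessian norm without materializing the Hessian is commonly done with Hessian-vector products; the sketch below estimates the Hessian's spectral norm by power iteration and is an illustrative diagnostic, not necessarily the estimator used in the paper.

```python
import torch

def hessian_spectral_norm(loss_fn, params, iters: int = 20):
    """Estimate the largest absolute Hessian eigenvalue of `loss_fn()` w.r.t. `params`
    by power iteration with Hessian-vector products (illustrative diagnostic)."""
    loss = loss_fn()
    grads = torch.autograd.grad(loss, params, create_graph=True)
    v = [torch.randn_like(p) for p in params]

    for _ in range(iters):
        # Hessian-vector product: differentiate (grad . v) w.r.t. the parameters.
        gv = sum((g * vi).sum() for g, vi in zip(grads, v))
        hv = torch.autograd.grad(gv, params, retain_graph=True)
        norm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / (norm + 1e-12) for h in hv]
    return norm
```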
arXiv Detail & Related papers (2020-04-20T18:12:56Z)