Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With
Trainable Masked Layers
- URL: http://arxiv.org/abs/2005.06870v1
- Date: Thu, 14 May 2020 11:05:21 GMT
- Title: Dynamic Sparse Training: Find Efficient Sparse Network From Scratch With
Trainable Masked Layers
- Authors: Junjie Liu, Zhe Xu, Runbin Shi, Ray C. C. Cheung, Hayden K.H. So
- Abstract summary: We present a novel network pruning algorithm called Dynamic Sparse Training that can jointly find the optimal network parameters and sparse network structure.
We demonstrate that our dynamic sparse training algorithm can easily train very sparse neural network models with little performance loss.
- Score: 18.22501196339569
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel network pruning algorithm called Dynamic Sparse Training
that can jointly find the optimal network parameters and sparse network
structure in a unified optimization process with trainable pruning thresholds.
These thresholds can have fine-grained layer-wise adjustments dynamically via
backpropagation. We demonstrate that our dynamic sparse training algorithm can
easily train very sparse neural network models with little performance loss
using the same number of training epochs as dense models. Dynamic Sparse
Training achieves state-of-the-art performance compared with other sparse
training algorithms on various network architectures. Additionally, we have
several surprising observations that provide strong evidence for the
effectiveness and efficiency of our algorithm. These observations reveal the
underlying problems of traditional three-stage pruning algorithms and present
the potential guidance provided by our algorithm to the design of more compact
network architectures.
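
The mechanism the abstract describes, a mask produced by comparing weight magnitudes against trainable layer-wise thresholds that receive gradients through a surrogate for the step function, can be sketched roughly as below. This is a minimal illustration assuming PyTorch; the class name TrainableMaskedLinear, the exact surrogate-gradient shape, and the exp(-t) sparse regularizer follow the paper's general approach but should be read as assumptions, not the authors' reference implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinaryStep(torch.autograd.Function):
    """Unit step S(x) with a long-tailed surrogate gradient.

    The exact step function has zero gradient almost everywhere, so the
    backward pass substitutes a piecewise approximation (shape assumed
    here) that lets gradients flow to both weights and thresholds.
    """
    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return (x > 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        abs_x = x.abs()
        # Larger gradient near zero, decaying tail, zero far away.
        grad = torch.where(abs_x <= 0.4, 2.0 - 4.0 * abs_x,
               torch.where(abs_x <= 1.0, 0.4 * torch.ones_like(x),
                           torch.zeros_like(x)))
        return grad_output * grad

class TrainableMaskedLinear(nn.Module):
    """Linear layer whose sparse mask comes from trainable thresholds.

    Each output neuron i has a learnable threshold t_i; weight w_ij is
    kept iff |w_ij| > t_i. Because the mask is recomputed every step from
    the dense weights, pruned weights can be revived later, and sparsity
    is adjusted per layer by backpropagation instead of a fixed schedule.
    """
    def __init__(self, in_features, out_features):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(out_features, in_features))
        self.bias = nn.Parameter(torch.zeros(out_features))
        self.threshold = nn.Parameter(torch.zeros(out_features))
        nn.init.kaiming_uniform_(self.weight)

    def forward(self, x):
        # Mask Q = S(|W| - t), broadcasting t_i across each weight row.
        mask = BinaryStep.apply(self.weight.abs() - self.threshold.unsqueeze(1))
        return F.linear(x, self.weight * mask, self.bias)

    def sparse_regularizer(self):
        # exp(-t) penalty pushes thresholds upward, encouraging sparsity.
        return torch.exp(-self.threshold).sum()
```

In use, the task loss would be combined with the regularizer, e.g. `loss = F.cross_entropy(layer(x), y) + alpha * layer.sparse_regularizer()`, where the (assumed) coefficient alpha trades accuracy against sparsity.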