Efficient N:M Sparse DNN Training Using Algorithm, Architecture, and
Dataflow Co-Design
- URL: http://arxiv.org/abs/2309.13015v1
- Date: Fri, 22 Sep 2023 17:26:19 GMT
- Title: Efficient N:M Sparse DNN Training Using Algorithm, Architecture, and
Dataflow Co-Design
- Authors: Chao Fang, Wei Sun, Aojun Zhou, Zhongfeng Wang
- Abstract summary: This paper presents a computation-efficient training scheme for N:M sparse DNNs using algorithm, architecture, and dataflow co-design.
At the algorithm level, a bidirectional weight pruning method, dubbed BDWP, is proposed to leverage the N:M sparsity of weights.
At the architecture level, a sparse accelerator for DNN training, namely SAT, is developed to support both the regular dense operations and the computation-efficient N:M sparse operations.
- Score: 15.47240906902083
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sparse training is one of the promising techniques to reduce the
computational cost of DNNs while retaining high accuracy. In particular, N:M
fine-grained structured sparsity, where only N out of consecutive M elements
can be nonzero, has attracted attention due to its hardware-friendly pattern
and capability of achieving a high sparse ratio. However, the potential to
accelerate N:M sparse DNN training has not been fully exploited, and there is a
lack of efficient hardware supporting N:M sparse training. To tackle these
challenges, this paper presents a computation-efficient training scheme for N:M
sparse DNNs using algorithm, architecture, and dataflow co-design. At the
algorithm level, a bidirectional weight pruning method, dubbed BDWP, is
proposed to leverage the N:M sparsity of weights during both forward and
backward passes of DNN training, which can significantly reduce the
computational cost while maintaining model accuracy. At the architecture level,
a sparse accelerator for DNN training, namely SAT, is developed to neatly
support both the regular dense operations and the computation-efficient N:M
sparse operations. At the dataflow level, multiple optimization methods, including
interleave mapping, pre-generation of N:M sparse weights, and offline scheduling,
are proposed to boost the computational efficiency of SAT. Finally,
the effectiveness of our training scheme is evaluated on a Xilinx VCU1525 FPGA
card using various DNN models and datasets. Experimental results show the SAT
accelerator with the BDWP sparse training method under a 2:8 sparse ratio
achieves an average speedup of 1.75x over dense training,
accompanied by a negligible accuracy loss of 0.56% on average. Furthermore, our
proposed training scheme significantly improves the training throughput by
2.97~25.22x and the energy efficiency by 1.36~3.58x over prior FPGA-based
accelerators.
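To make the N:M pattern concrete, below is a minimal NumPy sketch of 2:8 group-wise magnitude pruning: within every group of M=8 consecutive weights, only the N=2 largest-magnitude entries are kept. This is only an illustration of the sparsity pattern the paper targets, not the authors' BDWP implementation or the SAT hardware mapping; the function name and the grouping along the flattened last dimension are assumptions made for this example.

```python
import numpy as np

def nm_prune(weights: np.ndarray, n: int = 2, m: int = 8) -> np.ndarray:
    """Keep the n largest-magnitude entries in every group of m consecutive
    weights; zero out the rest.

    Minimal sketch of N:M fine-grained structured sparsity (e.g., 2:8).
    Assumes the total number of weights per row is divisible by m.
    """
    orig_shape = weights.shape
    groups = weights.reshape(-1, m)                     # one row per group of m weights
    # indices of the (m - n) smallest-magnitude entries in each group
    drop_idx = np.argsort(np.abs(groups), axis=1)[:, : m - n]
    mask = np.ones_like(groups, dtype=bool)
    np.put_along_axis(mask, drop_idx, False, axis=1)    # clear the dropped slots
    return (groups * mask).reshape(orig_shape)

# Example: an 8x16 weight matrix pruned to 2:8 sparsity (75% of entries zeroed)
w = np.random.randn(8, 16).astype(np.float32)
w_sparse = nm_prune(w, n=2, m=8)
assert (w_sparse.reshape(-1, 8) != 0).sum(axis=1).max() <= 2
```

Per the abstract, BDWP applies this kind of N:M constraint to the weights in both the forward and backward passes of training, which is what yields the reported 1.75x average speedup at 2:8 on SAT; the sketch above covers only the pruning pattern, not the training loop.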
Related papers
- Exploiting Symmetric Temporally Sparse BPTT for Efficient RNN Training [20.49255973077044]
This work describes a training algorithm for Delta RNNs that exploits temporal sparsity in the backward propagation phase to reduce computational requirements for training on the edge.
Results show a reduction of ~80% in matrix operations for training a 56k-parameter Delta LSTM on the Fluent Speech Commands dataset with negligible accuracy loss.
We show that the proposed Delta RNN training will be useful for online incremental learning on edge devices with limited computing resources.
arXiv Detail & Related papers (2023-12-14T23:07:37Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Recurrent Bilinear Optimization for Binary Neural Networks [58.972212365275595]
BNNs neglect the intrinsic bilinear relationship of real-valued weights and scale factors.
Our work is the first attempt to optimize BNNs from the bilinear perspective.
We obtain robust RBONNs, which show impressive performance over state-of-the-art BNNs on various models and datasets.
arXiv Detail & Related papers (2022-09-04T06:45:33Z)
- DNN Training Acceleration via Exploring GPGPU Friendly Sparsity [16.406482603838157]
We propose Approximate Random Dropout, which replaces the conventional random dropout of neurons and synapses with regular, online-generated row-based or tile-based dropout patterns (a minimal row-based mask sketch is given after this list).
We then develop an SGD-based search algorithm that produces the distribution of row-based or tile-based dropout patterns to compensate for the potential accuracy loss.
We also propose a sensitivity-aware dropout method that dynamically drops input feature maps based on their sensitivity, achieving greater forward and backward training acceleration.
arXiv Detail & Related papers (2022-03-11T01:32:03Z)
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
- Quantized Neural Networks via {-1, +1} Encoding Decomposition and Acceleration [83.84684675841167]
We propose a novel encoding scheme that uses {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
arXiv Detail & Related papers (2021-06-18T03:11:15Z)
- FracTrain: Fractionally Squeezing Bit Savings Both Temporally and Spatially for Efficient DNN Training [81.85361544720885]
We propose FracTrain, which integrates progressive fractional quantization that gradually increases the precision of activations, weights, and gradients during training.
FracTrain reduces the computational cost and hardware-quantified energy/latency of DNN training while achieving comparable or better (-0.12% to +1.87%) accuracy.
arXiv Detail & Related papers (2020-12-24T05:24:10Z)
- Procrustes: a Dataflow and Accelerator for Sparse Deep Neural Network Training [0.5219568203653523]
We develop a sparse DNN training accelerator that produces pruned models with the same accuracy as dense models, without first training, then pruning, and finally retraining a dense model.
Compared to training the equivalent unpruned models on a state-of-the-art DNN accelerator without sparse training support, Procrustes consumes up to 3.26x less energy and offers up to 4x speedup across a range of models, while pruning weights by an order of magnitude and maintaining unpruned accuracy.
arXiv Detail & Related papers (2020-09-23T07:39:55Z)
- Learning Low-rank Deep Neural Networks via Singular Vector Orthogonality Regularization and Singular Value Sparsification [53.50708351813565]
We propose SVD training, the first method to explicitly achieve low-rank DNNs during training without applying SVD on every step.
We empirically show that SVD training can significantly reduce the rank of DNN layers and achieve a higher reduction in computation load at the same accuracy.
arXiv Detail & Related papers (2020-04-20T02:40:43Z)
- A New MRAM-based Process In-Memory Accelerator for Efficient Neural Network Training with Floating Point Precision [28.458719513745812]
We propose a spin-orbit torque magnetic random access memory (SOT-MRAM) based digital PIM accelerator that supports floating-point precision.
Experimental results show that the proposed SOT-MRAM PIM-based DNN training accelerator achieves 3.3x, 1.8x, and 2.5x improvements in energy, latency, and area, respectively.
arXiv Detail & Related papers (2020-03-02T04:58:54Z)
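As a companion to the Approximate Random Dropout entry above (referenced there), the following is a minimal NumPy sketch of a regular, row-based dropout mask: whole rows are dropped with a fixed probability, so the surviving computation stays dense and accelerator-friendly. The function name, the inverted-dropout rescaling, and the fixed drop probability are illustrative assumptions; the cited work additionally searches over pattern distributions with an SGD-based algorithm, which is not shown here.

```python
import numpy as np

def row_dropout_mask(n_rows: int, n_cols: int, drop_prob: float,
                     rng: np.random.Generator) -> np.ndarray:
    """Regular, row-based dropout: drop entire rows of an activation matrix
    instead of individual elements. Illustrative sketch only."""
    keep_rows = rng.random(n_rows) >= drop_prob          # one keep/drop decision per row
    mask = np.repeat(keep_rows[:, None], n_cols, axis=1) # broadcast the decision across columns
    # inverted-dropout rescaling so the expected activation magnitude is preserved
    return mask.astype(np.float32) / (1.0 - drop_prob)

# Example: mask a 64x128 activation matrix with 50% row-based dropout
rng = np.random.default_rng(0)
acts = rng.standard_normal((64, 128)).astype(np.float32)
masked = acts * row_dropout_mask(64, 128, drop_prob=0.5, rng=rng)
```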