The Hardware Impact of Quantization and Pruning for Weights in Spiking
Neural Networks
- URL: http://arxiv.org/abs/2302.04174v1
- Date: Wed, 8 Feb 2023 16:25:20 GMT
- Title: The Hardware Impact of Quantization and Pruning for Weights in Spiking
Neural Networks
- Authors: Clemens JS Schaefer, Pooria Taheri, Mark Horeni, and Siddharth Joshi
- Abstract summary: Quantization and pruning of parameters can both compress model size, reduce memory footprint, and facilitate low-latency execution.
We study various combinations of pruning and quantization applied in isolation, cumulatively, and simultaneously to a state-of-the-art SNN targeting gesture recognition.
We show that this state-of-the-art model is amenable to aggressive parameter quantization, suffering no loss in accuracy down to ternary weights.
- Score: 0.368986335765876
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Energy-efficient implementations and deployments of spiking neural networks
(SNNs) have been of great interest due to the possibility of developing
artificial systems that can achieve the computational powers and energy
efficiency of the biological brain. Efficient implementations of SNNs on modern
digital hardware are also inspired by advances in machine learning and deep
neural networks (DNNs). Two techniques widely employed in the efficient
deployment of DNNs, the quantization and pruning of parameters, can both
compress the model size, reduce memory footprints, and facilitate low-latency
execution. The interaction between quantization and pruning and how they might
impact model performance on SNN accelerators is currently unknown. We study
various combinations of pruning and quantization in isolation, cumulatively,
and simultaneously (jointly) to a state-of-the-art SNN targeting gesture
recognition for dynamic vision sensor (DVS) cameras. We show that this
state-of-the-art model is amenable to aggressive parameter quantization,
suffering no loss in accuracy down to ternary weights. However, pruning
only maintains iso-accuracy up to 80% sparsity, which results in 45% more
energy than the best quantization on our architectural model. Applying both
pruning and quantization can incur some accuracy loss but offers a favourable
trade-off on the energy-accuracy Pareto frontier for the given hardware
configuration.
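As a rough illustration of the two compression techniques combined in this paper, the sketch below applies magnitude pruning followed by ternary weight quantization to a weight matrix. The pruning criterion, the ternary threshold heuristic, and the 80% sparsity value are illustrative assumptions, not the authors' implementation or architectural model.
```python
# Illustrative sketch (not the authors' implementation): magnitude pruning
# followed by ternary weight quantization, applied cumulatively to one tensor.
import numpy as np

def magnitude_prune(w, sparsity):
    """Zero out the smallest-magnitude weights until `sparsity` fraction is zero."""
    k = int(sparsity * w.size)
    if k == 0:
        return w.copy()
    thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
    return np.where(np.abs(w) <= thresh, 0.0, w)

def ternary_quantize(w, delta_scale=0.7):
    """Map weights to {-alpha, 0, +alpha} using a magnitude threshold (heuristic)."""
    delta = delta_scale * np.mean(np.abs(w))             # assumed threshold rule
    mask = np.abs(w) > delta
    alpha = np.abs(w[mask]).mean() if mask.any() else 0.0  # per-tensor scale
    return alpha * np.sign(w) * mask

rng = np.random.default_rng(0)
w = rng.normal(size=(128, 128)).astype(np.float32)
w_joint = ternary_quantize(magnitude_prune(w, sparsity=0.8))  # prune, then quantize
print((w_joint == 0).mean())  # effective sparsity after both steps
```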
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the computational time and space complexities from cubic and quadratic with respect to the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- Sparsity-Aware Hardware-Software Co-Design of Spiking Neural Networks: An Overview [1.0499611180329804]
Spiking Neural Networks (SNNs) are inspired by the sparse and event-driven nature of biological neural processing, and offer the potential for ultra-low-power artificial intelligence.
We explore the hardware-software co-design of sparse SNNs, examining how sparsity representation, hardware architectures, and training techniques influence hardware efficiency.
Our work aims to illuminate the path towards embedded neuromorphic systems that fully exploit the computational advantages of sparse SNNs.
arXiv Detail & Related papers (2024-08-26T17:22:11Z)
- Q-SNNs: Quantized Spiking Neural Networks [12.719590949933105]
Spiking Neural Networks (SNNs) leverage sparse spikes to represent information and process them in an event-driven manner.
We introduce a lightweight and hardware-friendly Quantized SNN that applies quantization to both synaptic weights and membrane potentials.
We present a new Weight-Spike Dual Regulation (WS-DR) method inspired by information entropy theory.
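As a rough, hypothetical illustration of what quantizing both synaptic weights and membrane potentials can look like in an SNN, the sketch below runs one step of a leaky integrate-and-fire (LIF) neuron with both quantities held in low-precision fixed point; the bit widths, scales, and neuron-model details are assumptions, not the Q-SNN paper's scheme.
```python
# Illustrative sketch (assumed details, not the Q-SNN method): one LIF step with
# low-precision synaptic weights and membrane potential.
import numpy as np

def quantize(x, bits=4, scale=1.0):
    """Uniform symmetric fixed-point quantization to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    return np.clip(np.round(x / scale * qmax), -qmax, qmax) * scale / qmax

def lif_step(v, spikes_in, w, tau=2.0, v_th=1.0, bits=4):
    w_q = quantize(w, bits)                      # quantized synaptic weights
    v = v + (w_q @ spikes_in - v) / tau          # leaky integration
    v = quantize(v, bits, scale=2.0)             # quantized membrane potential
    spikes_out = (v >= v_th).astype(np.float32)  # fire when threshold is crossed
    v = v * (1.0 - spikes_out)                   # reset to zero after a spike
    return v, spikes_out

rng = np.random.default_rng(0)
w = rng.normal(scale=0.5, size=(8, 16))
v = np.zeros(8)
v, s = lif_step(v, rng.integers(0, 2, 16).astype(np.float32), w)
print(s)
```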
arXiv Detail & Related papers (2024-06-19T16:23:26Z)
- Hardware-Aware DNN Compression via Diverse Pruning and Mixed-Precision Quantization [1.0235078178220354]
We propose an automated framework to compress Deep Neural Networks (DNNs) in a hardware-aware manner by jointly employing pruning and quantization.
Our framework achieves a 39% average energy reduction at a 1.7% average accuracy loss on the evaluated datasets and significantly outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2023-12-23T18:50:13Z)
- EPIM: Efficient Processing-In-Memory Accelerators based on Epitome [78.79382890789607]
We introduce the Epitome, a lightweight neural operator offering convolution-like functionality.
On the software side, we evaluate epitomes' latency and energy on PIM accelerators.
We introduce a PIM-aware layer-wise design method to enhance their hardware efficiency.
arXiv Detail & Related papers (2023-11-12T17:56:39Z)
- Low Precision Quantization-aware Training in Spiking Neural Networks with Differentiable Quantization Function [0.5046831208137847]
This work aims to bridge the gap between recent progress in quantized neural networks and spiking neural networks.
It presents an extensive study on the performance of the quantization function, represented as a linear combination of sigmoid functions.
The presented quantization function demonstrates the state-of-the-art performance on four popular benchmarks.
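The entry above describes a differentiable quantization function built as a linear combination of sigmoids; the sketch below shows one generic way such a "soft staircase" can be constructed, with the levels and temperature chosen as illustrative assumptions rather than the paper's settings.
```python
# Illustrative sketch: a differentiable staircase quantizer built from shifted
# sigmoids (levels and temperature are assumed values, not the paper's).
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def soft_quantize(w, levels=(-1.0, 0.0, 1.0), temperature=10.0):
    """Each sigmoid contributes one step between adjacent levels; as
    `temperature` grows, the curve approaches hard quantization."""
    levels = np.asarray(levels)
    steps = np.diff(levels)                   # height of each step
    edges = (levels[:-1] + levels[1:]) / 2.0  # midpoints where the steps occur
    out = np.full_like(w, levels[0])
    for h, e in zip(steps, edges):
        out = out + h * sigmoid(temperature * (w - e))
    return out

w = np.linspace(-1.5, 1.5, 7)
print(soft_quantize(w))  # approaches {-1, 0, +1} as temperature increases
```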
arXiv Detail & Related papers (2023-05-30T09:42:05Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Low-Precision Training in Logarithmic Number System using Multiplicative Weight Update [49.948082497688404]
Training large-scale deep neural networks (DNNs) currently requires a significant amount of energy, leading to serious environmental impacts.
One promising approach to reduce the energy costs is representing DNNs with low-precision numbers.
We jointly design a low-precision training framework involving a logarithmic number system (LNS) and a multiplicative weight update training method, termed LNS-Madam.
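As a loosely hedged sketch of the general idea only, not the exact LNS-Madam rule: in a logarithmic number system a weight is stored as a sign and a base-2 exponent, so a multiplicative weight update reduces to a cheap additive update on the exponent.
```python
# Minimal, hedged sketch of a multiplicative update in a log-domain weight
# representation (NOT the paper's exact LNS-Madam algorithm).
import numpy as np

def lns_encode(w, eps=1e-8):
    return np.sign(w), np.log2(np.abs(w) + eps)   # (sign, base-2 exponent)

def lns_decode(sign, exponent):
    return sign * np.exp2(exponent)

def multiplicative_step(sign, exponent, grad, lr=0.01):
    """Shrink or grow each weight's magnitude based on the gradient direction."""
    direction = np.sign(grad) * sign              # >0 means |w| should shrink
    return exponent - lr * direction              # additive step in the log2 domain

sign, expo = lns_encode(np.array([0.5, -0.25, 1.0]))
expo = multiplicative_step(sign, expo, grad=np.array([0.1, -0.2, -0.3]))
print(lns_decode(sign, expo))
```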
arXiv Detail & Related papers (2021-06-26T00:32:17Z)
- Ps and Qs: Quantization-aware pruning for efficient low latency neural network inference [56.24109486973292]
We study the interplay between pruning and quantization during the training of neural networks for ultra low latency applications.
We find that quantization-aware pruning yields more computationally efficient models than either pruning or quantization alone for our task.
arXiv Detail & Related papers (2021-02-22T19:00:05Z)
- FSpiNN: An Optimization Framework for Memory- and Energy-Efficient Spiking Neural Networks [14.916996986290902]
Spiking Neural Networks (SNNs) offer unsupervised learning capability due to the spike-timing-dependent plasticity (STDP) rule.
However, state-of-the-art SNNs require a large memory footprint to achieve high accuracy.
We propose FSpiNN, an optimization framework for obtaining memory- and energy-efficient SNNs for training and inference processing.
arXiv Detail & Related papers (2020-07-17T09:40:26Z)
- Widening and Squeezing: Towards Accurate and Efficient QNNs [125.172220129257]
Quantized neural networks (QNNs) are very attractive to industry because of their extremely low computation and storage overhead, but their performance is still worse than that of networks with full-precision parameters.
Most existing methods aim to enhance the performance of QNNs, especially binary neural networks, by exploiting more effective training techniques.
We address this problem by projecting features in original full-precision networks to high-dimensional quantization features.
arXiv Detail & Related papers (2020-02-03T04:11:13Z)