Sparse Spiking Neural Network: Exploiting Heterogeneity in Timescales
for Pruning Recurrent SNN
- URL: http://arxiv.org/abs/2403.03409v1
- Date: Wed, 6 Mar 2024 02:36:15 GMT
- Title: Sparse Spiking Neural Network: Exploiting Heterogeneity in Timescales
for Pruning Recurrent SNN
- Authors: Biswadeep Chakraborty, Beomseok Kang, Harshit Kumar and Saibal
Mukhopadhyay
- Abstract summary: Recurrent Spiking Neural Networks (RSNNs) have emerged as a computationally efficient and brain-inspired learning model.
Traditionally, sparse SNNs are obtained by first training a dense and complex SNN for a target task.
This paper presents a task-agnostic methodology for designing sparse RSNNs by pruning a large, randomly initialized model.
- Score: 19.551319330414085
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recurrent Spiking Neural Networks (RSNNs) have emerged as a computationally
efficient and brain-inspired learning model. The design of sparse RSNNs with
fewer neurons and synapses helps reduce the computational complexity of RSNNs.
Traditionally, sparse SNNs are obtained by first training a dense and complex
SNN for a target task, and, then, pruning neurons with low activity
(activity-based pruning) while maintaining task performance. In contrast, this
paper presents a task-agnostic methodology for designing sparse RSNNs by
pruning a large randomly initialized model. We introduce a novel Lyapunov Noise
Pruning (LNP) algorithm that uses graph sparsification methods and utilizes
Lyapunov exponents to design a stable sparse RSNN from a randomly initialized
RSNN. We show that the LNP can leverage diversity in neuronal timescales to
design a sparse Heterogeneous RSNN (HRSNN). Further, we show that the same
sparse HRSNN model can be trained for different tasks, such as image
classification and temporal prediction. We experimentally show that, in spite
of being task-agnostic, LNP increases computational efficiency (fewer neurons
and synapses) and prediction performance of RSNNs compared to traditional
activity-based pruning of trained dense models.
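The abstract describes LNP only at a high level. Below is a minimal, illustrative sketch of a stability-aware pruning loop in that spirit: it estimates the largest Lyapunov exponent of a simple rate-based recurrent model with heterogeneous per-neuron timescales, and accepts a sparsification step only while the estimate stays below zero. This is not the authors' LNP implementation; the rate dynamics, the magnitude-based pruning rule, and the names `estimate_lyapunov` and `prune_step` are assumptions made purely for illustration.

```python
# Hypothetical sketch of Lyapunov-guided pruning of a random recurrent network.
# NOT the paper's LNP algorithm: dynamics, pruning rule, and thresholds are assumed.
import numpy as np

rng = np.random.default_rng(0)
N = 200                                   # number of recurrent units
tau = rng.uniform(5.0, 50.0, size=N)      # heterogeneous timescales (assumed range)
W = rng.normal(0.0, 1.0 / np.sqrt(N), size=(N, N))  # dense random recurrent weights

def step(x, W, dt=1.0):
    """One Euler step of a simple rate model with per-neuron time constants."""
    return x + (dt / tau) * (-x + np.tanh(W @ x))

def estimate_lyapunov(W, T=500, eps=1e-8):
    """Crude largest-Lyapunov-exponent estimate from two nearby trajectories."""
    x = rng.normal(size=N)
    delta = rng.normal(size=N)
    y = x + eps * delta / np.linalg.norm(delta)   # perturbation of norm eps
    acc = 0.0
    for _ in range(T):
        x, y = step(x, W), step(y, W)
        d = np.linalg.norm(y - x)
        acc += np.log(d / eps)                    # per-step expansion rate
        y = x + (eps / d) * (y - x)               # renormalise the perturbation
    return acc / T

def prune_step(W, frac=0.05):
    """Zero out the smallest-magnitude fraction of surviving synapses."""
    W = W.copy()
    alive = np.flatnonzero(W)
    k = int(frac * alive.size)
    if k == 0:
        return W
    idx = alive[np.argsort(np.abs(W.flat[alive]))[:k]]
    W.flat[idx] = 0.0
    return W

# Prune while the estimated exponent stays below a stability margin (assumed 0.0).
while np.count_nonzero(W) > 0.2 * N * N:
    W_new = prune_step(W)
    if estimate_lyapunov(W_new) < 0.0:
        W = W_new
    else:
        break
print("surviving synapses:", np.count_nonzero(W))
```

In the paper's actual method the sparsification is graph-theoretic and noise-based rather than magnitude-based; the sketch is only meant to convey the "prune while keeping the dynamics stable" loop that the abstract describes.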
Related papers
- Optimal ANN-SNN Conversion with Group Neurons [39.14228133571838]
Spiking Neural Networks (SNNs) have emerged as a promising third generation of neural networks.
The lack of effective learning algorithms remains a challenge for SNNs.
We introduce a novel type of neuron called Group Neurons (GNs).
arXiv Detail & Related papers (2024-02-29T11:41:12Z) - High-performance deep spiking neural networks with 0.3 spikes per neuron [9.01407445068455]
It is harder to train biologically inspired spiking neural networks (SNNs) than artificial neural networks (ANNs).
We show that deep SNN models can be trained to achieve the same performance as ANNs.
Our SNN accomplishes high-performance classification with less than 0.3 spikes per neuron, lending itself to an energy-efficient implementation.
arXiv Detail & Related papers (2023-06-14T21:01:35Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking
Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are built on homogeneous neurons that use a uniform neural coding scheme for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Examining the Robustness of Spiking Neural Networks on Non-ideal
Memristive Crossbars [4.184276171116354]
Spiking Neural Networks (SNNs) have emerged as a low-power alternative to Artificial Neural Networks (ANNs).
We study the effect of crossbar non-idealities and intrinsic stochasticity on the performance of SNNs.
arXiv Detail & Related papers (2022-06-20T07:07:41Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Spiking Neural Networks with Improved Inherent Recurrence Dynamics for
Sequential Learning [6.417011237981518]
Spiking neural networks (SNNs) with leaky integrate-and-fire (LIF) neurons can be operated in an event-driven manner.
We show that SNNs can be trained for sequential tasks and propose modifications to a network of LIF neurons.
We then develop a training scheme to train the proposed SNNs with improved inherent recurrence dynamics.
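For readers unfamiliar with the neuron model named in this entry, here is a minimal sketch of a discrete-time leaky integrate-and-fire update; the decay factor, threshold, and subtractive reset below are illustrative defaults, not the cited paper's settings.

```python
# Minimal discrete-time LIF neuron update (illustrative defaults, not the paper's).
import numpy as np

def lif_step(v, i_in, beta=0.9, v_th=1.0):
    """Leak the membrane potential, add input current, spike, and reset."""
    v = beta * v + i_in                    # leaky integration
    spikes = (v >= v_th).astype(float)     # emit a spike where threshold is crossed
    v = v - spikes * v_th                  # soft reset: subtract threshold on spike
    return v, spikes

# Drive 4 neurons with random input current for 10 steps and print their spikes.
rng = np.random.default_rng(0)
v = np.zeros(4)
for t in range(10):
    v, s = lif_step(v, rng.uniform(0.0, 0.5, size=4))
    print(t, s)
```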
arXiv Detail & Related papers (2021-09-04T17:13:28Z) - Skip-Connected Self-Recurrent Spiking Neural Networks with Joint
Intrinsic Parameter and Synaptic Weight Training [14.992756670960008]
We propose a new type of RSNN called Skip-Connected Self-Recurrent SNNs (ScSr-SNNs)
ScSr-SNNs can boost performance by up to 2.55% compared with other types of RSNNs trained by state-of-the-art BP methods.
arXiv Detail & Related papers (2020-10-23T22:27:13Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power, event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)