Sparse Spiking Gradient Descent
- URL: http://arxiv.org/abs/2105.08810v1
- Date: Tue, 18 May 2021 20:00:55 GMT
- Title: Sparse Spiking Gradient Descent
- Authors: Nicolas Perez-Nieves and Dan F.M. Goodman
- Abstract summary: We present the first sparse SNN backpropagation algorithm, which achieves accuracy equal to or better than current state-of-the-art methods.
We show the effectiveness of our method on real datasets of varying complexity.
- Score: 2.741266294612776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: There is an increasing interest in emulating Spiking Neural Networks (SNNs)
on neuromorphic computing devices due to their low energy consumption. Recent
advances have allowed training SNNs to a point where they start to compete with
traditional Artificial Neural Networks (ANNs) in terms of accuracy, while at
the same time being energy efficient when run on neuromorphic hardware.
However, the process of training SNNs is still based on dense tensor operations
originally developed for ANNs which do not leverage the spatiotemporally sparse
nature of SNNs. We present here the first sparse SNN backpropagation algorithm,
which achieves accuracy equal to or better than current state-of-the-art methods
while being significantly faster and more memory efficient. We show the
effectiveness of our method on real datasets of varying complexity
(Fashion-MNIST, Neuromorphic-MNIST and Spiking Heidelberg Digits), achieving a
speedup in the backward pass of up to 70x while being 40% more memory efficient,
all without losing accuracy.
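The mechanism behind these savings can be sketched briefly. If the surrogate gradient of the spike nonlinearity is given finite support, i.e. it is exactly zero unless the membrane potential lies within a band around the firing threshold, then the backward pass only has to touch the small fraction of neurons that are active at each time step. The PyTorch snippet below is a minimal sketch of this idea, not the authors' implementation; the names `SparseSpike` and `lif_layer`, the boxcar surrogate, the `band` width, and the soft reset are illustrative assumptions.

```python
import torch

class SparseSpike(torch.autograd.Function):
    """Heaviside spike with a boxcar surrogate gradient that is exactly
    zero outside a band around the threshold, so backward work is sparse."""
    @staticmethod
    def forward(ctx, v, threshold, band):
        active = (v - threshold).abs() < band  # only these need gradients
        ctx.save_for_backward(active)
        ctx.band = band
        return (v >= threshold).float()        # binary spike output

    @staticmethod
    def backward(ctx, grad_out):
        (active,) = ctx.saved_tensors
        # For a sparsely spiking network this mask is mostly zero, so a
        # dedicated sparse kernel could skip inactive neurons entirely.
        surrogate = active.float() / (2.0 * ctx.band)
        return grad_out * surrogate, None, None

def lif_layer(x, w, beta=0.9, threshold=1.0, band=0.5):
    """Leaky integrate-and-fire layer unrolled over time.
    x: (T, batch, n_in) input currents; w: (n_in, n_out) weights."""
    v = torch.zeros(x.shape[1], w.shape[1])
    out = []
    for t in range(x.shape[0]):
        v = beta * v + x[t] @ w               # leaky membrane integration
        s = SparseSpike.apply(v, threshold, band)
        v = v - s * threshold                 # soft reset after a spike
        out.append(s)
    return torch.stack(out)                   # (T, batch, n_out) spikes
```

In this sketch the gradient tensors are spatiotemporally sparse by construction; realizing the reported speedups additionally requires kernels that exploit that sparsity instead of dense tensor operations.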
Related papers
- Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks [50.32980443749865]
Spiking neural networks (SNNs) have garnered significant attention for their low power consumption and high biological plausibility.
Current SNNs struggle to balance accuracy and latency on neuromorphic datasets.
We propose the Hybrid Step-wise Distillation (HSD) method, tailored for neuromorphic datasets.
arXiv Detail & Related papers (2024-09-19T06:52:34Z) - CLIF: Complementary Leaky Integrate-and-Fire Neuron for Spiking Neural Networks [5.587069105667678]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
It remains a challenge to train SNNs due to their non-differentiable spiking mechanism.
We propose the Complementary Leaky Integrate-and-Fire (CLIF) neuron to facilitate backpropagation of temporal gradients while keeping binary outputs.
arXiv Detail & Related papers (2024-02-07T08:51:57Z) - High-performance deep spiking neural networks with 0.3 spikes per neuron [9.01407445068455]
It is harder to train biologically-inspired spiking neural networks (SNNs) than artificial neural networks (ANNs).
We show that deep SNN models can be trained to exactly the same performance as ANNs.
Our SNN accomplishes high-performance classification with fewer than 0.3 spikes per neuron, lending itself to an energy-efficient implementation.
arXiv Detail & Related papers (2023-06-14T21:01:35Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking
Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Accelerating spiking neural network training [1.6114012813668934]
Spiking neural networks (SNNs) are a type of artificial network inspired by the use of action potentials in the brain.
We propose a new technique for directly training single-spike-per-neuron SNNs which eliminates all sequential computation and relies exclusively on vectorised operations.
Our proposed solution manages to solve certain tasks with over a 95.68% reduction in spike counts relative to a conventionally trained SNN, which could significantly reduce energy requirements when deployed on neuromorphic computers.
arXiv Detail & Related papers (2022-05-30T17:48:14Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Going Deeper With Directly-Trained Larger Spiking Neural Networks [20.40894876501739]
Spiking neural networks (SNNs) are promising for bio-plausible coding of spatio-temporal information and event-driven signal processing.
However, the unique working mode of SNNs makes them more difficult to train than traditional networks.
We propose a threshold-dependent batch normalization (tdBN) method based on the emerging spatio-temporal backpropagation (STBP) framework.
arXiv Detail & Related papers (2020-10-29T07:15:52Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference
to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems (a minimal TTFS encoding sketch follows this list).
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
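Several of the entries above turn on how analog inputs are encoded as spikes; the "You Only Spike Once" paper, for instance, works with time-to-first-spike (TTFS) encoding, in which each input fires exactly once and stronger inputs fire earlier. The snippet below is a minimal, hypothetical sketch of such an encoder; the function name `ttfs_encode`, the `num_steps` parameter, and the linear intensity-to-latency mapping are illustrative assumptions rather than any paper's actual scheme.

```python
import torch

def ttfs_encode(x, num_steps=32):
    """Time-to-first-spike encoding: each input emits exactly one spike,
    and larger intensities fire earlier.

    x: (batch, n) tensor of intensities in [0, 1].
    Returns a (num_steps, batch, n) binary spike train."""
    # Linear latency code: intensity 1.0 fires at step 0 and intensity
    # 0.0 at the last step; real encoders may use other mappings.
    fire_step = ((1.0 - x) * (num_steps - 1)).round().long()
    spikes = torch.zeros(num_steps, *x.shape)
    spikes.scatter_(0, fire_step.unsqueeze(0), 1.0)
    return spikes

# Example: a strong input (0.9) spikes earlier than a weak one (0.1).
train = ttfs_encode(torch.tensor([[0.9, 0.1]]), num_steps=8)
```

Since every neuron spikes at most once under TTFS, downstream layers perform very few accumulations, which is the property such systems leverage for energy-efficient inference.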
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.