Spike time displacement based error backpropagation in convolutional
spiking neural networks
- URL: http://arxiv.org/abs/2108.13621v1
- Date: Tue, 31 Aug 2021 05:18:59 GMT
- Title: Spike time displacement based error backpropagation in convolutional
spiking neural networks
- Authors: Maryam Mirsadeghi, Majid Shalchian, Saeed Reza Kheradpisheh,
Timothée Masquelier
- Abstract summary: In this paper, we extend the STiDi-BP algorithm to employ it in deeper and convolutional architectures.
The evaluation results on the image classification task, based on two popular benchmarks, MNIST and Fashion-MNIST, confirm that this algorithm is applicable in deep SNNs.
We consider a convolutional SNN with two sets of weights: real-valued weights, which are updated in the backward pass, and their signs (binary weights), which are used in the feedforward pass.
- Score: 0.6193838300896449
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We recently proposed the STiDi-BP algorithm, which avoids backward recursive
gradient computation, for training multi-layer spiking neural networks (SNNs)
with single-spike-based temporal coding. The algorithm employs a linear
approximation to compute the derivative of the spike latency with respect to
the membrane potential and it uses spiking neurons with piecewise linear
postsynaptic potential to reduce the computational cost and the complexity of
neural processing. In this paper, we extend the STiDi-BP algorithm to deeper
and convolutional architectures. Evaluation on the image classification task
with two popular benchmarks, the MNIST and Fashion-MNIST datasets, yields
accuracies of 99.2% and 92.8% respectively, confirming that the algorithm is
applicable to deep SNNs. We also address the reduction of memory storage and
computational cost. To this end, we consider a convolutional SNN (CSNN) with
two sets of weights: real-valued weights, which are updated in the backward
pass, and their signs (binary weights), which are used in the feedforward
pass. We evaluate the binary CSNN on the MNIST and Fashion-MNIST datasets and
obtain acceptable performance with a negligible accuracy drop relative to the
real-valued weights (about 0.6% and 0.8%, respectively).
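For concreteness, below is a minimal, hedged sketch of the two mechanisms described in the abstract: a single-spike neuron with a piecewise linear postsynaptic potential whose latency gradient is approximated linearly at the threshold crossing, and a forward pass that uses only the signs of the real-valued weights. All constants (`THETA`, `TAU`, `T_MAX`), function names, and the toy update are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

# Illustrative constants (assumed values, not taken from the paper).
THETA = 1.0    # firing threshold
TAU = 20.0     # rise time of the piecewise linear PSP, in time steps
T_MAX = 100    # length of the simulation window, in time steps


def psp_kernel(t_rel):
    """Piecewise linear PSP: zero before the presynaptic spike, then a
    linear ramp that saturates at 1 after TAU time steps."""
    return np.clip(t_rel / TAU, 0.0, 1.0) * (t_rel >= 0)


def first_spike_time(in_times, weights):
    """Single-spike forward pass of one neuron: sum weighted PSPs on a time
    grid and return the first threshold crossing (T_MAX if it never fires)
    together with a local estimate of the membrane-potential slope there."""
    t_grid = np.arange(T_MAX)[:, None]                         # (T_MAX, 1)
    u = (weights * psp_kernel(t_grid - in_times)).sum(axis=1)  # u(t)
    crossings = np.nonzero(u >= THETA)[0]
    if crossings.size == 0:
        return T_MAX, 0.0
    t_f = int(crossings[0])
    slope = u[t_f] - u[t_f - 1] if t_f > 0 else u[t_f]         # du/dt at t_f
    return t_f, slope


def latency_grad(slope, eps=1e-6):
    """Linear approximation of the latency derivative: near the threshold
    crossing, d t_f / d u is taken as -1 / (du/dt)."""
    return -1.0 / max(slope, eps)


rng = np.random.default_rng(0)
w_real = rng.normal(0.0, 0.5, size=8)    # real-valued weights (updated)
w_bin = np.sign(w_real)                  # their signs, used in the forward pass
in_times = rng.integers(0, 40, size=8)   # presynaptic spike times

t_f, slope = first_spike_time(in_times, w_bin)

# Toy chain rule for one neuron, with a dummy upstream error dL/dt_f = 1:
# dL/dw_i = (dL/dt_f) * (dt_f/du) * (du/dw_i), where du/dw_i is the PSP value
# of input i at the firing time. The update is applied to the real weights.
du_dw = psp_kernel(t_f - in_times)
w_real -= 0.01 * 1.0 * latency_grad(slope) * du_dw
print("firing time:", t_f, "latency gradient:", latency_grad(slope))
```

With a piecewise linear PSP the membrane potential is itself piecewise linear, so its slope is constant between kinks and the -1/(du/dt) factor is cheap to evaluate, which is the computational advantage the abstract alludes to.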
Related papers
- FTBC: Forward Temporal Bias Correction for Optimizing ANN-SNN Conversion [16.9748086865693]
Spiking Neural Networks (SNNs) offer a promising avenue for energy-efficient computing compared with Artificial Neural Networks (ANNs).
In this work, we introduce a lightweight Forward Temporal Bias Correction (FTBC) technique, aimed at enhancing conversion accuracy without additional computational overhead.
We further propose an algorithm for finding the temporal bias only in the forward pass, thus eliminating the computational burden of backpropagation.
arXiv Detail & Related papers (2024-03-27T09:25:20Z)
- TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network Training [27.565726483503838]
We introduce Tensor Train Decomposition for Spiking Neural Networks (TT-SNN).
TT-SNN reduces model size through trainable weight decomposition, resulting in reduced storage, FLOPs, and latency.
We also propose a parallel computation scheme as an alternative to the typical sequential tensor computation; a generic tensor-train factorization sketch is given after this list.
arXiv Detail & Related papers (2024-01-15T23:08:19Z)
- MST-compression: Compressing and Accelerating Binary Neural Networks with Minimum Spanning Tree [21.15961593182111]
Binary neural networks (BNNs) have been widely adopted to reduce the computational cost and memory storage on edge-computing devices.
However, as neural networks become wider and deeper to improve accuracy and meet practical requirements, the computational burden remains a significant challenge even for their binary versions.
This paper proposes a novel method called Minimum Spanning Tree (MST) compression that learns to compress and accelerate BNNs.
arXiv Detail & Related papers (2023-08-26T02:42:12Z)
- Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z)
- Bayesian Neural Network Language Modeling for Speech Recognition [59.681758762712754]
State-of-the-art neural network language models (NNLMs) represented by long short-term memory recurrent neural networks (LSTM-RNNs) and Transformers are becoming highly complex.
In this paper, an overarching full Bayesian learning framework is proposed to account for the underlying uncertainty in LSTM-RNN and Transformer LMs.
arXiv Detail & Related papers (2022-08-28T17:50:19Z)
- A temporally and spatially local spike-based backpropagation algorithm to enable training in hardware [0.0]
Spiking Neural Networks (SNNs) have emerged as a hardware efficient architecture for classification tasks.
There have been several attempts to adopt the powerful backpropagation (BP) technique used in non-spiking artificial neural networks (ANNs) for training SNNs.
arXiv Detail & Related papers (2022-07-20T08:57:53Z)
- Low-bit Quantization of Recurrent Neural Network Language Models Using Alternating Direction Methods of Multipliers [67.688697838109]
This paper presents a novel method to train quantized RNNLMs from scratch using the alternating direction method of multipliers (ADMM).
Experiments on two tasks suggest the proposed ADMM quantization achieved a model size compression factor of up to 31 times over the full precision baseline RNNLMs.
arXiv Detail & Related papers (2021-11-29T09:30:06Z)
- Training Feedback Spiking Neural Networks by Implicit Differentiation on the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation [10.972663738092063]
Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes).
We present a computationally-efficient training technique for deep SNNs.
We achieve top-1 accuracy of 65.19% for ImageNet dataset on SNN with 250 time steps, which is 10X faster compared to converted SNNs with similar accuracy.
arXiv Detail & Related papers (2020-05-04T19:30:43Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
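As referenced in the TT-SNN entry above, weight decomposition is the mechanism behind that paper's storage reduction. The sketch below illustrates a generic tensor-train factorization of a small convolutional weight tensor via sequential truncated SVDs (the classic TT-SVD procedure); it is only an illustration of the idea, not TT-SNN's trainable parameterization, and the tensor shape and ranks are arbitrary assumptions.

```python
import numpy as np

def tt_svd(tensor, ranks):
    """Factor a d-way tensor into TT cores with prescribed maximum ranks
    using sequential truncated SVDs (classic TT-SVD)."""
    dims = tensor.shape
    cores, r_prev = [], 1
    unfolding = tensor.reshape(dims[0], -1)
    for k in range(len(dims) - 1):
        u, s, vt = np.linalg.svd(unfolding, full_matrices=False)
        r = min(ranks[k], s.size)                         # truncate to rank r
        cores.append(u[:, :r].reshape(r_prev, dims[k], r))
        unfolding = (s[:r, None] * vt[:r]).reshape(r * dims[k + 1], -1)
        r_prev = r
    cores.append(unfolding.reshape(r_prev, dims[-1], 1))
    return cores


def tt_reconstruct(cores):
    """Contract the TT cores back into a full tensor (to check the error)."""
    full = cores[0]
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full.squeeze(axis=(0, -1))


# Toy example: a 16x16x3x3 "conv kernel" tensor with assumed TT ranks (8, 8, 3).
rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16, 3, 3))
cores = tt_svd(w, ranks=(8, 8, 3))
compression = sum(c.size for c in cores) / w.size
rel_err = np.linalg.norm(tt_reconstruct(cores) - w) / np.linalg.norm(w)
print(f"stored parameters: {compression:.2%} of the full tensor, "
      f"relative reconstruction error: {rel_err:.3f}")
# Random tensors compress poorly; trained weights are typically far more
# compressible, which is what makes decompositions of this kind attractive.
```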