STOP: Spatiotemporal Orthogonal Propagation for Weight-Threshold-Leakage Synergistic Training of Deep Spiking Neural Networks
- URL: http://arxiv.org/abs/2411.11082v2
- Date: Wed, 27 Nov 2024 15:49:49 GMT
- Title: STOP: Spatiotemporal Orthogonal Propagation for Weight-Threshold-Leakage Synergistic Training of Deep Spiking Neural Networks
- Authors: Haoran Gao, Xichuan Zhou, Yingcheng Lin, Min Tian, Liyuan Liu, Cong Shi
- Abstract summary: Brain-inspired deep spiking neural network (SNN) models still lack efficient and high-accuracy learning algorithms, which prevents their practical edge deployment.
Our algorithm enables fully synergistic learning of synaptic weights as well as firing thresholds and leakage factors in spiking neurons to improve SNN accuracy.
Characteristically, spatially-backward neuronal errors and temporally-forward traces propagate orthogonally to and independently of each other, substantially reducing computational complexity.
- Score: 11.85044871205734
- Abstract: The prevailing of artificial intelligence-of-things calls for higher energy-efficient edge computing paradigms, such as neuromorphic agents leveraging brain-inspired spiking neural network (SNN) models based on spatiotemporally sparse binary spikes. However, the lack of efficient and high-accuracy deep SNN learning algorithms prevents them from practical edge deployments at a strictly bounded cost. In this paper, we propose the spatiotemporal orthogonal propagation (STOP) algorithm to tackle this challenge. Our algorithm enables fully synergistic learning of synaptic weights as well as firing thresholds and leakage factors in spiking neurons to improve SNN accuracy, in a unified temporally-forward trace-based framework to mitigate the huge memory requirement for storing neural states across all time-steps in the forward pass. Characteristically, the spatially-backward neuronal errors and temporally-forward traces propagate orthogonally to and independently of each other, substantially reducing computational complexity. Our STOP algorithm obtained high recognition accuracies of 94.84%, 74.92%, 98.26% and 77.10% on the CIFAR-10, CIFAR-100, DVS-Gesture and DVS-CIFAR10 datasets with adequate deep convolutional SNNs of VGG-11 or ResNet-18 structures. Compared with other deep SNN training algorithms, our method is more plausible for edge intelligent scenarios where resources are limited but high-accuracy in-situ learning is desired.
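As a rough, hedged illustration of the mechanism the abstract describes, the sketch below shows a discrete-time LIF layer whose firing threshold and leakage factor are trainable alongside the weights, and which maintains a temporally-forward input trace online rather than storing all per-time-step states. This is our reading of the idea, not the authors' implementation, and every name in it (`LIFLayer`, `trace_decay`, ...) is illustrative.

```python
import numpy as np

class LIFLayer:
    """Discrete-time leaky integrate-and-fire layer with a trainable
    firing threshold and leakage factor (illustrative sketch only)."""

    def __init__(self, n_in, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        self.v_th = np.ones(n_out)       # trainable firing threshold
        self.leak = np.full(n_out, 0.9)  # trainable leakage factor
        self.v = np.zeros(n_out)         # membrane potential (state)
        self.trace = np.zeros(n_in)      # temporally-forward input trace

    def step(self, spikes_in, trace_decay=0.9):
        # The trace is updated online, step by step, so no stack of past
        # neural states has to be kept for a backward-through-time pass;
        # gradients for w, v_th and leak can later be assembled by pairing
        # this trace with a spatially-backward error signal.
        self.trace = trace_decay * self.trace + spikes_in
        self.v = self.leak * self.v + spikes_in @ self.w
        spikes_out = (self.v >= self.v_th).astype(float)
        self.v = self.v * (1.0 - spikes_out)  # hard reset after a spike
        return spikes_out
```

On this reading, the orthogonality claim means the error signal travels spatially (layer to layer within one time step) while the trace travels temporally (step to step within one layer), so the two propagations never need to wait for each other.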
Related papers
- Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z)
- FTBC: Forward Temporal Bias Correction for Optimizing ANN-SNN Conversion [16.9748086865693]
Spiking Neural Networks (SNNs) offer a promising avenue for energy-efficient computing compared with Artificial Neural Networks (ANNs).
In this work, we introduce a lightweight Forward Temporal Bias Correction (FTBC) technique, aimed at enhancing conversion accuracy without additional computational overhead.
We further propose an algorithm for finding the temporal bias only in the forward pass, thus eliminating the computational burden of backpropagation.
arXiv Detail & Related papers (2024-03-27T09:25:20Z)
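A loose sketch of the forward-only calibration idea in FTBC as we read it: estimate, per time step, the gap between the target ANN activation and the SNN's running average response, and use it as a bias, with no backward pass. The function name and the averaging rule below are our assumptions, not the paper's exact procedure.

```python
import numpy as np

def calibrate_temporal_bias(ann_act, snn_outputs):
    """Forward-only estimate of a per-time-step bias nudging the SNN's
    running average output toward the ANN activation (assumed procedure)."""
    biases, running = [], np.zeros_like(ann_act)
    for t, out_t in enumerate(snn_outputs):
        running += out_t
        # The gap between the ANN target and the SNN average so far
        # becomes the bias injected at time step t.
        biases.append(ann_act - running / (t + 1))
    return biases
```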
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z)
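To make "heterogeneous coding schemes" concrete, here are two standard encoders such a hybrid design could mix; these are generic textbook schemes, not the paper's specific coding.

```python
import numpy as np

def rate_code(x, T, seed=0):
    """Rate coding: intensity array x in [0, 1] sets the Bernoulli
    firing probability at each of T time steps."""
    rng = np.random.default_rng(seed)
    return (rng.random((T,) + x.shape) < x).astype(float)

def latency_code(x, T):
    """Temporal (time-to-first-spike) coding: stronger inputs fire
    earlier, with exactly one spike per input within T steps."""
    t_fire = np.round((1.0 - x) * (T - 1)).astype(int)
    out = np.zeros((T,) + x.shape)
    np.put_along_axis(out, t_fire[None, ...], 1.0, axis=0)
    return out
```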
- PC-SNN: Supervised Learning with Local Hebbian Synaptic Plasticity based on Predictive Coding in Spiking Neural Networks [1.6172800007896282]
We propose a novel learning algorithm inspired by predictive coding theory.
We show that it can perform supervised learning fully autonomously, and as successfully as backpropagation.
This method achieves favorable performance compared to state-of-the-art multi-layer SNNs.
arXiv Detail & Related papers (2022-11-24T09:56:02Z)
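A minimal sketch of the kind of local plasticity such a scheme relies on: each synapse is updated from its presynaptic trace and a locally available prediction error, with no global backpropagated gradient. The rule below is a generic stand-in, not PC-SNN's exact update.

```python
import numpy as np

def local_update(w, pre_trace, post_error, lr=1e-3):
    """Hebbian-style local step: dW[i, j] depends only on presynaptic
    trace i and the prediction error at postsynaptic unit j."""
    return w + lr * np.outer(pre_trace, post_error)
```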
- Ultra-low Latency Adaptive Local Binary Spiking Neural Network with Accuracy Loss Estimator [4.554628904670269]
We propose an ultra-low latency adaptive local binary spiking neural network (ALBSNN) with accuracy loss estimators.
Experimental results show that this method can reduce storage space by more than 20% without losing network accuracy.
arXiv Detail & Related papers (2022-07-31T09:03:57Z)
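For context on where such storage savings can come from, this is the classic XNOR-Net-style weight binarization, shown as a generic example rather than ALBSNN's exact scheme: each full-precision weight row collapses to signs plus one scaling factor.

```python
import numpy as np

def binarize(W):
    """Approximate W by alpha * sign(W); the per-row scale
    alpha = mean(|W|) minimizes the L2 gap to the binary weights."""
    alpha = np.mean(np.abs(W), axis=1, keepdims=True)
    return alpha * np.sign(W)
```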
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
Training SNNs efficiently is challenging due to the non-differentiability of spike generation.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
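For a sense of how the non-differentiability is usually circumvented, below is the common surrogate-gradient trick: keep the hard spike in the forward pass and substitute a smooth window for its derivative in the backward pass. This is the generic technique, not DSR itself, which instead differentiates through a rate-like spike representation.

```python
import numpy as np

def spike(v, v_th=1.0):
    """Forward pass: hard, non-differentiable thresholding."""
    return (v >= v_th).astype(float)

def spike_grad(v, v_th=1.0, width=1.0):
    """Backward pass: triangular surrogate derivative around the
    threshold, used in place of the Heaviside's zero/undefined slope."""
    return np.maximum(0.0, 1.0 - np.abs(v - v_th) / width) / width
```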
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
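The interval-bound-propagation baseline referenced above reduces to one simple step per layer; here is a minimal sketch for an affine layer followed by ReLU.

```python
import numpy as np

def ibp_step(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> relu(W @ x + b):
    positive weights pull from the matching bound, negative weights
    from the opposite one, and the monotone ReLU maps bounds to bounds."""
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    out_lo = W_pos @ lo + W_neg @ hi + b
    out_hi = W_pos @ hi + W_neg @ lo + b
    return np.maximum(out_lo, 0.0), np.maximum(out_hi, 0.0)
```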
- Advancing Deep Residual Learning by Solving the Crux of Degradation in Spiking Neural Networks [21.26300397341615]
Residual learning and shortcuts have proven to be an important approach for training deep neural networks.
This paper proposes a novel residual block for SNNs, which is able to significantly extend the depth of directly trained SNNs.
arXiv Detail & Related papers (2021-12-09T06:29:00Z)
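One common way spiking residual blocks sidestep degradation is to merge the identity shortcut with the block's output spikes element-wise; the sketch below is one such generic design, not necessarily the block this paper proposes.

```python
import numpy as np

def lif_step(v, drive, leak=0.9, v_th=1.0):
    """One LIF update; returns (new membrane potential, output spikes)."""
    v = leak * v + drive
    s = (v >= v_th).astype(float)
    return v * (1.0 - s), s

def residual_block(x_spikes, W1, W2, v1, v2):
    """Two LIF sub-layers plus an identity shortcut, merged with a
    saturating add so that outputs stay binary (an OR-like combination)."""
    v1, s1 = lif_step(v1, x_spikes @ W1)
    v2, s2 = lif_step(v2, s1 @ W2)
    return np.minimum(s2 + x_spikes, 1.0), v1, v2
```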
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
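For reference, here is the classic rate-based normalization step that ANN-to-SNN conversion pipelines build on, shown generically rather than as the tandem-learning procedure itself: rescale each layer's weights so an integrate-and-fire threshold of 1.0 matches the layer's near-maximum ReLU activation.

```python
import numpy as np

def normalize_for_conversion(weights, activations, pct=99.9):
    """Layer-wise weight re-balancing: divide by this layer's activation
    scale and multiply by the previous layer's, keeping firing rates
    within the IF neurons' dynamic range."""
    prev_scale, balanced = 1.0, []
    for W, acts in zip(weights, activations):
        scale = float(np.percentile(acts, pct))  # robust max activation
        balanced.append(W * prev_scale / scale)
        prev_scale = scale
    return balanced
```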
- Convolutional Spiking Neural Networks for Spatio-Temporal Feature Extraction [3.9898522485253256]
Spiking neural networks (SNNs) can be used in low-power and embedded systems.
However, temporal coding in layers of convolutional neural networks and other types of SNNs has yet to be studied.
We present a new deep spiking architecture to tackle real-world problems.
arXiv Detail & Related papers (2020-03-27T11:58:51Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)