Efficiently Training Time-to-First-Spike Spiking Neural Networks from Scratch
- URL: http://arxiv.org/abs/2410.23619v2
- Date: Mon, 10 Mar 2025 05:08:32 GMT
- Title: Efficiently Training Time-to-First-Spike Spiking Neural Networks from Scratch
- Authors: Kaiwei Che, Wei Fang, Zhengyu Ma, Yifan Huang, Peng Xue, Li Yuan, Timothée Masquelier, Yonghong Tian
- Abstract summary: Spiking Neural Networks (SNNs) are well-suited for energy-efficient neuromorphic hardware. Time-to-First-Spike (TTFS) coding, which uses a single spike per neuron, offers extreme sparsity and energy efficiency but suffers from unstable training and low accuracy due to its sparse firing. We propose a training framework incorporating parameter initialization, training normalization, temporal output decoding, and pooling layer re-evaluation. Experiments show the framework stabilizes and accelerates training, reduces latency, and achieves state-of-the-art accuracy for TTFS SNNs on MNIST, Fashion-MNIST, CIFAR10, and DVS Gesture.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking Neural Networks (SNNs), with their event-driven and biologically inspired operation, are well-suited for energy-efficient neuromorphic hardware. Neural coding, critical to SNNs, determines how information is represented via spikes. Time-to-First-Spike (TTFS) coding, which uses a single spike per neuron, offers extreme sparsity and energy efficiency but suffers from unstable training and low accuracy due to its sparse firing. To address these challenges, we propose a training framework incorporating parameter initialization, training normalization, temporal output decoding, and pooling layer re-evaluation. The proposed parameter initialization and training normalization mitigate signal diminishing and gradient vanishing to stabilize training. The output decoding method aggregates temporal spikes to encourage earlier firing, thereby reducing the latency. The re-evaluation of the pooling layer indicates that average-pooling keeps the single-spike characteristic and that max-pooling should be avoided. Experiments show the framework stabilizes and accelerates training, reduces latency, and achieves state-of-the-art accuracy for TTFS SNNs on MNIST (99.48%), Fashion-MNIST (92.90%), CIFAR10 (90.56%), and DVS Gesture (95.83%).
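To make the temporal output decoding concrete, here is a minimal PyTorch sketch that aggregates output-layer spike trains into class logits, giving larger weights to earlier time steps; the linearly decaying weights and the `ttfs_decode` name are illustrative assumptions, not the paper's exact formulation.

```python
import torch

def ttfs_decode(spikes: torch.Tensor) -> torch.Tensor:
    """Aggregate output spikes over time into class logits.

    spikes: binary tensor of shape (T, batch, num_classes); with TTFS coding each
    output neuron fires at most once over the T simulation steps.
    Earlier time steps receive larger weights, so the decoded logits reward early firing.
    """
    T = spikes.shape[0]
    weights = torch.linspace(1.0, 1.0 / T, T)            # weight 1.0 at t=0, 1/T at t=T-1
    return (weights.view(T, 1, 1) * spikes).sum(dim=0)   # shape (batch, num_classes)
```

Training a cross-entropy loss on logits of this kind pushes the correct output neuron to fire earlier, which is one way an output decoding can reduce latency in the spirit of the abstract above.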
Related papers
- Efficient Event-based Delay Learning in Spiking Neural Networks [0.1350479308585481]
Spiking Neural Networks (SNNs) are attracting increased attention as an energy-efficient alternative to traditional Neural Networks.
We propose a novel event-based training method for SNNs, grounded in the EventProp formalism.
We show that our approach uses less than half the memory of the current state-of-the-art delay-learning method and is up to 26x faster.
arXiv Detail & Related papers (2025-01-13T13:44:34Z) - TT-SNN: Tensor Train Decomposition for Efficient Spiking Neural Network
Training [27.565726483503838]
We introduce Tensor Train Decomposition for Spiking Neural Networks (TT-SNN).
TT-SNN reduces model size through trainable weight decomposition, resulting in reduced storage, FLOPs, and latency.
We also propose a parallel computation scheme as an alternative to the typical sequential tensor computation.
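As a rough illustration of trainable weight decomposition of this kind (not TT-SNN's actual factorization, whose ranks and core shapes are set in the paper), the sketch below stores a fully connected weight as two small cores and contracts them on the fly.

```python
import torch
import torch.nn as nn

class TwoCoreLinear(nn.Module):
    """Sketch of a tensor-train-style linear layer: the (o1*o2) x (i1*i2) weight is
    stored as two small trainable cores instead of one dense matrix."""
    def __init__(self, o1, o2, i1, i2, rank):
        super().__init__()
        self.core1 = nn.Parameter(torch.randn(o1, i1, rank) * 0.1)
        self.core2 = nn.Parameter(torch.randn(rank, o2, i2) * 0.1)
        self.out_in = (o1 * o2, i1 * i2)

    def forward(self, x):
        # Rebuild W[o1, o2, i1, i2] from the two cores, then apply it as a dense matrix.
        w = torch.einsum('air,rbj->abij', self.core1, self.core2).reshape(self.out_in)
        return x @ w.t()

# A 64x256 weight (16,384 dense parameters) stored as ~1,024 core parameters:
layer = TwoCoreLinear(o1=8, o2=8, i1=16, i2=16, rank=4)
```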
arXiv Detail & Related papers (2024-01-15T23:08:19Z) - LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks
with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
arXiv Detail & Related papers (2023-10-23T14:26:16Z) - Towards Memory- and Time-Efficient Backpropagation for Training Spiking
Neural Networks [70.75043144299168]
Spiking Neural Networks (SNNs) are promising energy-efficient models for neuromorphic computing.
We propose the Spatial Learning Through Time (SLTT) method that can achieve high performance while greatly improving training efficiency.
Our method achieves state-of-the-art accuracy on ImageNet, while the memory cost and training time are reduced by more than 70% and 50%, respectively, compared with BPTT.
arXiv Detail & Related papers (2023-02-28T05:01:01Z) - SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural
Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z) - Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z) - Learning in Feedback-driven Recurrent Spiking Neural Networks using
full-FORCE Training [4.124948554183487]
We propose a supervised training procedure for RSNNs, where a second network is introduced only during the training.
The proposed training procedure consists of generating targets for both recurrent and readout layers.
We demonstrate the improved performance and noise robustness of the proposed full-FORCE training procedure when modeling 8 dynamical systems.
arXiv Detail & Related papers (2022-05-26T19:01:19Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Training Energy-Efficient Deep Spiking Neural Networks with
Time-to-First-Spike Coding [29.131030799324844]
Spiking neural networks (SNNs) mimic the operations in the human brain.
The energy consumption of deep neural networks (DNNs) has become a serious problem in deep learning.
This paper presents training methods for energy-efficient deep SNNs with TTFS coding.
arXiv Detail & Related papers (2021-06-04T16:02:27Z) - Gradient Descent on Neural Networks Typically Occurs at the Edge of
Stability [94.4070247697549]
Full-batch gradient descent on neural network training objectives operates in a regime we call the Edge of Stability.
In this regime, the maximum eigenvalue of the training loss Hessian hovers just above the numerical value $2/\text{(step size)}$, and the training loss behaves non-monotonically over short timescales, yet consistently decreases over long timescales.
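For readers who want to observe this on their own models, the sketch below estimates the largest Hessian eigenvalue (the sharpness) by power iteration on Hessian-vector products, to be compared against $2/\text{(step size)}$; the function name is illustrative, and the estimate uses a single batch rather than the full-batch quantity studied in the paper.

```python
import torch

def sharpness(model, loss_fn, x, y, iters=30):
    """Estimate the largest eigenvalue of the training-loss Hessian via power
    iteration on Hessian-vector products (double backprop), on one batch."""
    params = [p for p in model.parameters() if p.requires_grad]
    loss = loss_fn(model(x), y)
    grads = torch.autograd.grad(loss, params, create_graph=True)

    v = [torch.randn_like(p) for p in params]
    vnorm = torch.sqrt(sum((u ** 2).sum() for u in v))
    v = [u / vnorm for u in v]

    eig = 0.0
    for _ in range(iters):
        gv = sum((g * u).sum() for g, u in zip(grads, v))          # <grad, v>
        hv = torch.autograd.grad(gv, params, retain_graph=True)    # H @ v
        eig = sum((h * u).sum() for h, u in zip(hv, v)).item()     # Rayleigh quotient
        hnorm = torch.sqrt(sum((h ** 2).sum() for h in hv))
        v = [h / (hnorm + 1e-12) for h in hv]
    return eig

# Edge of Stability: during full-batch training this value hovers just above 2 / lr.
```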
arXiv Detail & Related papers (2021-02-26T22:08:19Z) - Revisiting Batch Normalization for Training Low-latency Deep Spiking
Neural Networks from Scratch [5.511606249429581]
Spiking Neural Networks (SNNs) have emerged as an alternative to deep learning.
Training high-accuracy and low-latency SNNs from scratch suffers from the non-differentiable nature of the spiking neuron.
We propose a temporal Batch Normalization Through Time (BNTT) technique for training temporal SNNs.
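One common reading of "Batch Normalization Through Time" is to keep separate normalization statistics and affine parameters for every simulation time step; the sketch below illustrates that reading as an assumption about the mechanism, not the paper's exact layer.

```python
import torch
import torch.nn as nn

class TimewiseBatchNorm(nn.Module):
    """Sketch of time-dependent batch norm for SNNs: one BatchNorm2d (statistics and
    affine parameters) per simulation time step."""
    def __init__(self, num_features: int, time_steps: int):
        super().__init__()
        self.bns = nn.ModuleList([nn.BatchNorm2d(num_features) for _ in range(time_steps)])

    def forward(self, x_t: torch.Tensor, t: int) -> torch.Tensor:
        # x_t: input current at time step t, shape (batch, channels, height, width)
        return self.bns[t](x_t)
```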
arXiv Detail & Related papers (2020-10-05T00:49:30Z) - You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference
to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z) - Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike
Timing Dependent Backpropagation [10.972663738092063]
Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes).
We present a computationally-efficient training technique for deep SNNs.
We achieve top-1 accuracy of 65.19% for ImageNet dataset on SNN with 250 time steps, which is 10X faster compared to converted SNNs with similar accuracy.
arXiv Detail & Related papers (2020-05-04T19:30:43Z) - Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
arXiv Detail & Related papers (2020-04-20T18:12:56Z) - T2FSNN: Deep Spiking Neural Networks with Time-to-first-spike Coding [26.654533157221973]
This paper introduces the concept of time-to-first-spike coding into deep SNNs using the kernel-based dynamic threshold and dendrite to overcome the drawback.
According to our results, the proposed methods can reduce inference latency and the number of spikes to 22% and less than 1%, respectively, compared to those of burst coding.
arXiv Detail & Related papers (2020-03-26T04:39:12Z)