Ultra-low Latency Spiking Neural Networks with Spatio-Temporal
Compression and Synaptic Convolutional Block
- URL: http://arxiv.org/abs/2203.10006v1
- Date: Fri, 18 Mar 2022 15:14:13 GMT
- Title: Ultra-low Latency Spiking Neural Networks with Spatio-Temporal
Compression and Synaptic Convolutional Block
- Authors: Changqing Xu, Yi Liu, Yintang Yang
- Abstract summary: Spiking neural networks (SNNs) have spatio-temporal information processing capability, low power consumption, and high biological plausibility.
Neuromorphic datasets such as N-MNIST, CIFAR10-DVS, and DVS128-gesture need to aggregate individual events into frames with a higher temporal resolution for event stream classification, which causes high latency.
We propose a spatio-temporal compression method to aggregate individual events into a few time steps of synaptic current to reduce the training and inference latency.
- Score: 4.081968050250324
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks (SNNs), as one of the brain-inspired models, have
spatio-temporal information processing capability, low power consumption, and high
biological plausibility. This effective spatio-temporal capability makes them
suitable for event stream classification. However, neuromorphic datasets, such
as N-MNIST, CIFAR10-DVS, DVS128-gesture, need to aggregate individual events
into frames with a higher temporal resolution for event stream
classification, which causes high training and inference latency. In this work,
we propose a spatio-temporal compression method to aggregate individual events
into a few time steps of synaptic current to reduce the training and inference
latency. To keep the accuracy of SNNs under high compression ratios, we also
propose a synaptic convolutional block to balance the dramatic change between
adjacent time steps. In addition, a multi-threshold Leaky Integrate-and-Fire (LIF)
model with a learnable membrane time constant is introduced to increase the
information processing capability. We evaluate the proposed method on event stream
classification tasks with the neuromorphic N-MNIST, CIFAR10-DVS, and DVS128-gesture
datasets. The experimental results show that our proposed method outperforms the
state of the art in accuracy on nearly all datasets while using fewer time steps.
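As a rough illustration of the two mechanisms the abstract describes, the sketch below compresses an event stream into a few time steps of input current and runs a multi-threshold LIF neuron with a learnable membrane time constant. This is a minimal sketch, not the authors' released code: the event layout, the function and class names (`compress_events`, `MultiThresholdLIF`), the graded multi-threshold output, and the soft-reset rule are all assumptions, and training details such as the surrogate gradient are omitted.

```python
import torch
import torch.nn as nn

def compress_events(events, T, H, W, duration):
    """Aggregate an event stream into T time steps of input current.

    `events` is assumed to be a tensor of shape (N, 4) with columns
    (t, x, y, polarity in {0, 1}); this layout is an assumption, not the
    paper's exact preprocessing.
    """
    current = torch.zeros(T, 2, H, W)
    t = events[:, 0]
    x, y, p = events[:, 1].long(), events[:, 2].long(), events[:, 3].long()
    # Map each event's timestamp onto one of T coarse time steps.
    step = torch.clamp((t / duration * T).long(), min=0, max=T - 1)
    current.index_put_((step, p, y, x), torch.ones(len(events)), accumulate=True)
    return current  # (T, 2, H, W): summed event counts per compressed step

class MultiThresholdLIF(nn.Module):
    """LIF neuron with several firing thresholds and a learnable time constant.

    A simplified stand-in for the multi-threshold LIF described in the
    abstract; the exact reset rule and output coding may differ.
    """
    def __init__(self, thresholds=(1.0, 2.0, 3.0), tau_init=2.0):
        super().__init__()
        self.register_buffer("thresholds", torch.tensor(thresholds))
        self.tau = nn.Parameter(torch.tensor(tau_init))  # learnable membrane time constant

    def forward(self, current):
        # current: (T, ...) synaptic current per compressed time step
        v = torch.zeros_like(current[0])
        outputs = []
        for t in range(current.shape[0]):
            v = v + (current[t] - v) / self.tau.clamp(min=1.0)  # leaky integration
            # Graded output: how many thresholds the membrane crossed this step.
            spike = (v.unsqueeze(-1) >= self.thresholds).sum(-1).float()
            v = v - spike * self.thresholds[0]  # soft reset by the base threshold
            outputs.append(spike)
        return torch.stack(outputs)
```

In this reading, a high compression ratio (few time steps T) is what drives down the training and inference latency, while the learnable tau lets the neuron adapt how quickly its membrane integrates the compressed current.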
Related papers
- Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks [50.32980443749865]
Spiking neural networks (SNNs) have garnered significant attention for their low power consumption and high biological plausibility.
Current SNNs struggle to balance accuracy and latency on neuromorphic datasets.
We propose a Hybrid Step-wise Distillation (HSD) method tailored for neuromorphic datasets.
arXiv Detail & Related papers (2024-09-19T06:52:34Z) - Signal-SGN: A Spiking Graph Convolutional Network for Skeletal Action Recognition via Learning Temporal-Frequency Dynamics [2.9578022754506605]
In skeletal-based action recognition, Graph Convolutional Networks (GCNs) face limitations due to their complexity and high energy consumption.
We propose Signal-SGN (Spiking Graph Convolutional Network), which leverages the temporal dimension of skeletal sequences as the spiking time step.
Our experiments show that the proposed models not only surpass existing SNN-based methods in accuracy but also reduce computational and storage costs during training.
arXiv Detail & Related papers (2024-08-03T07:47:16Z) - Continuous time recurrent neural networks: overview and application to
forecasting blood glucose in the intensive care unit [56.801856519460465]
Continuous time autoregressive recurrent neural networks (CTRNNs) are deep learning models that account for irregular observations.
We demonstrate the application of these models to probabilistic forecasting of blood glucose in a critical care setting.
arXiv Detail & Related papers (2023-04-14T09:39:06Z) - STSC-SNN: Spatio-Temporal Synaptic Connection with Temporal Convolution
and Attention for Spiking Neural Networks [7.422913384086416]
Spiking Neural Networks (SNNs), as one of the algorithmic models in neuromorphic computing, have gained a great deal of research attention owing to their temporal processing capability.
Existing synaptic structures in SNNs are mostly full connections or spatial 2D convolutions, neither of which can adequately extract temporal dependencies.
We take inspiration from biological synapses and propose a spatio-temporal synaptic connection SNN model to enhance the temporal receptive fields of synaptic connections.
We show that endowing synaptic models with temporal dependencies can improve the performance of SNNs on classification tasks (a simplified sketch of this temporal-convolution idea appears after this list).
arXiv Detail & Related papers (2022-10-11T08:13:22Z) - Online Training Through Time for Spiking Neural Networks [66.7744060103562]
Spiking neural networks (SNNs) are promising brain-inspired energy-efficient models.
Recent progress in training methods has enabled successful deep SNNs on large-scale tasks with low latency.
We propose online training through time (OTTT) for SNNs, which is derived from BPTT to enable forward-in-time learning.
arXiv Detail & Related papers (2022-10-09T07:47:56Z) - Deep inference of latent dynamics with spatio-temporal super-resolution
using selective backpropagation through time [15.648009434801885]
Modern neural interfaces allow access to the activity of up to a million neurons within brain circuits.
However, bandwidth limits often create a trade-off between greater spatial sampling (more channels or pixels) and the frequency of temporal sampling.
Here we demonstrate that it is possible to obtain super-resolution in neuronal time series by exploiting relationships among neurons.
arXiv Detail & Related papers (2021-10-29T20:18:29Z) - Backpropagation with Biologically Plausible Spatio-Temporal Adjustment
For Training Deep Spiking Neural Networks [5.484391472233163]
The success of deep learning is inseparable from backpropagation.
We first propose a biologically plausible spatial adjustment, which rethinks the relationship between membrane potential and spikes.
Secondly, we propose a biologically plausible temporal adjustment that makes the error propagate across spikes in the temporal dimension.
arXiv Detail & Related papers (2021-10-17T15:55:51Z) - Multi-Temporal Convolutions for Human Action Recognition in Videos [83.43682368129072]
We present a novel multi-temporal convolution block that is capable of extracting temporal features at multiple resolutions.
The proposed blocks are lightweight and can be integrated into any 3D-CNN architecture.
arXiv Detail & Related papers (2020-11-08T10:40:26Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in
Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use spatio-temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective for the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
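The STSC-SNN entry above and the synaptic convolutional block of the main paper point at a shared idea: give each synapse a temporal receptive field by convolving the input current along the time axis, so that abrupt changes between adjacent compressed time steps are smoothed before they reach the spiking neuron. The sketch below is one plausible reading of that idea, not the published architectures; the depthwise design, kernel size, and the `TemporalSynapticConv` name are assumptions.

```python
import torch
import torch.nn as nn

class TemporalSynapticConv(nn.Module):
    """Depthwise 1-D convolution along the time axis of a (T, C, H, W) current.

    Illustrative only: it gives each channel a small temporal receptive field,
    smoothing abrupt changes between adjacent compressed time steps.
    """
    def __init__(self, channels, kernel_size=3):
        super().__init__()
        self.conv = nn.Conv1d(channels, channels, kernel_size,
                              padding=kernel_size // 2, groups=channels)

    def forward(self, x):            # x: (T, C, H, W)
        T, C, H, W = x.shape
        # Fold space into the batch dimension, convolve over time, unfold back.
        y = x.permute(2, 3, 1, 0).reshape(H * W, C, T)    # (H*W, C, T)
        y = self.conv(y)
        return y.reshape(H, W, C, T).permute(3, 2, 0, 1)  # (T, C, H, W)

# Example: smooth a compressed event tensor before feeding a spiking layer.
current = torch.rand(4, 2, 34, 34)           # e.g. 4 time steps of N-MNIST-sized input
smoothed = TemporalSynapticConv(channels=2)(current)
print(smoothed.shape)                         # torch.Size([4, 2, 34, 34])
```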