Shrinking Your TimeStep: Towards Low-Latency Neuromorphic Object
Recognition with Spiking Neural Network
- URL: http://arxiv.org/abs/2401.01912v1
- Date: Tue, 2 Jan 2024 02:05:05 GMT
- Title: Shrinking Your TimeStep: Towards Low-Latency Neuromorphic Object
Recognition with Spiking Neural Network
- Authors: Yongqi Ding, Lin Zuo, Mengmeng Jing, Pei He and Yongjun Xiao
- Abstract summary: Neuromorphic object recognition with spiking neural networks (SNNs) is the cornerstone of low-power neuromorphic computing.
Existing SNNs suffer from significant latency, requiring 10 to 40 or more timesteps to recognize neuromorphic objects.
In this work, we propose the Shrinking SNN (SSNN) to achieve low-latency neuromorphic object recognition without reducing performance.
- Score: 5.174808367448261
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Neuromorphic object recognition with spiking neural networks (SNNs) is the
cornerstone of low-power neuromorphic computing. However, existing SNNs suffer
from significant latency, requiring 10 to 40 or more timesteps to recognize
neuromorphic objects. At low latencies, the performance of existing SNNs is
drastically degraded. In this work, we propose the Shrinking SNN (SSNN) to
achieve low-latency neuromorphic object recognition without reducing
performance. Concretely, we alleviate the temporal redundancy in SNNs by
dividing SNNs into multiple stages with progressively shrinking timesteps,
which significantly reduces the inference latency. During timestep shrinkage,
the temporal transformer smoothly transforms the temporal scale and preserves
the information maximally. Moreover, we add multiple early classifiers to the
SNN during training to mitigate the mismatch between the surrogate gradient and
the true gradient, as well as the gradient vanishing/exploding, thus
eliminating the performance degradation at low latency. Extensive experiments
on the neuromorphic datasets CIFAR10-DVS, N-Caltech101, and DVS-Gesture show
that SSNN improves baseline accuracy by 6.55% to 21.41%. With only 5 average
timesteps and without any data augmentation, SSNN achieves an accuracy of
73.63% on CIFAR10-DVS. This work presents a
heterogeneous temporal scale SNN and provides valuable insights into the
development of high-performance, low-latency SNNs.
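
The abstract describes three ingredients: network stages that run on progressively fewer timesteps, a temporal transformer that compresses the time axis between stages, and early classifiers attached during training. The sketch below shows, in PyTorch, how these pieces could fit together. The layer sizes, the 8 -> 4 -> 2 shrink schedule, the simple LIF neuron, and the learned mixing matrix standing in for the temporal transformer are illustrative assumptions, not the authors' implementation; a real training setup would also need a surrogate gradient for the spiking threshold.

```python
# Minimal sketch of staged timestep shrinkage with early classifiers.
# All module names, sizes, and the 8 -> 4 -> 2 schedule are assumptions.
import torch
import torch.nn as nn


class LIFNeuron(nn.Module):
    """Simple leaky integrate-and-fire neuron with a hard reset.
    The hard threshold is not differentiable; training would need a
    surrogate gradient, which this sketch omits."""

    def __init__(self, tau: float = 2.0, v_th: float = 1.0):
        super().__init__()
        self.tau, self.v_th = tau, v_th

    def forward(self, x):                      # x: [T, B, C, H, W]
        v = torch.zeros_like(x[0])
        spikes = []
        for t in range(x.shape[0]):
            v = v + (x[t] - v) / self.tau      # leaky integration
            s = (v >= self.v_th).float()       # fire
            v = v * (1.0 - s)                  # hard reset
            spikes.append(s)
        return torch.stack(spikes)


class TemporalShrink(nn.Module):
    """Compress T_in timesteps to T_out with a learned mixing matrix,
    a stand-in for the paper's temporal transformer."""

    def __init__(self, t_in: int, t_out: int):
        super().__init__()
        self.mix = nn.Parameter(torch.full((t_out, t_in), 1.0 / t_in))

    def forward(self, x):                      # x: [T_in, B, ...]
        flat = x.flatten(1)                    # [T_in, N]
        out = self.mix @ flat                  # [T_out, N]
        return out.view(self.mix.shape[0], *x.shape[1:])


class Stage(nn.Module):
    def __init__(self, c_in: int, c_out: int):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, stride=2, padding=1)
        self.bn = nn.BatchNorm2d(c_out)
        self.lif = LIFNeuron()

    def forward(self, x):                      # x: [T, B, C, H, W]
        t, b = x.shape[:2]
        y = self.bn(self.conv(x.flatten(0, 1)))
        y = y.reshape(t, b, *y.shape[1:])
        return self.lif(y)


class ShrinkSNNSketch(nn.Module):
    """Three stages with timesteps shrinking 8 -> 4 -> 2; an early
    classifier after each stage serves as an auxiliary training head."""

    def __init__(self, num_classes: int = 10):
        super().__init__()
        self.stages = nn.ModuleList([Stage(2, 32), Stage(32, 64), Stage(64, 128)])
        self.shrinks = nn.ModuleList([TemporalShrink(8, 4), TemporalShrink(4, 2)])
        self.heads = nn.ModuleList([nn.Linear(c, num_classes) for c in (32, 64, 128)])

    def forward(self, x):                      # x: [T=8, B, 2, H, W] event frames
        logits = []
        for i, stage in enumerate(self.stages):
            x = stage(x)
            feat = x.mean(dim=(0, 3, 4))       # average spikes over time and space
            logits.append(self.heads[i](feat))
            if i < len(self.shrinks):
                x = self.shrinks[i](x)         # shrink the time dimension
        return logits                          # last entry is the main output


if __name__ == "__main__":
    frames = torch.rand(8, 4, 2, 64, 64)       # [T, B, polarity, H, W]
    outs = ShrinkSNNSketch()(frames)
    print([o.shape for o in outs])             # three [4, 10] logit tensors
```

In this sketch the early heads would be trained with auxiliary losses and only the final head used at inference, mirroring the role the abstract assigns to the early classifiers during training.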