LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding
- URL: http://arxiv.org/abs/2310.14978v1
- Date: Mon, 23 Oct 2023 14:26:16 GMT
- Title: LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding
- Authors: Qu Yang, Malu Zhang, Jibin Wu, Kay Chen Tan, and Haizhou Li
- Abstract summary: We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
- Score: 55.64533786293656
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Biological neurons use precise spike times, in addition to the spike
firing rate, to communicate with each other. Time-to-first-spike (TTFS)
coding is inspired by this biological observation. However, effective
solutions for training TTFS-based spiking neural networks (SNNs) are still lacking. In
this paper, we put forward a simple yet effective network conversion algorithm,
which is referred to as LC-TTFS, by addressing two main problems that hinder an
effective conversion from a high-performance artificial neural network (ANN) to
a TTFS-based SNN. We show that our algorithm can achieve a near-perfect mapping
between the activation values of an ANN and the spike times of an SNN on a
number of challenging AI tasks, including image classification, image
reconstruction, and speech enhancement. With TTFS coding, we achieve up to
orders-of-magnitude savings in computation over ANNs and other rate-based SNNs.
The study, therefore, paves the way for deploying ultra-low-power TTFS-based
SNNs on power-constrained edge computing platforms.
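The conversion principle is simple to illustrate: each analog activation is re-expressed as the time of a single spike, with stronger activations firing earlier. Below is a minimal sketch of TTFS encoding and decoding under a linear time mapping; this is an illustrative simplification, not the authors' full LC-TTFS algorithm, which additionally constrains the ANN before conversion.

```python
import numpy as np

def ttfs_encode(activations, t_max=1.0):
    """Map normalized ANN activations in [0, 1] to first-spike times.

    Larger activations fire earlier; a zero activation never fires
    (represented here by t_max, the end of the coding window).
    """
    a = np.clip(activations, 0.0, 1.0)
    return t_max * (1.0 - a)

def ttfs_decode(spike_times, t_max=1.0):
    """Recover the activation value implied by a first-spike time."""
    return 1.0 - spike_times / t_max

a = np.array([0.9, 0.2, 0.0])
t = ttfs_encode(a)                      # [0.1, 0.8, 1.0]: strong inputs spike first
print(np.allclose(ttfs_decode(t), a))   # near-perfect round trip
```

Because every neuron fires at most once per inference, the number of synaptic operations is bounded by the neuron count, which is where the claimed computational savings over rate-based SNNs come from.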
Related papers
- High-performance deep spiking neural networks with 0.3 spikes per neuron [9.01407445068455]
It is harder to train biologically inspired spiking neural networks (SNNs) than artificial neural networks (ANNs).
We show that deep SNN models can be trained to achieve the same performance as their ANN counterparts.
Our SNN accomplishes high-performance classification with fewer than 0.3 spikes per neuron, lending itself to an energy-efficient implementation (a spike-counting sketch follows this entry).
arXiv Detail & Related papers (2023-06-14T21:01:35Z)
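A figure such as 0.3 spikes per neuron is simply the total spike count divided by the neuron count. A minimal measurement sketch, assuming the spike raster is stored as a binary (timesteps, neurons) array; the array shape and firing rate below are hypothetical:

```python
import numpy as np

def spikes_per_neuron(spike_train):
    """Average number of spikes emitted per neuron over one inference.

    spike_train: binary array of shape (timesteps, neurons).
    """
    total_spikes = spike_train.sum()
    num_neurons = spike_train.shape[1]
    return total_spikes / num_neurons

rng = np.random.default_rng(0)
# Hypothetical sparse raster: 100 timesteps, 1000 neurons, ~0.3% firing probability
raster = (rng.random((100, 1000)) < 0.003).astype(np.uint8)
print(f"{spikes_per_neuron(raster):.2f} spikes per neuron")  # ~0.30
```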
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Networks (SNNs) are promising energy-efficient AI models when implemented on neuromorphic hardware.
They are hard to train efficiently because the spiking mechanism is non-differentiable.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance by differentiating through a spike representation rather than through individual spike events (a generic surrogate-gradient sketch follows this entry).
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
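The non-differentiability refers to the Heaviside step that turns a membrane potential into a spike. DSR itself differentiates through a spike-representation mapping; as a generic illustration of how the step function is typically handled in SNN training, here is a surrogate-gradient sketch (not the DSR method itself), assuming PyTorch:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; smooth triangular surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Triangular surrogate gradient, largest near the threshold.
        surrogate = torch.clamp(1.0 - torch.abs(v - ctx.threshold), min=0.0)
        return grad_output * surrogate, None  # no gradient for the threshold

v = torch.tensor([0.4, 0.9, 1.3], requires_grad=True)
spikes = SurrogateSpike.apply(v, 1.0)
spikes.sum().backward()
print(spikes)   # tensor([0., 0., 1.]) -- only v >= 1.0 fires
print(v.grad)   # nonzero wherever v is within 1 of the threshold
```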
- Exploiting Low-Rank Tensor-Train Deep Neural Networks Based on Riemannian Gradient Descent With Illustrations of Speech Processing [74.31472195046099]
We exploit a low-rank tensor-train deep neural network (TT-DNN) to build an end-to-end deep learning pipeline, namely LR-TT-DNN.
A hybrid model combining LR-TT-DNN with a convolutional neural network (CNN) is set up to boost the performance.
Our empirical evidence demonstrates that the LR-TT-DNN and CNN+(LR-TT-DNN) models can outperform their TT-DNN and CNN+(TT-DNN) counterparts with fewer model parameters (a low-rank factorization sketch follows this entry).
arXiv Detail & Related papers (2022-03-11T15:55:34Z)
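A tensor-train layer chains low-rank factorizations across reshaped weight modes. As a minimal sketch of the underlying parameter-saving idea, here is a truncated-SVD low-rank factorization of a single dense weight matrix; the sizes and rank are illustrative, and a random matrix compresses poorly, whereas trained weight matrices are typically closer to low-rank:

```python
import numpy as np

def low_rank_factorize(W, rank):
    """Approximate W (m x n) as A @ B with A (m x rank) and B (rank x n)."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # fold singular values into the left factor
    B = Vt[:rank, :]
    return A, B

rng = np.random.default_rng(0)
W = rng.standard_normal((512, 512))
A, B = low_rank_factorize(W, rank=32)
orig_params = W.size               # 262144 parameters
lr_params = A.size + B.size        # 32768 parameters -> 8x fewer
err = np.linalg.norm(W - A @ B) / np.linalg.norm(W)
print(orig_params, lr_params, f"relative error {err:.2f}")
```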
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes as streams of events rather than image frames (an event-encoding sketch follows this entry).
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
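The frame-versus-event distinction can be simulated directly. A minimal sketch that thresholds log-brightness changes between consecutive frames to emit signed events; this simplifies a real event camera, which compares against the last event at each pixel rather than the previous frame, and the threshold and frame sizes are illustrative:

```python
import numpy as np

def frames_to_events(frames, threshold=0.2):
    """Convert a frame sequence to (t, y, x, polarity) events.

    An event fires at a pixel when its log-brightness changes by more
    than `threshold` relative to the previous frame (simplified model).
    """
    events = []
    log_prev = np.log1p(frames[0].astype(np.float64))
    for t in range(1, len(frames)):
        log_cur = np.log1p(frames[t].astype(np.float64))
        diff = log_cur - log_prev
        ys, xs = np.nonzero(np.abs(diff) > threshold)
        for y, x in zip(ys, xs):
            events.append((t, y, x, 1 if diff[y, x] > 0 else -1))
        log_prev = log_cur
    return events

rng = np.random.default_rng(0)
frames = rng.integers(0, 255, size=(5, 8, 8))   # toy brightness frames
print(len(frames_to_events(frames)), "events from", frames.size, "pixel samples")
```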
- Training Energy-Efficient Deep Spiking Neural Networks with Time-to-First-Spike Coding [29.131030799324844]
Spiking neural networks (SNNs) mimic the operations in the human brain.
The heavy energy consumption of deep neural networks (DNNs) has become a serious problem in deep learning.
This paper presents training methods for energy-efficient deep SNNs with TTFS coding.
arXiv Detail & Related papers (2021-06-04T16:02:27Z)
- Optimal Conversion of Conventional Artificial Neural Networks to Spiking Neural Networks [0.0]
Spiking neural networks (SNNs) are biology-inspired artificial neural networks (ANNs).
We propose a novel strategic pipeline that transfers the weights to the target SNN by combining threshold balancing and soft-reset mechanisms.
Our method is promising for deployment on embedded platforms with limited energy and memory (a sketch of both mechanisms follows this entry).
arXiv Detail & Related papers (2021-02-28T12:04:22Z)
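Both mechanisms named above are standard in the ANN-to-SNN conversion literature. A minimal sketch, assuming an integrate-and-fire neuron with a constant input current and a threshold balanced to the maximum pre-activation observed on calibration data:

```python
import numpy as np

def balance_threshold(pre_activations):
    """Threshold balancing: set the firing threshold to the maximum
    pre-activation seen on calibration data, keeping converted spike
    rates within the neuron's dynamic range."""
    return float(np.max(pre_activations))

def if_neuron_soft_reset(input_current, threshold, timesteps):
    """Integrate-and-fire with soft reset: on firing, subtract the
    threshold instead of zeroing the membrane, preserving residual
    charge and reducing conversion error."""
    v, spikes = 0.0, 0
    for _ in range(timesteps):
        v += input_current
        if v >= threshold:
            v -= threshold        # soft reset (reset-by-subtraction)
            spikes += 1
    return spikes / timesteps     # firing rate approximates input / threshold

theta = balance_threshold(np.array([0.3, 1.7, 0.9]))
rate = if_neuron_soft_reset(input_current=0.85, threshold=theta, timesteps=1000)
print(theta, rate)  # rate ~= 0.85 / 1.7 = 0.5
```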
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired, network.
SNNs are sparse, accessing very few weights, and typically use only addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems (an operation-count sketch follows this entry).
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
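The addition-versus-MAC argument can be turned into a back-of-the-envelope estimate. A minimal sketch using assumed per-operation energy figures; the constants below are placeholders, not measurements from any of these papers:

```python
# Rough energy comparison: ANN multiply-accumulates vs. SNN accumulates.
# Per-op energies are illustrative placeholders, not measured values.
E_MAC = 4.6e-12   # joules per multiply-accumulate (assumed)
E_AC = 0.9e-12    # joules per accumulate (assumed)

neurons, fan_in = 10_000, 100
ann_ops = neurons * fan_in                           # every synapse does one MAC
spikes_per_neuron = 0.3                              # sparse SNN activity (assumed)
snn_ops = int(neurons * spikes_per_neuron * fan_in)  # only spikes trigger adds

print(f"ANN: {ann_ops * E_MAC:.2e} J, SNN: {snn_ops * E_AC:.2e} J")
print(f"~{(ann_ops * E_MAC) / (snn_ops * E_AC):.0f}x energy saving")
```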