Hoyer regularizer is all you need for ultra low-latency spiking neural
networks
- URL: http://arxiv.org/abs/2212.10170v1
- Date: Tue, 20 Dec 2022 11:16:06 GMT
- Title: Hoyer regularizer is all you need for ultra low-latency spiking neural
networks
- Authors: Gourav Datta, Zeyu Liu, Peter A. Beerel
- Abstract summary: Spiking Neural Networks (SNNs) have emerged as an attractive spatio-temporal computing paradigm for a wide range of low-power vision tasks.
We present a training framework (from scratch) for one-time-step SNNs that uses a novel variant of the recently proposed Hoyer regularizer.
Our approach outperforms existing spiking, binary, and adder neural networks in terms of the accuracy-FLOPs trade-off for complex image recognition tasks.
- Score: 4.243356707599485
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking Neural Networks (SNNs) have emerged as an attractive spatio-temporal
computing paradigm for a wide range of low-power vision tasks. However,
state-of-the-art (SOTA) SNN models either incur multiple time steps which
hinder their deployment in real-time use cases or increase the training
complexity significantly. To mitigate this concern, we present a training
framework (from scratch) for one-time-step SNNs that uses a novel variant of
the recently proposed Hoyer regularizer. We estimate the threshold of each SNN
layer as the Hoyer extremum of a clipped version of its activation map, where
the clipping threshold is trained using gradient descent with our Hoyer
regularizer. This approach not only downscales the value of the trainable
threshold, thereby emitting a large number of spikes for weight update with a
limited number of iterations (due to only one time step) but also shifts the
membrane potential values away from the threshold, thereby mitigating the
effect of noise that can degrade the SNN accuracy. Our approach outperforms
existing spiking, binary, and adder neural networks in terms of the
accuracy-FLOPs trade-off for complex image recognition tasks. Downstream
experiments on object detection also demonstrate the efficacy of our approach.
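The following is a schematic PyTorch-style sketch of the mechanism the abstract describes: a Hoyer-Square regularizer computed on a clipped activation map, and the Hoyer extremum of that clipped map used as the layer's firing threshold. This is an illustrative reading of the abstract, not the authors' released code; the names HoyerSpikeLayer and clip_thr are ours, and the surrogate gradient and exact loss weighting used in the paper are omitted.

```python
import torch


def hoyer_square(z: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Hoyer-Square sparsity measure ||z||_1^2 / ||z||_2^2, added to the training loss."""
    return z.abs().sum() ** 2 / (z.pow(2).sum() + eps)


def hoyer_extremum(z: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Hoyer extremum ||z||_2^2 / ||z||_1, used here as the layer's firing threshold."""
    return z.pow(2).sum() / (z.abs().sum() + eps)


class HoyerSpikeLayer(torch.nn.Module):
    """Single-time-step spiking activation with a trainable clipping threshold (illustrative)."""

    def __init__(self) -> None:
        super().__init__()
        # Trainable clipping threshold; in training it would be updated by gradient
        # descent on the task loss plus a Hoyer regularizer term on the clipped map.
        self.clip_thr = torch.nn.Parameter(torch.tensor(1.0))

    def forward(self, u: torch.Tensor) -> torch.Tensor:
        z = torch.minimum(u.relu(), self.clip_thr)  # clipped activation map
        thr = hoyer_extremum(z).detach()            # per-layer threshold = Hoyer extremum
        spikes = (u >= thr).float()                 # binary spikes, single time step
        return spikes

# Schematic training objective: loss = task_loss + lam * sum(hoyer_square(z_l) over layers),
# with a surrogate gradient substituted for the non-differentiable step function.
```

Intuitively, clipping with a small trainable threshold pulls the Hoyer extremum down, so more units cross it and spike within the single time step, which matches the abstract's argument about having enough spikes for weight updates.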
Related papers
- Low Latency of object detection for spiking neural network [3.404826786562694]
Spiking Neural Networks are well-suited for edge AI applications due to their binary spike nature.
In this paper, we focus on generating highly accurate and low-latency SNNs specifically for object detection.
arXiv Detail & Related papers (2023-09-27T10:26:19Z)
- Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z)
- Fast-SNN: Fast Spiking Neural Network by Converting Quantized ANN [38.18008827711246]
Spiking neural networks (SNNs) have shown advantages in computation and energy efficiency.
It remains a challenge to train deep SNNs due to the discrete spike function.
This paper proposes Fast-SNN, which achieves high performance with low latency.
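As background for the conversion idea in the title, a k-bit quantized activation maps naturally onto a spike count over T = 2^k - 1 time steps. The sketch below illustrates only this generic correspondence, not Fast-SNN's specific algorithm; the helper names are ours.

```python
import torch


def quantized_relu(x: torch.Tensor, k: int = 2, scale: float = 1.0) -> torch.Tensor:
    """k-bit uniform quantization of a ReLU activation (levels 0 .. 2**k - 1, step `scale`)."""
    levels = 2 ** k - 1
    return torch.clamp(torch.round(x / scale), 0, levels) * scale


def to_spike_train(q: torch.Tensor, k: int = 2, scale: float = 1.0) -> torch.Tensor:
    """Encode each quantized value as that many unit spikes over T = 2**k - 1 time steps."""
    T = 2 ** k - 1
    counts = (q / scale).long()                          # spikes per neuron
    steps = torch.arange(T).view(-1, *([1] * q.dim()))   # time dimension in front
    return (steps < counts.unsqueeze(0)).float()         # [T, ...] binary spike train


x = torch.rand(4)
q = quantized_relu(x)
spikes = to_spike_train(q)
assert torch.allclose(spikes.sum(0), q)  # the spike count reproduces the quantized level
```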
arXiv Detail & Related papers (2023-05-31T14:04:41Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends the recently proposed implicit differentiation on the equilibrium state (IDE) training method to spiking models.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- Quantization-aware Interval Bound Propagation for Training Certifiably Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
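For context, plain interval bound propagation pushes elementwise input bounds through each layer; a minimal sketch of the affine-layer step is below. It shows only the standard IBP building block, not the quantization-aware extensions QA-IBP adds; the function name ibp_linear is ours.

```python
import torch


def ibp_linear(lower, upper, weight, bias):
    """Propagate elementwise input bounds [lower, upper] through y = x @ W^T + b."""
    center = (upper + lower) / 2
    radius = (upper - lower) / 2
    out_center = center @ weight.t() + bias
    out_radius = radius @ weight.abs().t()  # worst case over the box uses |W|
    return out_center - out_radius, out_center + out_radius


# Usage: bounds for an input perturbed by eps in the infinity norm.
x = torch.rand(1, 8)
eps = 0.1
W, b = torch.randn(4, 8), torch.zeros(4)
lo, hi = ibp_linear(x - eps, x + eps, W, b)
assert bool((lo <= hi).all())
```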
arXiv Detail & Related papers (2022-11-29T13:32:38Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Networks (SNNs) are promising energy-efficient AI models when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which can achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Navigating Local Minima in Quantized Spiking Neural Networks [3.1351527202068445]
Spiking and Quantized Neural Networks (NNs) are becoming exceedingly important for hyper-efficient implementations of Deep Learning (DL) algorithms.
These networks face challenges when trained using error backpropagation, due to the absence of gradient signals when applying hard thresholds.
This paper presents a systematic evaluation of a cosine-annealed LR schedule coupled with weight-independent adaptive moment estimation.
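A minimal sketch of a cosine-annealed learning-rate schedule using the stock PyTorch scheduler is shown below; the model, optimizer, T_max, and base learning rate are placeholder choices, not the paper's settings.

```python
import torch

model = torch.nn.Linear(10, 2)  # stand-in for a spiking/quantized model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=100)

for epoch in range(100):
    loss = model(torch.randn(16, 10)).pow(2).mean()  # placeholder for the training loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step()  # learning rate follows a half cosine from 1e-3 down toward zero
```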
arXiv Detail & Related papers (2022-02-15T06:42:25Z)
- ActNN: Reducing Training Memory Footprint via 2-Bit Activation Compressed Training [68.63354877166756]
ActNN is a memory-efficient training framework that stores randomly quantized activations for backpropagation.
ActNN reduces the memory footprint of the activation by 12x, and it enables training with a 6.6x to 14x larger batch size.
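The toy sketch below illustrates the underlying idea of stochastically quantizing saved activations to 2 bits; it mirrors the concept only and does not reproduce ActNN's per-group quantization or its custom autograd functions.

```python
import torch


def stochastic_quantize_2bit(x: torch.Tensor):
    """Compress x to 2-bit codes with unbiased stochastic rounding; return codes plus range info."""
    lo, hi = x.min(), x.max()
    scale = (hi - lo) / 3  # 2 bits -> 4 levels: 0..3
    normalized = (x - lo) / (scale + 1e-12)
    codes = torch.floor(normalized + torch.rand_like(x)).clamp(0, 3).to(torch.uint8)
    return codes, lo, scale


def dequantize(codes, lo, scale):
    return codes.float() * scale + lo


x = torch.randn(1024)
codes, lo, scale = stochastic_quantize_2bit(x)
x_hat = dequantize(codes, lo, scale)
print((x - x_hat).abs().max())  # reconstruction error is bounded by one quantization step
```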
arXiv Detail & Related papers (2021-04-29T05:50:54Z)
- Low-activity supervised convolutional spiking neural networks applied to speech commands recognition [6.6389732792316005]
Spiking Neural Networks (SNNs) can be trained efficiently in a supervised manner.
We show that a model composed of stacked dilated convolution spiking layers can reach an error rate very close to standard Deep Neural Networks (DNNs).
We also show that modeling the leakage of the neuron membrane potential is useful, since the LIF model significantly outperformed its non-leaky counterpart.
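To make the leak comparison concrete, a minimal discrete-time leaky integrate-and-fire (LIF) update is sketched below; setting beta = 1 recovers the non-leaky neuron. The decay factor, threshold, and soft reset are generic choices, not the paper's exact model.

```python
import torch


def lif_step(u, x, beta=0.9, threshold=1.0):
    """One discrete-time LIF update: leak, integrate input, spike, soft reset."""
    u = beta * u + x                   # leaky integration of the input current
    spike = (u >= threshold).float()   # fire when the membrane crosses the threshold
    u = u - spike * threshold          # soft reset by subtracting the threshold
    return spike, u


u = torch.zeros(4)                     # membrane potentials
for t in range(10):
    x = torch.rand(4)                  # stand-in input current at step t
    spike, u = lif_step(u, x)
```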
arXiv Detail & Related papers (2020-11-13T10:29:35Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation [10.972663738092063]
Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes).
We present a computationally-efficient training technique for deep SNNs.
We achieve a top-1 accuracy of 65.19% on the ImageNet dataset with an SNN using 250 time steps, which is 10X faster than converted SNNs with similar accuracy.
arXiv Detail & Related papers (2020-05-04T19:30:43Z)