Noise Adaptor in Spiking Neural Networks
- URL: http://arxiv.org/abs/2312.05290v1
- Date: Fri, 8 Dec 2023 16:57:01 GMT
- Title: Noise Adaptor in Spiking Neural Networks
- Authors: Chen Li, Bipin Rajendran
- Abstract summary: Low-latency spiking neural network (SNN) algorithms have drawn significant interest.
One of the most efficient ways to construct a low-latency SNN is by converting a pre-trained, low-bit artificial neural network (ANN) into an SNN.
Converting SNNs from low-bit ANNs can lead to "occasional noise" -- the phenomenon where occasional spikes are generated in spiking neurons where they should not be.
- Score: 4.568827262994048
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent strides in low-latency spiking neural network (SNN) algorithms have
drawn significant interest, particularly due to their event-driven computing
nature and fast inference capability. One of the most efficient ways to
construct a low-latency SNN is by converting a pre-trained, low-bit artificial
neural network (ANN) into an SNN. However, this conversion process faces two
main challenges: First, converting SNNs from low-bit ANNs can lead to
``occasional noise'' -- the phenomenon where occasional spikes are generated in
spiking neurons where they should not be -- during inference, which
significantly lowers SNN accuracy. Second, although low-latency SNNs initially
show fast improvements in accuracy with time steps, these gains soon
plateau, resulting in their peak accuracy lagging behind both full-precision
ANNs and traditional ``long-latency SNNs'' that prioritize precision over
speed.
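
For context, the sketch below shows the rate-coded correspondence that quant-ANN-to-SNN conversion of this kind typically relies on: a low-bit quantized ReLU in the ANN maps to an integrate-and-fire (IF) neuron whose spike count over $T$ time steps plays the role of the quantized activation level. The soft-reset rule and half-threshold initialization are a common formulation assumed here for illustration, not code taken from the paper; when the input current varies over time in deeper layers, the spike count can deviate from the quantized value, which is the kind of ``occasional noise'' described above.

```python
# Minimal illustrative sketch (not the paper's code) of rate-coded
# quant-ANN-to-SNN conversion.
import torch

def quantized_relu(x, levels=4, v_max=1.0):
    """Low-bit ANN activation: clamp to [0, v_max] and round to `levels` steps."""
    step = v_max / levels
    return torch.clamp(torch.round(x / step), 0, levels) * step

def if_neuron_output(x, T=4, v_max=1.0):
    """IF neuron driven by a constant input x for T steps (soft reset).
    Returns spike_count * threshold, the rate-coded activation estimate."""
    threshold = v_max / T
    v = torch.full_like(x, threshold / 2)    # half-threshold init, a common trick
    spikes = torch.zeros_like(x)
    for _ in range(T):
        v = v + x / T                        # integrate the input current
        fired = (v >= threshold).float()
        spikes = spikes + fired
        v = v - fired * threshold            # soft reset by subtraction
    return spikes * threshold

x = torch.rand(5)
print(quantized_relu(x))     # ANN-side quantized activation
print(if_neuron_output(x))   # SNN-side estimate; matches for constant inputs,
                             # but can deviate when inputs vary over time
```
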
In response to these two challenges, this paper introduces a novel technique
named the ``noise adaptor,'' which can model occasional noise during
training and implicitly optimize SNN accuracy, particularly at high simulation
times $T$. Our research utilizes the ResNet model for a comprehensive analysis
of the impact of the noise adaptor on low-latency SNNs. The results demonstrate
that our method outperforms the previously reported quant-ANN-to-SNN conversion
technique. We achieved an accuracy of 95.95\% within 4 time steps on CIFAR-10
using ResNet-18, and an accuracy of 74.37\% within 64 time steps on ImageNet
using ResNet-50. Remarkably, these results were obtained without resorting to
any noise correction methods during SNN inference, such as negative spikes or
two-stage SNN simulations. Our approach significantly boosts the peak accuracy
of low-latency SNNs, bringing them on par with the accuracy of full-precision
ANNs. Code will be open source.
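
The abstract does not spell out the mechanism, so the sketch below is only a rough, assumed illustration of what ``modelling occasional noise during training'' might look like: the quantized activation is perturbed by one level with a small probability during ANN training, so the network learns to tolerate the spurious or missing spikes the converted SNN will produce. The class, the perturbation rule, and `p_noise` are illustrative assumptions, not the authors' implementation (which they state will be open-sourced).

```python
# Hedged illustration (not the authors' code) of injecting "occasional noise"
# into a low-bit activation during ANN training.
import torch

class NoisyQuantReLU(torch.nn.Module):
    """Quantized ReLU that, in training mode, occasionally shifts the
    activation by one quantization level to mimic spurious or missing spikes."""

    def __init__(self, levels=4, v_max=1.0, p_noise=0.05):
        super().__init__()
        self.levels, self.v_max, self.p_noise = levels, v_max, p_noise

    def forward(self, x):
        step = self.v_max / self.levels
        q = torch.clamp(torch.round(x / step), 0, self.levels)
        if self.training:
            # with probability p_noise, perturb by +/- one level -- a stand-in
            # for an extra or a dropped spike in the converted SNN
            flip = (torch.rand_like(q) < self.p_noise).float()
            sign = 1.0 - 2.0 * (torch.rand_like(q) < 0.5).float()  # +1 or -1
            q = torch.clamp(q + flip * sign, 0, self.levels)
        # NB: rounding blocks gradients; real quantization-aware training would
        # pair this with a straight-through estimator (omitted for brevity).
        return q * step
```
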
Related papers
- Bridging the Gap between ANNs and SNNs by Calibrating Offset Spikes [19.85338979292052]
Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive characteristics of low power consumption and temporal information processing.
ANN-SNN conversion, as the most commonly used training method for applying SNNs, can ensure that converted SNNs achieve comparable performance to ANNs on large-scale datasets.
In this paper, instead of evaluating different conversion errors and then eliminating these errors, we define an offset spike to measure the degree of deviation between actual and desired SNN firing rates.
arXiv Detail & Related papers (2023-02-21T14:10:56Z)
- A noise based novel strategy for faster SNN training [0.0]
Spiking neural networks (SNNs) are receiving increasing attention due to their low power consumption and strong bio-plausibility.
The two main methods, artificial neural network (ANN)-to-SNN conversion and spike-based backpropagation (BP), each have their advantages and limitations.
We propose a novel SNN training approach that combines the benefits of the two methods.
arXiv Detail & Related papers (2022-11-10T09:59:04Z)
- Ultra-low Latency Adaptive Local Binary Spiking Neural Network with Accuracy Loss Estimator [4.554628904670269]
We propose an ultra-low latency adaptive local binary spiking neural network (ALBSNN) with accuracy loss estimators.
Experimental results show that this method can reduce storage space by more than 20% without losing network accuracy.
arXiv Detail & Related papers (2022-07-31T09:03:57Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Optimized Potential Initialization for Low-latency Spiking Neural Networks [21.688402090967497]
Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive properties of low power consumption, biological plausibility, and adversarial robustness.
The most effective way to train deep SNNs is through ANN-to-SNN conversion, which has yielded the best performance on deep network structures and large-scale datasets.
In this paper, we aim to achieve high-performance converted SNNs with extremely low latency (fewer than 32 time-steps).
arXiv Detail & Related papers (2022-02-03T07:15:43Z)
- Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
State-of-the-art (SOTA) training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN).
We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and converted SNN.
arXiv Detail & Related papers (2021-12-22T18:47:45Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and the hardware deployment on FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
- FATNN: Fast and Accurate Ternary Neural Networks [89.07796377047619]
Ternary Neural Networks (TNNs) have received much attention due to being potentially orders of magnitude faster in inference, as well as more power efficient, than full-precision counterparts.
In this work, we show that, under some mild constraints, the computational complexity of the ternary inner product can be reduced by a factor of 2.
We elaborately design an implementation-dependent ternary quantization algorithm to mitigate the performance gap.
arXiv Detail & Related papers (2020-08-12T04:26:18Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.