Optimized Potential Initialization for Low-latency Spiking Neural Networks
- URL: http://arxiv.org/abs/2202.01440v1
- Date: Thu, 3 Feb 2022 07:15:43 GMT
- Title: Optimized Potential Initialization for Low-latency Spiking Neural Networks
- Authors: Tong Bu, Jianhao Ding, Zhaofei Yu, Tiejun Huang
- Abstract summary: Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive properties of low power consumption, biological plausibility, and adversarial robustness.
The most effective way to train deep SNNs is through ANN-to-SNN conversion, which has yielded the best performance on deep network structures and large-scale datasets.
In this paper, we aim to achieve high-performance converted SNNs with extremely low latency (fewer than 32 time-steps).
- Score: 21.688402090967497
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) have attracted great attention due to their
distinctive properties of low power consumption, biological plausibility, and
adversarial robustness. The most effective way to train deep SNNs is through
ANN-to-SNN conversion, which has yielded the best performance on deep network
structures and large-scale datasets. However, there is a trade-off between
accuracy and latency: to reach the accuracy of the original ANN, a long
simulation time is needed so that the firing rate of each spiking neuron matches
the activation value of the corresponding analog neuron, which impedes the
practical application of SNNs. In this paper, we aim to achieve high-performance
converted SNNs with extremely low latency (fewer than 32 time-steps). We start by
theoretically analyzing ANN-to-SNN conversion and show that scaling the
thresholds plays a role similar to weight normalization. Instead of introducing
constraints that facilitate ANN-to-SNN conversion at the cost of model capacity,
we take a more direct approach and optimize the initial membrane potential to
reduce the conversion loss in each layer. We further demonstrate that optimal
initialization of the membrane potentials yields ANN-to-SNN conversion that is
error-free in expectation. We evaluate our algorithm on the CIFAR-10, CIFAR-100,
and ImageNet datasets and achieve state-of-the-art accuracy while using fewer
time-steps. For example, we reach a top-1 accuracy of 93.38% on CIFAR-10 with
16 time-steps. Moreover, our method can be applied to other ANN-SNN conversion
methodologies and markedly improves their performance when the number of
time-steps is small.
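To make the conversion setting concrete, the following is a minimal, hedged sketch (not the authors' released code) of rate-based ANN-to-SNN conversion for a single integrate-and-fire layer. The weights, input rates, threshold, and time-step count are illustrative assumptions; the sketch only shows how initializing the membrane potential at half the firing threshold, one natural choice in this line of work, narrows the gap between the SNN firing rate and the ANN activation at small time-step counts.

```python
# Hedged illustrative sketch, not the authors' implementation: a single
# integrate-and-fire (IF) layer driven by constant rate-coded inputs, compared
# against the clipped ReLU activation of the corresponding ANN layer.
import numpy as np

rng = np.random.default_rng(0)

def if_layer_rate(x, w, theta, T, v_init):
    """Simulate an IF layer with soft reset for T time-steps.

    x      : presynaptic firing rates in [0, 1], shape (n_in,)
    w      : weights, shape (n_out, n_in)
    theta  : firing threshold
    v_init : initial membrane potential (scalar, broadcast to all neurons)
    Returns the scaled output firing rates, shape (n_out,).
    """
    v = np.full(w.shape[0], v_init, dtype=float)
    spikes = np.zeros(w.shape[0])
    for _ in range(T):
        v += w @ x                   # integrate the average input current
        fired = v >= theta
        spikes += fired
        v[fired] -= theta            # reset by subtraction keeps the residual charge
    return theta * spikes / T        # firing rate should approximate the ANN activation

n_in, n_out, T = 64, 32, 16
w = rng.normal(0.0, 0.1, size=(n_out, n_in))
x = rng.uniform(0.0, 1.0, size=n_in)
theta = 1.0                          # assume the threshold is already scaled to the max activation

ann_act = np.clip(w @ x, 0.0, theta) # ReLU output of the source ANN layer, capped at the threshold

for v0, label in [(0.0, "zero init"), (theta / 2, "half-threshold init")]:
    rate = if_layer_rate(x, w, theta, T, v0)
    print(f"{label:>20}: mean |rate - activation| = {np.abs(rate - ann_act).mean():.4f}")
```

Under these assumptions, zero initialization effectively floors the target activation to the nearest multiple of theta/T, whereas starting at theta/2 rounds it, which is why the per-layer gap shrinks when T is small.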
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the computational time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that the resulting Scalable MNN (S-MNN) matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- When Bio-Inspired Computing meets Deep Learning: Low-Latency, Accurate, & Energy-Efficient Spiking Neural Networks from Artificial Neural Networks [22.721987637571306]
Spiking Neural Networks (SNNs) are demonstrating accuracy comparable to convolutional neural networks (CNNs).
ANN-to-SNN conversion has recently gained significant traction in developing deep SNNs with close to state-of-the-art (SOTA) test accuracy on complex image recognition tasks.
We propose a novel ANN-to-SNN conversion framework that requires exponentially fewer time-steps than SOTA conversion approaches.
arXiv Detail & Related papers (2023-12-12T00:10:45Z)
- Bridging the Gap between ANNs and SNNs by Calibrating Offset Spikes [19.85338979292052]
Spiking Neural Networks (SNNs) have attracted great attention due to their distinctive characteristics of low power consumption and temporal information processing.
ANN-SNN conversion, as the most commonly used training method for applying SNNs, can ensure that converted SNNs achieve comparable performance to ANNs on large-scale datasets.
In this paper, instead of evaluating different conversion errors and then eliminating these errors, we define an offset spike to measure the degree of deviation between actual and desired SNN firing rates.
arXiv Detail & Related papers (2023-02-21T14:10:56Z)
- Reducing ANN-SNN Conversion Error through Residual Membrane Potential [19.85338979292052]
Spiking Neural Networks (SNNs) have received extensive academic attention due to the unique properties of low power consumption and high-speed computing on neuromorphic chips.
In this paper, we make a detailed analysis of unevenness error and divide it into four categories.
We propose an optimization strategy based on residual membrane potential to reduce unevenness error.
arXiv Detail & Related papers (2023-02-04T04:44:31Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
SOTA training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN).
We propose a new training algorithm that accurately captures these distributions, minimizing the error between the DNN and converted SNN.
arXiv Detail & Related papers (2021-12-22T18:47:45Z)
- Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach greatly reduces the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z)
- Optimal ANN-SNN Conversion for Fast and Accurate Inference in Deep Spiking Neural Networks [43.046402416604245]
Spiking Neural Networks (SNNs) are bio-inspired energy-efficient neural networks.
In this paper, we theoretically analyze ANN-SNN conversion and derive sufficient conditions of the optimal conversion.
We show that the proposed method achieves near loss-less conversion with VGG-16, PreActResNet-18, and deeper structures.
arXiv Detail & Related papers (2021-05-25T04:15:06Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically use only addition operations instead of the more power-intensive multiply-and-accumulate operations (see the sketch after this list).
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
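As noted in the "You Only Spike Once" entry above, spiking layers replace multiply-and-accumulate with pure accumulation: when the inputs are binary spikes, a layer's input current is just the sum of the weight columns selected by the spikes. The snippet below is an illustrative sketch of that equivalence under assumed shapes and values, not code from any of the listed papers.

```python
# Illustrative sketch: with binary (0/1) spike inputs, a linear layer's
# pre-activation reduces to adding up the weight columns of the neurons
# that spiked -- no multiplications are needed. Shapes and values are assumptions.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(size=(4, 8))          # (n_out, n_in) weights
spikes = rng.integers(0, 2, size=8)  # binary spike vector for one time-step

mac_current = w @ spikes.astype(float)        # dense multiply-and-accumulate
acc_current = w[:, spikes == 1].sum(axis=1)   # addition-only: sum the selected columns

assert np.allclose(mac_current, acc_current)
print(acc_current)
```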