Converting Artificial Neural Networks to Spiking Neural Networks via
Parameter Calibration
- URL: http://arxiv.org/abs/2205.10121v1
- Date: Fri, 6 May 2022 18:22:09 GMT
- Title: Converting Artificial Neural Networks to Spiking Neural Networks via
Parameter Calibration
- Authors: Yuhang Li, Shikuang Deng, Xin Dong, Shi Gu
- Abstract summary: Spiking Neural Network (SNN) is recognized as one of the next-generation neural networks.
In this work, we argue that simply copying and pasting the weights of an ANN into an SNN inevitably results in activation mismatch.
We propose a set of layer-wise parameter calibration algorithms, which adjust the parameters to minimize the activation mismatch.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Spiking Neural Network (SNN), originating from the neural behavior in
biology, has been recognized as one of the next-generation neural networks.
Conventionally, SNNs can be obtained by converting from pre-trained Artificial
Neural Networks (ANNs) by replacing the non-linear activation with spiking
neurons without changing the parameters. In this work, we argue that simply
copying and pasting the weights of an ANN into an SNN inevitably results in activation
mismatch, especially for ANNs that are trained with batch normalization (BN)
layers. To tackle the activation mismatch issue, we first provide a theoretical
analysis by decomposing the local conversion error into clipping error and flooring
error, and then quantitatively measure how this error propagates through the
layers using a second-order analysis. Motivated by the theoretical results,
we propose a set of layer-wise parameter calibration algorithms, which adjust
the parameters to minimize the activation mismatch. Extensive experiments for
the proposed algorithms are performed on modern architectures and large-scale
tasks including ImageNet classification and MS COCO detection. We demonstrate
that our method can handle the SNN conversion with batch normalization layers
and effectively preserve high accuracy even with only 32 time steps. For example,
our calibration algorithms can improve accuracy by up to 65% when converting
VGG-16 with BN layers.
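
To make the error decomposition concrete, here is a minimal NumPy sketch (not the paper's implementation) of the rate-coded view of conversion: an integrate-and-fire neuron simulated for T time steps with firing threshold v_th approximates ReLU with a clip-and-floor function, so the local conversion error splits into a clipping term (saturation above v_th) and a flooring term (quantization to multiples of v_th / T). The function names and example inputs are illustrative; T = 32 matches the time-step budget mentioned in the abstract.

```python
import numpy as np

def ann_activation(x):
    # ReLU activation used in the pre-trained ANN.
    return np.maximum(x, 0.0)

def snn_rate_output(x, v_th=1.0, T=32):
    # Rate-coded output of an integrate-and-fire neuron simulated for T time
    # steps with firing threshold v_th: the spike count floor(x * T / v_th)
    # is clipped to [0, T], then rescaled by v_th / T.
    spike_count = np.clip(np.floor(x * T / v_th), 0, T)
    return spike_count * v_th / T

# The local conversion error decomposes into:
#   * clipping error: inputs above v_th saturate at v_th,
#   * flooring error: outputs are quantized to multiples of v_th / T.
x = np.linspace(-0.2, 1.3, 6)
print("input        :", x)
print("ANN (ReLU)   :", ann_activation(x))
print("SNN (T = 32) :", snn_rate_output(x))
print("mismatch     :", snn_rate_output(x) - ann_activation(x))
```

Copying weights unchanged makes the converted network compute this clip-floor function wherever the ANN used ReLU; the layer-wise calibration described in the abstract adjusts the parameters so that the SNN activations re-match the ANN ones, which matters most when folded BN layers shift the pre-activation statistics.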
Related papers
- LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks
with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
arXiv Detail & Related papers (2023-10-23T14:26:16Z) - Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - Variational Tensor Neural Networks for Deep Learning [0.0]
We propose an integration of tensor networks (TN) into deep neural networks (NNs).
This, in turn, results in a scalable tensor neural network (TNN) architecture capable of efficient training over a large parameter space.
We validate the accuracy and efficiency of our method by designing TNN models and providing benchmark results for linear and non-linear regressions, data classification and image recognition on MNIST handwritten digits.
arXiv Detail & Related papers (2022-11-26T20:24:36Z) - Ultra-low Latency Adaptive Local Binary Spiking Neural Network with
Accuracy Loss Estimator [4.554628904670269]
We propose an ultra-low latency adaptive local binary spiking neural network (ALBSNN) with accuracy loss estimators.
Experimental results show that this method can reduce storage space by more than 20% without losing network accuracy.
arXiv Detail & Related papers (2022-07-31T09:03:57Z) - Parameter Convex Neural Networks [13.42851919291587]
We propose the exponential multilayer neural network (EMLP) which is convex with regard to the parameters of the neural network under some conditions.
In further experiments, we use the same architecture to build the exponential graph convolutional network (EGCN) and evaluate it on a graph classification dataset.
arXiv Detail & Related papers (2022-06-11T16:44:59Z) - Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach can greatly reduce the training time and the number of parameters, which helps scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z) - Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for
Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z) - A Free Lunch From ANN: Towards Efficient, Accurate Spiking Neural
Networks Calibration [11.014383784032084]
Spiking Neural Network (SNN) has been recognized as one of the next generation of neural networks.
We show that a proper way to calibrate the parameters during the conversion of ANN to SNN can bring significant improvements.
arXiv Detail & Related papers (2021-06-13T13:20:12Z) - LocalDrop: A Hybrid Regularization for Deep Neural Networks [98.30782118441158]
We propose a new approach for the regularization of neural networks by the local Rademacher complexity called LocalDrop.
A new regularization function for both fully-connected networks (FCNs) and convolutional neural networks (CNNs) has been developed based on the proposed upper bound of the local Rademacher complexity.
arXiv Detail & Related papers (2021-03-01T03:10:11Z) - Optimal Conversion of Conventional Artificial Neural Networks to Spiking
Neural Networks [0.0]
Spiking neural networks (SNNs) are biology-inspired artificial neural networks (ANNs).
We propose a novel strategic pipeline that transfers the weights to the target SNN by combining threshold balancing and soft-reset mechanisms (see the soft-reset sketch after this list).
Our method is promising for deployment on embedded platforms, offering better support for SNNs under limited energy and memory budgets.
arXiv Detail & Related papers (2021-02-28T12:04:22Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
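
The entry on Optimal Conversion of Conventional Artificial Neural Networks to Spiking Neural Networks above mentions threshold balancing and soft-reset mechanisms. Below is a generic, illustrative sketch of a soft-reset integrate-and-fire neuron (assumed behavior, not code from any listed paper): on each spike the threshold is subtracted from the membrane potential rather than resetting it to zero, so residual charge carries over and the firing rate tracks the analog input more closely.

```python
def if_neuron_soft_reset(input_currents, v_th=1.0):
    # Integrate-and-fire neuron with a soft (subtractive) reset: when the
    # membrane potential v crosses the threshold v_th, v_th is subtracted
    # instead of resetting v to zero, so leftover charge is not discarded.
    # Threshold balancing would pick v_th per layer, e.g. from the maximum
    # ANN pre-activation seen on calibration data (illustrative assumption).
    v = 0.0
    spikes = []
    for x in input_currents:   # one input current per time step
        v += x                 # integrate
        if v >= v_th:
            spikes.append(1)
            v -= v_th          # soft reset
        else:
            spikes.append(0)
    return spikes

# A constant input of 0.3 with v_th = 1.0 fires 3 times in 10 steps,
# so the firing rate (0.3) tracks the analog input value.
print(if_neuron_soft_reset([0.3] * 10))  # [0, 0, 0, 1, 0, 0, 1, 0, 0, 1]
```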
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.