InfLoR-SNN: Reducing Information Loss for Spiking Neural Networks
- URL: http://arxiv.org/abs/2307.04356v2
- Date: Fri, 18 Aug 2023 03:28:59 GMT
- Title: InfLoR-SNN: Reducing Information Loss for Spiking Neural Networks
- Authors: Yufei Guo, Yuanpei Chen, Liwen Zhang, Xiaode Liu, Xinyi Tong, Yuanyuan
Ou, Xuhui Huang, Zhe Ma
- Abstract summary: The Spiking Neural Network (SNN) adopts binary spike signals to transmit information.
We propose a "Soft Reset" mechanism and a Membrane Potential Rectifier (MPR) for supervised training-based SNNs.
We show that SNNs with the "Soft Reset" mechanism and MPR outperform their vanilla counterparts on both static and dynamic datasets.
- Score: 26.670449517287594
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The Spiking Neural Network (SNN) has attracted increasing attention
recently. It adopts binary spike signals to transmit information. Benefiting
from this information-passing paradigm, the multiplications of activations and
weights can be replaced by additions, which are more energy-efficient. However,
the "Hard Reset" mechanism for the firing activity ignores the differences
among membrane potentials that exceed the firing threshold, causing information
loss. Meanwhile, quantizing the membrane potential to 0/1 spikes at the firing
instants inevitably introduces quantization error, a second source of
information loss. To address these problems, we propose a "Soft Reset"
mechanism for supervised training-based SNNs, which drives the membrane
potential to a dynamic reset potential according to its magnitude, and a
Membrane Potential Rectifier (MPR) that reduces the quantization error by
redistributing the membrane potential to a range close to the spikes. Results
show that SNNs with the "Soft Reset" mechanism and MPR outperform their vanilla
counterparts on both static and dynamic datasets.
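To make the two mechanisms concrete, here is a minimal sketch of a leaky integrate-and-fire step under both reset rules, plus a stand-in rectifier. The reset-by-subtraction rule and the tanh-shaped MPR below are our reading of the abstract, not the paper's exact definitions; `V_TH`, `tau`, and `k` are illustrative values.

```python
import math
import torch

V_TH = 1.0  # firing threshold (illustrative value)

def lif_step(u, x, tau=2.0, soft_reset=True):
    """One leaky integrate-and-fire step.
    u: membrane potential; x: input current. Returns (spike, new u)."""
    u = u + (x - u) / tau                # leaky integration
    spike = (u >= V_TH).float()          # binary 0/1 spike
    if soft_reset:
        # Soft Reset: subtract the threshold, so the surplus above V_TH
        # survives into the next step instead of being discarded.
        u = u - spike * V_TH
    else:
        # Hard Reset: every firing neuron is forced back to 0, erasing the
        # differences among above-threshold potentials (information loss).
        u = u * (1.0 - spike)
    return spike, u

def mpr(u, k=4.0):
    """Stand-in Membrane Potential Rectifier: a monotonic squashing of u
    toward the spike values 0 and 1, shrinking the gap (quantization error)
    between u and the emitted 0/1 spike. The paper defines its own
    rectifier; this tanh form is only illustrative."""
    return 0.5 * (torch.tanh(k * (u - 0.5)) / math.tanh(k * 0.5) + 1.0)
```

Applying `mpr` to the potential just before thresholding concentrates its distribution near 0 and 1, which is the redistribution effect the abstract describes.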
Related papers
- TBSN: Transformer-Based Blind-Spot Network for Self-Supervised Image Denoising [94.09442506816724]
Blind-spot networks (BSN) are prevalent network architectures in self-supervised image denoising (SSID).
We present a transformer-based blind-spot network (TBSN) by analyzing and redesigning the transformer operators that meet the blind-spot requirement.
For spatial self-attention, an elaborate mask is applied to the attention matrix to restrict its receptive field, thus mimicking dilated convolution (see the sketch after this entry).
For channel self-attention, we observe that it may leak the blind-spot information when the channel number is greater than the spatial size in the deep layers of multi-scale architectures.
arXiv Detail & Related papers (2024-04-11T15:39:10Z)
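TBSN's actual mask design is more elaborate than this summary reveals; the sketch below only illustrates the general idea of restricting spatial self-attention with a mask. The stride-2 lattice pattern, shapes, and names are our assumptions.

```python
import torch
import torch.nn.functional as F

def masked_spatial_attention(q, k, v, mask):
    """Spatial self-attention whose receptive field is restricted by a mask.
    q, k, v: (N, D) embeddings of the N spatial positions.
    mask:    (N, N) boolean, True where attention is forbidden."""
    attn = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    attn = attn.masked_fill(mask, float("-inf"))  # blocked pairs get zero weight
    return F.softmax(attn, dim=-1) @ v

# Illustrative mask: each position may only attend to a dilated (stride-2)
# lattice of positions around itself, mimicking a dilated convolution.
H = W = 8
ij = torch.stack(torch.meshgrid(torch.arange(H), torch.arange(W),
                                indexing="ij"), dim=-1).reshape(-1, 2)
d = (ij[:, None, :] - ij[None, :, :]).abs()
mask = ~((d[..., 0] % 2 == 0) & (d[..., 1] % 2 == 0))

x = torch.randn(H * W, 32)
out = masked_spatial_attention(x, x, x, mask)  # (64, 32)
```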
- Ternary Spike: Learning Ternary Spikes for Spiking Neural Networks [19.304952813634994]
The Spiking Neural Network (SNN) is a biologically inspired neural network architecture.
In this paper, we propose a ternary spike neuron to transmit information (see the sketch after this entry).
We show that the ternary spike can consistently outperform state-of-the-art methods.
arXiv Detail & Related papers (2023-12-11T13:28:54Z)
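A plausible reading of the ternary neuron, assuming spikes take values in {-1, 0, 1} with a symmetric threshold; the paper's exact firing rule (and its trainable-amplitude variant) may differ, so treat this as a sketch:

```python
import torch

def ternary_fire(u, v_th=1.0):
    """Ternary spike: fire +1 above v_th, -1 below -v_th, else stay silent.
    Multiplication-free inference is preserved because spike values are
    still only {-1, 0, 1} (additions and subtractions)."""
    return (u >= v_th).float() - (u <= -v_th).float()
```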
- RMP-Loss: Regularizing Membrane Potential Distribution for Spiking Neural Networks [26.003193122060697]
Spiking Neural Networks (SNNs), as biologically inspired models, have received much attention recently.
We propose a regularizing membrane potential loss (RMP-Loss) that adjusts the membrane potential distribution, which is directly related to the quantization error, to a range close to the spikes (see the sketch after this entry).
arXiv Detail & Related papers (2023-08-13T14:59:27Z)
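The paper defines RMP-Loss precisely; the penalty below is only the generic idea we infer from this summary, namely pulling each membrane potential toward the nearest spike value. The squared-distance form and the weighting `lam` are assumptions.

```python
import torch

def rmp_style_loss(u):
    """Hypothetical regularizer in the spirit of RMP-Loss: penalize each
    membrane potential by its squared distance to the nearest spike value
    (0 or 1), so the distribution concentrates near the spikes and the
    0/1 quantization error shrinks."""
    return torch.minimum(u ** 2, (u - 1.0) ** 2).mean()

# Used as an auxiliary term during training:
#   total_loss = task_loss + lam * rmp_style_loss(membrane_potentials)
```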
- NeuralFuse: Learning to Recover the Accuracy of Access-Limited Neural Network Inference in Low-Voltage Regimes [52.51014498593644]
Deep neural networks (DNNs) have become ubiquitous in machine learning, but their energy consumption remains a notable issue.
We introduce NeuralFuse, a novel add-on module that addresses the accuracy-energy tradeoff in low-voltage regimes.
At a 1% bit error rate, NeuralFuse can reduce memory access energy by up to 24% while recovering accuracy by up to 57%.
arXiv Detail & Related papers (2023-06-29T11:38:22Z)
- MSAT: Biologically Inspired Multi-Stage Adaptive Threshold for Conversion of Spiking Neural Networks [11.392893261073594]
Spiking Neural Networks (SNNs) can perform inference with low power consumption due to their spike sparsity.
ANN-SNN conversion is an efficient way to obtain deep SNNs by converting well-trained Artificial Neural Networks (ANNs).
Existing methods commonly use a constant threshold for conversion, which prevents neurons from rapidly delivering spikes to deeper layers.
arXiv Detail & Related papers (2023-03-23T07:18:08Z)
- SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends a recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z)
- Multi-Level Firing with Spiking DS-ResNet: Enabling Better and Deeper Directly-Trained Spiking Neural Networks [19.490903216456758]
Spiking neural networks (SNNs) are neural networks with asynchronous, discrete, and sparse characteristics.
We propose a multi-level firing (MLF) method based on the existing spiking dormant-suppressed residual network (spiking DS-ResNet).
arXiv Detail & Related papers (2022-10-12T16:39:46Z)
- Event-based Video Reconstruction via Potential-assisted Spiking Neural Network [48.88510552931186]
Bio-inspired neural networks can potentially lead to greater computational efficiency on event-driven hardware.
We propose a novel event-based video reconstruction framework based on a fully Spiking Neural Network (EVSNN).
We find that the spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks.
arXiv Detail & Related papers (2022-01-25T02:05:20Z)
- Backpropagated Neighborhood Aggregation for Accurate Training of Spiking Neural Networks [14.630838296680025]
We propose a novel BP-like method, called neighborhood aggregation (NA), which computes accurate error gradients guiding weight updates.
NA achieves this goal by aggregating finite differences of the loss over perturbed membrane potential waveforms in the neighborhood of the present membrane potential of each neuron (see the sketch after this entry).
Our experiments show that the proposed NA algorithm delivers state-of-the-art performance for SNN training on several datasets.
arXiv Detail & Related papers (2021-06-22T16:42:48Z)
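NA's actual aggregation rule is defined in the paper; the sketch below only shows the generic flavor of estimating a gradient by aggregating finite differences of the loss over randomly perturbed membrane potential waveforms. The perturbation scheme, `eps`, and `n_dirs` are assumptions.

```python
import torch

def neighborhood_fd_gradient(loss_fn, u, eps=1e-2, n_dirs=8):
    """Estimate d(loss)/d(u) by aggregating finite differences of the loss
    over perturbed copies of the membrane potential waveform u.
    loss_fn maps a waveform of shape (T,) to a scalar loss."""
    base = loss_fn(u)
    grad = torch.zeros_like(u)
    for _ in range(n_dirs):
        d = torch.randn_like(u)          # random direction in the neighborhood
        grad += (loss_fn(u + eps * d) - base) / eps * d
    return grad / n_dirs
```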
- Do All MobileNets Quantize Poorly? Gaining Insights into the Effect of Quantization on Depthwise Separable Convolutional Networks Through the Eyes of Multi-scale Distributional Dynamics [93.4221402881609]
MobileNets are the go-to family of deep convolutional neural networks (CNNs) for mobile applications.
They often suffer significant accuracy degradation under post-training quantization.
We study the multi-scale distributional dynamics of MobileNet-V1, a set of smaller DWSCNNs, and regular CNNs.
arXiv Detail & Related papers (2021-04-24T01:28:29Z)
- Adversarial Attacks on Deep Learning Based Power Allocation in a Massive MIMO Network [62.77129284830945]
We show that adversarial attacks can break DL-based power allocation in the downlink of a massive multiple-input-multiple-output (maMIMO) network.
We benchmark the performance of these attacks and show that, with a small perturbation of the neural network (NN) input, white-box attacks can produce infeasible power-allocation solutions up to 86% of the time.
arXiv Detail & Related papers (2021-01-28T16:18:19Z)