RMP-Loss: Regularizing Membrane Potential Distribution for Spiking
Neural Networks
- URL: http://arxiv.org/abs/2308.06787v1
- Date: Sun, 13 Aug 2023 14:59:27 GMT
- Title: RMP-Loss: Regularizing Membrane Potential Distribution for Spiking
Neural Networks
- Authors: Yufei Guo, Xiaode Liu, Yuanpei Chen, Liwen Zhang, Weihang Peng, Yuhan
Zhang, Xuhui Huang, Zhe Ma
- Abstract summary: Spiking Neural Networks (SNNs), as one of the biology-inspired models, have received much attention recently.
We propose a regularizing membrane potential loss (RMP-Loss) to adjust the membrane potential distribution, which is directly related to the quantization error, to a range close to the spikes.
- Score: 26.003193122060697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking Neural Networks (SNNs), as one of the biology-inspired models, have
received much attention recently. They can significantly reduce energy
consumption since they quantize real-valued membrane potentials to 0/1
spikes to transmit information, so the multiplications of activations and
weights can be replaced by additions when implemented on hardware. However,
this quantization mechanism inevitably introduces quantization error and
thus causes catastrophic information loss. To address the quantization-error
problem, we propose a regularizing membrane potential loss (RMP-Loss) to
adjust the membrane potential distribution, which is directly related to the
quantization error, to a range close to the spikes. Our method is extremely
simple to implement, and training an SNN with it is straightforward.
Furthermore, it consistently outperforms previous state-of-the-art methods
across different network architectures and datasets.
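The abstract's core idea, pulling membrane potentials toward the spike levels so that binarizing them loses less information, can be illustrated with a minimal NumPy sketch. This is a hypothetical regularizer that penalizes each potential's squared distance to the nearer of the two quantization targets (0 and the firing threshold); the exact loss form used in the paper may differ.

```python
import numpy as np

def rmp_loss(membrane_potentials, threshold=1.0):
    """Illustrative membrane-potential regularizer (not the paper's exact
    formula): penalize potentials that lie far from the two quantization
    targets, 0 and the firing threshold. Potentials near a spike level
    incur little quantization error when binarized to 0/1 spikes;
    potentials midway between them incur the most."""
    u = np.asarray(membrane_potentials, dtype=float)
    # squared distance to the nearer spike level
    dist = np.minimum(u ** 2, (u - threshold) ** 2)
    return dist.mean()
```

Under this sketch, potentials sitting exactly on a spike level contribute zero loss, while a potential halfway between (e.g. 0.5 with threshold 1.0) contributes the maximum 0.25; added to the task loss, such a term would push the distribution toward the spike values during training.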
Related papers
- Deep Neural Networks Tend To Extrapolate Predictably [51.303814412294514]
Neural network predictions are commonly thought to be unpredictable and overconfident when faced with out-of-distribution (OOD) inputs.
We observe that neural network predictions often tend towards a constant value as input data becomes increasingly OOD.
We show how one can leverage our insights in practice to enable risk-sensitive decision-making in the presence of OOD inputs.
arXiv Detail & Related papers (2023-10-02T03:25:32Z) - SPFQ: A Stochastic Algorithm and Its Error Analysis for Neural Network
Quantization [5.982922468400901]
We show that it is possible to achieve error bounds equivalent to that obtained in the order of the weights of a neural layer.
We prove that it is possible to achieve full-network bounds under an infinite alphabet and minimal assumptions on the input data.
arXiv Detail & Related papers (2023-09-20T00:35:16Z) - InfLoR-SNN: Reducing Information Loss for Spiking Neural Networks [26.670449517287594]
Spiking Neural Networks (SNNs) adopt binary spike signals to transmit information.
We propose to use the "Soft Reset" mechanism for supervised-training-based SNNs.
We show that the SNNs with the "Soft Reset" mechanism and MPR outperform their vanilla counterparts on both static and dynamic datasets.
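A common reading of "Soft Reset" in the SNN literature is that, after a neuron fires, the threshold is subtracted from the membrane potential instead of the potential being forced back to zero, so the residual charge above the threshold is retained. A minimal NumPy sketch under that assumption (the decay constant and threshold here are illustrative, not taken from the paper):

```python
import numpy as np

def lif_step(u, x, threshold=1.0, decay=0.5, soft_reset=True):
    """One step of a leaky integrate-and-fire neuron.
    Hard reset: the membrane potential returns to 0 after a spike,
    discarding any charge that exceeded the threshold.
    Soft reset: the threshold is subtracted instead, so the residual
    potential carries over and less information is lost."""
    u = decay * np.asarray(u, dtype=float) + x  # leaky integration
    spike = (u >= threshold).astype(float)      # fire where u crosses threshold
    if soft_reset:
        u = u - spike * threshold               # subtract threshold on spike
    else:
        u = u * (1.0 - spike)                   # reset to zero on spike
    return u, spike
```

For example, an input of 1.5 against a threshold of 1.0 leaves a residual of 0.5 under soft reset, whereas a hard reset would discard it.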
arXiv Detail & Related papers (2023-07-10T05:49:20Z) - CoNLoCNN: Exploiting Correlation and Non-Uniform Quantization for
Energy-Efficient Low-precision Deep Convolutional Neural Networks [13.520972975766313]
We propose a framework to enable energy-efficient low-precision deep convolutional neural network inference by exploiting non-uniform quantization of weights.
We also propose a novel data representation format, Encoded Low-Precision Binary Signed Digit, to compress the bit-width of weights.
arXiv Detail & Related papers (2022-07-31T01:34:56Z) - BiTAT: Neural Network Binarization with Task-dependent Aggregated
Transformation [116.26521375592759]
Quantization aims to transform high-precision weights and activations of a given neural network into low-precision weights/activations for reduced memory usage and computation.
Extreme quantization (1-bit weight/1-bit activations) of compactly-designed backbone architectures results in severe performance degeneration.
This paper proposes a novel Quantization-Aware Training (QAT) method that can effectively alleviate performance degeneration.
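The "extreme quantization" regime this entry refers to can be sketched with the standard 1-bit sign binarization of weights with a per-tensor scale. This is generic background for context only; BiTAT's task-dependent aggregated transformation itself is not shown here.

```python
import numpy as np

def binarize(w):
    """Illustrative 1-bit weight quantization: each weight is mapped to
    its sign, and a single per-tensor scale (the mean absolute value,
    which minimizes the L2 reconstruction error for sign codes) restores
    the overall magnitude."""
    w = np.asarray(w, dtype=float)
    alpha = np.mean(np.abs(w))          # per-tensor scaling factor
    return alpha * np.sign(w), alpha
```

Replacing full-precision weights with such sign codes shrinks storage roughly 32x, but, as the entry notes, applying it naively to compact backbones degrades accuracy severely, which is the gap QAT methods like the one proposed here aim to close.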
arXiv Detail & Related papers (2022-07-04T13:25:49Z) - Post-training Quantization for Neural Networks with Provable Guarantees [9.58246628652846]
We modify a post-training neural-network quantization method, GPFQ, that is based on a greedy path-following mechanism.
We prove that for quantizing a single-layer network, the relative square error essentially decays linearly in the number of weights.
arXiv Detail & Related papers (2022-01-26T18:47:38Z) - Cluster-Promoting Quantization with Bit-Drop for Minimizing Network
Quantization Loss [61.26793005355441]
Cluster-Promoting Quantization (CPQ) finds the optimal quantization grids for neural networks.
DropBits is a new bit-drop technique that revises the standard dropout regularization to randomly drop bits instead of neurons.
We experimentally validate our method on various benchmark datasets and network architectures.
arXiv Detail & Related papers (2021-09-05T15:15:07Z) - SignalNet: A Low Resolution Sinusoid Decomposition and Estimation
Network [79.04274563889548]
We propose SignalNet, a neural network architecture that detects the number of sinusoids and estimates their parameters from quantized in-phase and quadrature samples.
We introduce a worst-case learning threshold for comparing the results of our network relative to the underlying data distributions.
In simulation, we find that our algorithm is always able to surpass the threshold for three-bit data but often cannot exceed the threshold for one-bit data.
arXiv Detail & Related papers (2021-06-10T04:21:20Z) - Do All MobileNets Quantize Poorly? Gaining Insights into the Effect of
Quantization on Depthwise Separable Convolutional Networks Through the Eyes
of Multi-scale Distributional Dynamics [93.4221402881609]
MobileNets are the go-to family of deep convolutional neural networks (CNNs) for mobile applications.
They often suffer significant accuracy degradation under post-training quantization.
We study the multi-scale distributional dynamics of MobileNet-V1, a set of smaller DWSCNNs, and regular CNNs.
arXiv Detail & Related papers (2021-04-24T01:28:29Z) - An Introduction to Robust Graph Convolutional Networks [71.68610791161355]
We propose a novel Robust Graph Convolutional Network for possibly erroneous single-view or multi-view data.
By incorporating extra layers via autoencoders into traditional graph convolutional networks, we characterize and handle typical error models explicitly.
arXiv Detail & Related papers (2021-03-27T04:47:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.