Membrane Potential Batch Normalization for Spiking Neural Networks
- URL: http://arxiv.org/abs/2308.08359v1
- Date: Wed, 16 Aug 2023 13:32:03 GMT
- Title: Membrane Potential Batch Normalization for Spiking Neural Networks
- Authors: Yufei Guo, Yuhan Zhang, Yuanpei Chen, Weihang Peng, Xiaode Liu, Liwen
Zhang, Xuhui Huang, Zhe Ma
- Abstract summary: Spiking neural networks (SNNs) have gained increasing interest recently.
To train deep models, several effective batch normalization (BN) techniques have been proposed for SNNs.
We propose adding another BN layer, called MPBN, before the firing function to normalize the membrane potential again.
- Score: 26.003193122060697
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As an energy-efficient alternative to conventional neural networks
(CNNs), spiking neural networks (SNNs) have gained increasing interest
recently. To train deep models, several effective batch normalization (BN)
techniques have been proposed for SNNs. All of these BN layers are placed after
the convolution layer, as is usual in CNNs. However, the spiking neuron is much
more complex, with spatio-temporal dynamics: the data flow regulated by the BN
layer is disturbed again by the membrane potential update that precedes the
firing function, i.e., the nonlinear activation. Therefore, we advocate adding
another BN layer before the firing function to normalize the membrane potential
again; we call it MPBN. To eliminate the time cost induced by MPBN, we also
propose a training-inference-decoupled re-parameterization technique that folds
the trained MPBN into the firing threshold, so MPBN introduces no extra time
burden at inference. Furthermore, MPBN can also adopt an element-wise form,
whereas BN layers placed after the convolution layer can only use the
channel-wise form. Experimental results show that the proposed MPBN performs
well on both popular non-spiking static datasets and neuromorphic datasets. Our
code is open-sourced at https://github.com/yfguo91/MPBN.
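
To make the idea concrete, below is a minimal sketch of MPBN for a LIF-style spiking neuron in PyTorch. It is not the authors' implementation (see https://github.com/yfguo91/MPBN for the official code): the names MPBNNeuron and fold_mpbn_into_threshold are illustrative, the neuron model is a simplified hard-reset LIF, and the folding formula assumes a positive BN scale for simplicity.

```python
# Minimal MPBN sketch, assuming a hard-reset LIF neuron and channel-wise BN.
import torch
import torch.nn as nn


class MPBNNeuron(nn.Module):
    """LIF-style neuron that batch-normalizes the membrane potential
    right before the firing (Heaviside) function."""

    def __init__(self, channels: int, tau: float = 2.0, v_th: float = 1.0):
        super().__init__()
        self.tau = tau    # membrane decay constant
        self.v_th = v_th  # firing threshold
        # Channel-wise MPBN; an element-wise variant is also possible.
        self.mpbn = nn.BatchNorm2d(channels)

    def forward(self, x_seq: torch.Tensor) -> torch.Tensor:
        # x_seq: (T, B, C, H, W), pre-synaptic input from Conv + BN.
        v = torch.zeros_like(x_seq[0])
        spikes = []
        for t in range(x_seq.shape[0]):
            v = v + (x_seq[t] - v) / self.tau  # membrane potential update
            u = self.mpbn(v)                   # normalize the membrane potential (MPBN)
            s = (u >= self.v_th).float()       # firing function (surrogate gradient omitted)
            spikes.append(s)
            v = v * (1.0 - s)                  # hard reset after a spike
        return torch.stack(spikes)


def fold_mpbn_into_threshold(neuron: MPBNNeuron) -> torch.Tensor:
    """Re-parameterization sketch: fold the trained MPBN statistics into a
    per-channel firing threshold so no BN is computed at inference.
    BN(v) >= v_th  <=>  v >= mu + (v_th - beta) * sqrt(var + eps) / gamma
    (assuming gamma > 0; the negative-scale case is omitted here)."""
    bn = neuron.mpbn
    gamma, beta = bn.weight.detach(), bn.bias.detach()
    mu, var, eps = bn.running_mean, bn.running_var, bn.eps
    return mu + (neuron.v_th - beta) * torch.sqrt(var + eps) / gamma
```

At inference one would then compare the raw membrane potential against the folded per-channel thresholds (broadcast over the channel dimension) instead of running the BN layer, so the normalization adds no extra cost at test time.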
Related papers
- Which Frequencies do CNNs Need? Emergent Bottleneck Structure in Feature
Learning [12.351756386062291]
We describe the emergence of a Convolution Bottleneck structure in CNNs.
We define the CBN rank, which describes the number and type of frequencies that are kept inside the bottleneck.
We show that any network with almost optimal parameter norm will exhibit a CBN structure in its weights.
arXiv Detail & Related papers (2024-02-12T19:18:50Z) - An Adaptive Batch Normalization in Deep Learning [0.0]
Batch Normalization (BN) is a way to accelerate and stabilize training in deep convolutional neural networks.
We propose a threshold-based adaptive BN approach that separates data that requires BN from data that does not.
arXiv Detail & Related papers (2022-11-03T12:12:56Z) - Using the Projected Belief Network at High Dimensions [13.554038901140949]
The projected belief network (PBN) is a layered generative network (LGN) with a tractable likelihood function.
We apply the discriminatively aligned PBN to classifying and auto-encoding high-dimensional spectrograms of acoustic events.
arXiv Detail & Related papers (2022-04-25T19:54:52Z) - Event-based Video Reconstruction via Potential-assisted Spiking Neural
Network [48.88510552931186]
Bio-inspired neural networks can potentially lead to greater computational efficiency on event-driven hardware.
We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN)
We find that the spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks.
arXiv Detail & Related papers (2022-01-25T02:05:20Z) - Batch Normalization Preconditioning for Neural Network Training [7.709342743709842]
Batch normalization (BN) is a popular and ubiquitous method in deep learning.
BN is not suitable for use with very small mini-batch sizes or online learning.
We propose a new method called Batch Normalization Preconditioning (BNP)
arXiv Detail & Related papers (2021-08-02T18:17:26Z) - "BNN - BN = ?": Training Binary Neural Networks without Batch
Normalization [92.23297927690149]
Batch normalization (BN) is a key facilitator and considered essential for state-of-the-art binary neural networks (BNNs).
We extend their framework to training BNNs, and for the first time demonstrate that BN can be completely removed from BNN training and inference regimes.
arXiv Detail & Related papers (2021-04-16T16:46:57Z) - Batch Normalization with Enhanced Linear Transformation [73.9885755599221]
Properly enhancing a linear transformation module can effectively improve the ability of batch normalization (BN).
Our method, named BNET, can be implemented with 2-3 lines of code in most deep learning libraries.
We verify that BNET accelerates the convergence of network training and enhances spatial information by assigning larger weights to important neurons.
arXiv Detail & Related papers (2020-11-28T15:42:36Z) - MimicNorm: Weight Mean and Last BN Layer Mimic the Dynamic of Batch
Normalization [60.36100335878855]
We propose a novel normalization method, named MimicNorm, to improve the convergence and efficiency in network training.
We leverage neural tangent kernel (NTK) theory to prove that our weight mean operation whitens activations and transits the network into the chaotic regime, like a BN layer.
MimicNorm achieves similar accuracy for various network structures, including ResNets and lightweight networks like ShuffleNet, with a reduction of about 20% memory consumption.
arXiv Detail & Related papers (2020-10-19T07:42:41Z) - PowerNorm: Rethinking Batch Normalization in Transformers [96.14956636022957]
The standard normalization method for neural network (NN) models used in Natural Language Processing (NLP) is layer normalization (LN).
LN is preferred due to the empirical observation that a (naive/vanilla) use of BN leads to significant performance degradation for NLP tasks.
We propose Power Normalization (PN), a novel normalization scheme that resolves this issue.
arXiv Detail & Related papers (2020-03-17T17:50:26Z) - How Does BN Increase Collapsed Neural Network Filters? [34.886702335022015]
Filter collapse is common in deep neural networks (DNNs) with batch normalization (BN) and rectified linear activation functions (e.g., ReLU, Leaky ReLU).
We propose a simple yet effective approach named post-shifted BN (psBN), which has the same representation ability as BN while being able to automatically make BN parameters trainable again as they saturate during training.
arXiv Detail & Related papers (2020-01-30T09:00:08Z) - Towards Stabilizing Batch Statistics in Backward Propagation of Batch
Normalization [126.6252371899064]
Moving Average Batch Normalization (MABN) is a novel normalization method.
We show that MABN can completely restore the performance of vanilla BN in small batch cases.
Our experiments demonstrate the effectiveness of MABN in multiple computer vision tasks including ImageNet and COCO.
arXiv Detail & Related papers (2020-01-19T14:41:22Z)