Rethinking the role of normalization and residual blocks for spiking
neural networks
- URL: http://arxiv.org/abs/2203.01544v1
- Date: Thu, 3 Mar 2022 07:13:39 GMT
- Title: Rethinking the role of normalization and residual blocks for spiking
neural networks
- Authors: Shin-ichi Ikegawa, Ryuji Saiin, Yoshihide Sawada, Naotake Natori
- Abstract summary: Spiking neural networks (SNNs) are widely used to realize ultralow-power energy consumption.
Deep SNNs are not easy to train due to the excessive firing of spiking neurons in the hidden layers.
We propose a novel but simple normalization technique called postsynaptic potential normalization.
- Score: 1.0386786451091783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Biologically inspired spiking neural networks (SNNs) are widely used to
realize ultralow-power energy consumption. However, deep SNNs are not easy to
train due to the excessive firing of spiking neurons in the hidden layers. To
tackle this problem, we propose a novel but simple normalization technique
called postsynaptic potential normalization. This normalization removes the
subtraction term from the standard normalization and uses the second raw moment
instead of the variance as the division term. By applying this simple
normalization to the postsynaptic potential, the spike firing can be
controlled, enabling training to proceed appropriately. The experimental
results show that SNNs with our normalization outperformed models trained with
other normalization methods. Furthermore, with pre-activation residual blocks,
the proposed model can be trained with more than 100 layers without any other
special techniques dedicated to SNNs.
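Based only on the description above, the normalization can be read as dividing the postsynaptic potential by the square root of its second raw moment, with no mean-subtraction term, before the spiking nonlinearity. The following PyTorch sketch illustrates that reading together with a pre-activation residual block; the normalization axes, the learnable scale, the epsilon value, and the rectangular surrogate-gradient spike function are assumptions for illustration, not details taken from the paper.

```python
# Minimal sketch of the normalization described in the abstract: no mean
# subtraction, and the second raw moment E[x^2] replaces the variance as the
# division term. Axis choices, the learnable scale, eps, and the surrogate
# gradient are assumptions, not details from the paper.
import torch
import torch.nn as nn


class PSPNorm(nn.Module):
    """Normalize the postsynaptic potential by its second raw moment."""

    def __init__(self, num_channels: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        # Learnable per-channel scale only; the subtraction (shift) term is
        # removed, matching the abstract's description. (Assumed affine form.)
        self.gamma = nn.Parameter(torch.ones(1, num_channels, 1, 1))

    def forward(self, psp: torch.Tensor) -> torch.Tensor:
        # Second raw moment E[x^2] over batch and spatial dimensions.
        second_moment = psp.pow(2).mean(dim=(0, 2, 3), keepdim=True)
        return self.gamma * psp / torch.sqrt(second_moment + self.eps)


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (assumed)."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()


class PreActSpikingBlock(nn.Module):
    """Pre-activation residual block: norm -> spike -> conv, plus skip."""

    def __init__(self, channels: int):
        super().__init__()
        self.norm1 = PSPNorm(channels)
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.norm2 = PSPNorm(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.conv1(SpikeFn.apply(self.norm1(x)))
        out = self.conv2(SpikeFn.apply(self.norm2(out)))
        return x + out  # identity skip keeps the residual path spike-free


if __name__ == "__main__":
    block = PreActSpikingBlock(16)
    psp = torch.randn(4, 16, 8, 8)  # a single time step, for simplicity
    print(block(psp).shape)  # torch.Size([4, 16, 8, 8])
```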
Related papers
- Unsupervised Adaptive Normalization [0.07499722271664146]
Unsupervised Adaptive Normalization (UAN) is an innovative algorithm that seamlessly integrates clustering for normalization with deep neural network learning.
UAN outperforms classical methods by adapting to the target task and is effective in classification and domain adaptation.
arXiv Detail & Related papers (2024-09-07T08:14:11Z) - Globally Optimal Training of Neural Networks with Threshold Activation
Functions [63.03759813952481]
We study weight decay regularized training problems of deep neural networks with threshold activations.
We derive a simplified convex optimization formulation when the dataset can be shattered at a certain layer of the network.
arXiv Detail & Related papers (2023-03-06T18:59:13Z) - SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural
Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z) - Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Networks (SNNs) are promising energy-efficient AI models when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Training Thinner and Deeper Neural Networks: Jumpstart Regularization [2.8348950186890467]
We use regularization to prevent neurons from dying or becoming linear.
In comparison to conventional training, we obtain neural networks that are thinner, deeper, and - most importantly - more parameter-efficient.
arXiv Detail & Related papers (2022-01-30T12:11:24Z) - Improving Surrogate Gradient Learning in Spiking Neural Networks via
Regularization and Normalization [0.0]
Spiking neural networks (SNNs) are different from the classical networks used in deep learning.
SNNs are appealing for AI technology because they could be implemented on low-power neuromorphic chips.
arXiv Detail & Related papers (2021-12-13T15:24:33Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Correct Normalization Matters: Understanding the Effect of Normalization
On Deep Neural Network Models For Click-Through Rate Prediction [3.201333208812837]
We propose a new and effective normalization approach based on LayerNorm, named variance-only LayerNorm (VO-LN), in this work (a minimal sketch of this idea appears after this list).
We find that the variance term of normalization plays the main role and give an explanation for this in this work.
arXiv Detail & Related papers (2020-06-23T04:35:22Z) - Optimization Theory for ReLU Neural Networks Trained with Normalization
Layers [82.61117235807606]
The success of deep neural networks is in part due to the use of normalization layers.
Our analysis shows how the introduction of normalization changes the optimization landscape and can enable faster convergence.
arXiv Detail & Related papers (2020-06-11T23:55:54Z)
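For the variance-only LayerNorm (VO-LN) entry above, a plausible reading is that LayerNorm's re-scaling by the variance is kept while the mean-subtraction step is dropped. The sketch below encodes that reading; the affine parameters, feature-dimension statistics, and epsilon value are assumptions rather than details from that paper.

```python
# Hedged sketch of a variance-only LayerNorm: re-scaling by the variance is
# kept, but the input is not mean-centered. Affine parameters, last-dim
# statistics, and eps are assumptions for illustration.
import torch
import torch.nn as nn


class VOLayerNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        self.gamma = nn.Parameter(torch.ones(dim))
        self.beta = nn.Parameter(torch.zeros(dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Variance over the feature dimension; x itself is not centered.
        var = x.var(dim=-1, keepdim=True, unbiased=False)
        return self.gamma * x / torch.sqrt(var + self.eps) + self.beta
```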
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.