Spiking Network Initialisation and Firing Rate Collapse
- URL: http://arxiv.org/abs/2305.08879v1
- Date: Sat, 13 May 2023 10:11:00 GMT
- Title: Spiking Network Initialisation and Firing Rate Collapse
- Authors: Nicolas Perez-Nieves and Dan F.M. Goodman
- Abstract summary: It is unclear what constitutes a good initialisation for a spiking neural network (SNN).
We show that weight initialisation is a more nuanced problem for SNNs than for ANNs due to the spike-and-reset non-linearity of SNNs.
We devise a general strategy for SNN initialisation which combines variance propagation techniques from ANNs and different methods to obtain the expected firing rate and membrane potential distribution.
- Score: 3.7057859167913456
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In recent years, newly developed methods to train spiking neural networks
(SNNs) have rendered them as a plausible alternative to Artificial Neural
Networks (ANNs) in terms of accuracy, while at the same time being much more
energy efficient at inference and potentially at training time. However, it is
still unclear what constitutes a good initialisation for an SNN. Initialisation
schemes developed for ANN training are commonly used instead, but they are often
inadequate and require manual tuning. In this paper, we attempt to tackle this issue by
using techniques from the ANN initialisation literature as well as
computational neuroscience results. We show that weight initialisation is a more
nuanced problem for SNNs than for ANNs due to the spike-and-reset non-linearity of
SNNs and the firing rate collapse problem.
We first identify the firing rate collapse problem and propose several solutions
under different sets of assumptions, leveraging classical random walk and Wiener
process results. Secondly, we
devise a general strategy for SNN initialisation which combines variance
propagation techniques from ANNs and different methods to obtain the expected
firing rate and membrane potential distribution based on diffusion and
shot-noise approximations. Altogether, we obtain theoretical results for SNN
initialisation which take into account the membrane potential distribution in the
presence of a threshold. Yet, to what extent these methods can be successfully
applied to SNNs on real datasets remains an open question.
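As a rough illustration of the variance-propagation idea described in the abstract, the sketch below (not the authors' code) scales the feedforward weights into a layer of leaky integrate-and-fire neurons so that the summed input current per time step has a chosen mean and standard deviation, and then checks the resulting firing rate by simulation rather than through the diffusion or shot-noise approximations. All parameter values and helper names (init_weights, lif_step, the target statistics) are hypothetical.

```python
import numpy as np

# Minimal sketch of a variance-propagation-style initialisation for one layer of
# current-based LIF neurons driven by Poisson inputs. Assumptions (not from the
# paper): Euler integration, hard reset, hypothetical target input statistics.

def init_weights(n_in, n_out, nu_in, dt, mu_target, sigma_target, rng):
    """Draw Gaussian weights so the summed input current per time step has
    mean ~ mu_target and std ~ sigma_target (a fluctuation-driven regime)."""
    p = nu_in * dt                                  # spike probability per input per step
    mu_w = mu_target / (n_in * p)                   # mean weight giving the desired mean drive
    var_w = max(sigma_target**2 / (n_in * p) - mu_w**2 * (1.0 - p), 0.0)
    return rng.normal(mu_w, np.sqrt(var_w), size=(n_in, n_out))

def lif_step(v, i_in, tau_m=20e-3, dt=1e-3, v_th=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire neuron with hard reset."""
    v = v + (dt / tau_m) * (-v) + i_in
    spikes = v >= v_th
    return np.where(spikes, v_reset, v), spikes

rng = np.random.default_rng(0)
n_in, n_out, nu_in, dt, steps = 200, 100, 10.0, 1e-3, 5000
w = init_weights(n_in, n_out, nu_in, dt, mu_target=0.03, sigma_target=0.06, rng=rng)

v, spike_count = np.zeros(n_out), np.zeros(n_out)
for _ in range(steps):
    s_in = (rng.random(n_in) < nu_in * dt).astype(float)   # Poisson input spikes
    v, s_out = lif_step(v, s_in @ w)
    spike_count += s_out

print("empirical mean output rate (Hz):", spike_count.mean() / (steps * dt))
```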
Related papers
- Deep activity propagation via weight initialization in spiking neural networks [10.69085409825724]
Spiking Neural Networks (SNNs) offer bio-inspired advantages such as sparsity and ultra-low power consumption.
Deep SNNs process and transmit information by quantizing the real-valued membrane potentials into binary spikes.
We show theoretically that, unlike standard approaches, this method enables the propagation of activity in deep SNNs without loss of spikes.
arXiv Detail & Related papers (2024-10-01T11:02:34Z) - Converting High-Performance and Low-Latency SNNs through Explicit Modelling of Residual Error in ANNs [27.46147049872907]
Spiking neural networks (SNNs) have garnered interest due to their energy efficiency and superior effectiveness on neuromorphic chips.
One of the mainstream approaches to implementing deep SNNs is the ANN-SNN conversion.
We propose a new approach based on explicit modeling of residual errors as additive noise.
arXiv Detail & Related papers (2024-04-26T14:50:46Z) - Bayesian Neural Networks with Domain Knowledge Priors [52.80929437592308]
We propose a framework for integrating general forms of domain knowledge into a BNN prior.
We show that BNNs using our proposed domain knowledge priors outperform those with standard priors.
arXiv Detail & Related papers (2024-02-20T22:34:53Z) - High-performance deep spiking neural networks with 0.3 spikes per neuron [9.01407445068455]
It is harder to train biologically-inspired spiking neural networks (SNNs) than artificial neural networks (ANNs).
We show that training deep SNN models achieves exactly the same performance as ANNs.
Our SNN accomplishes high-performance classification with fewer than 0.3 spikes per neuron, lending itself to an energy-efficient implementation.
arXiv Detail & Related papers (2023-06-14T21:01:35Z) - SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural
Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z) - Fluctuation-driven initialization for spiking neural network training [3.976291254896486]
Spiking neural networks (SNNs) underlie low-power, fault-tolerant information processing in the brain.
We develop a general strategy for SNNs inspired by the fluctuation-driven regime commonly observed in the brain.
arXiv Detail & Related papers (2022-06-21T09:48:49Z) - Training High-Performance Low-Latency Spiking Neural Networks by
Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Spatial-Temporal-Fusion BNN: Variational Bayesian Feature Layer [77.78479877473899]
We design a spatial-temporal-fusion BNN for efficiently scaling BNNs to large models.
Compared to vanilla BNNs, our approach greatly reduces the training time and the number of parameters, helping to scale BNNs efficiently.
arXiv Detail & Related papers (2021-12-12T17:13:14Z) - Direct Training via Backpropagation for Ultra-low Latency Spiking Neural
Networks with Multi-threshold [3.286515597773624]
Spiking neural networks (SNNs) can utilize temporal information and are naturally energy efficient.
We propose a novel training method based on backpropagation (BP) for ultra-low latency (1-2 time steps) SNNs with a multi-threshold model.
Our proposed method achieves an average accuracy of 99.56%, 93.08%, and 87.90% on MNIST, FashionMNIST, and CIFAR10, respectively with only 2 time steps.
arXiv Detail & Related papers (2021-11-25T07:04:28Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.