SparseProp: Efficient Event-Based Simulation and Training of Sparse
Recurrent Spiking Neural Networks
- URL: http://arxiv.org/abs/2312.17216v1
- Date: Thu, 28 Dec 2023 18:48:10 GMT
- Title: SparseProp: Efficient Event-Based Simulation and Training of Sparse
Recurrent Spiking Neural Networks
- Authors: Rainer Engelken
- Abstract summary: Spiking Neural Networks (SNNs) are biologically-inspired models that are capable of processing information in streams of action potentials.
We introduce SparseProp, a novel event-based algorithm for simulating and training sparse SNNs.
- Score: 4.532517021515834
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) are biologically-inspired models that are
capable of processing information in streams of action potentials. However,
simulating and training SNNs is computationally expensive due to the need to
solve large systems of coupled differential equations. In this paper, we
introduce SparseProp, a novel event-based algorithm for simulating and training
sparse SNNs. Our algorithm reduces the computational cost of both the forward
and backward pass operations from O(N) to O(log(N)) per network spike, thereby
enabling numerically exact simulations of large spiking networks and their
efficient training using backpropagation through time. By leveraging the
sparsity of the network, SparseProp eliminates the need to iterate through all
neurons at each spike, employing efficient state updates instead. We
demonstrate the efficacy of SparseProp across several classical
integrate-and-fire neuron models, including a simulation of a sparse SNN with
one million LIF neurons. This results in a speed-up exceeding four orders of
magnitude relative to previous event-based implementations. Our work provides
an efficient and exact solution for training large-scale spiking neural
networks and opens up new possibilities for building more sophisticated
brain-inspired models.
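The O(log N)-per-spike claim rests on a standard event-driven pattern: for integrate-and-fire neurons with analytically solvable dynamics, each neuron's next threshold crossing can be computed in closed form, and a priority queue keyed by next spike time replaces the O(N) per-spike sweep with heap operations. Below is a minimal, illustrative Python sketch of that pattern only; it is not the authors' SparseProp implementation (which additionally uses a change of variables so that neuron states need no update between spikes), and all names and parameters here (simulate, next_crossing, N, K, J, I_ext) are our own assumptions.

```python
import heapq
import math
import numpy as np

# Illustrative event-driven simulation of a sparse inhibitory LIF network.
# NOT the SparseProp algorithm itself; this only sketches the generic
# priority-queue idea behind event-based, numerically exact simulation.

def next_crossing(v, t, I, tau=1.0, v_th=1.0):
    """Closed-form next threshold-crossing time for dv/dt = (-v + I) / tau."""
    if I <= v_th or v >= v_th:
        return math.inf  # trajectory never reaches threshold from below
    return t + tau * math.log((I - v) / (I - v_th))

def simulate(N=1000, K=10, J=-0.1, I_ext=1.2, n_spikes=5000, seed=0):
    rng = np.random.default_rng(seed)
    targets = [rng.choice(N, size=K, replace=False) for _ in range(N)]
    v = rng.uniform(0.0, 0.9, N)       # membrane potentials
    last = np.zeros(N)                 # time of each neuron's last update
    stamp = np.zeros(N, dtype=int)     # version counters for lazy deletion
    heap = [(next_crossing(v[i], 0.0, I_ext), i, 0) for i in range(N)]
    heapq.heapify(heap)
    spikes, t = [], 0.0
    while len(spikes) < n_spikes:
        t_sp, j, ver = heapq.heappop(heap)          # O(log N)
        if ver != stamp[j] or t_sp == math.inf:
            continue                                # stale heap entry, skip
        t = t_sp
        spikes.append((t, j))
        v[j], last[j] = 0.0, t                      # reset the spiking neuron
        stamp[j] += 1
        heapq.heappush(heap, (next_crossing(0.0, t, I_ext), j, stamp[j]))
        for i in targets[j]:                        # only K postsynaptic updates
            if i == j:
                continue
            # advance i analytically from its last update to time t (tau = 1)
            v[i] = I_ext + (v[i] - I_ext) * math.exp(-(t - last[i]))
            v[i] += J                               # inhibitory synaptic jump
            last[i] = t
            stamp[i] += 1
            heapq.heappush(heap, (next_crossing(v[i], t, I_ext), i, stamp[i]))
    return spikes
```

Calling spikes = simulate() returns the first 5000 (time, neuron) events. In this toy version each network spike costs O(K log N) heap work rather than an O(N) sweep over all neurons, which is the scaling argument the abstract makes; the paper's change of variables sharpens this further so that no per-spike state updates of non-spiking neurons are needed at all.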
Related papers
- Sparse Spiking Neural Network: Exploiting Heterogeneity in Timescales
for Pruning Recurrent SNN [19.551319330414085]
Recurrent Spiking Neural Networks (RSNNs) have emerged as a computationally efficient and brain-inspired learning model.
Traditionally, sparse SNNs are obtained by first training a dense and complex SNN for a target task.
This paper presents a task-agnostic methodology for designing sparse RSNNs by pruning a large randomly initialized model.
arXiv Detail & Related papers (2024-03-06T02:36:15Z) - Accelerating SNN Training with Stochastic Parallelizable Spiking Neurons [1.7056768055368383]
Spiking neural networks (SNNs) are able to learn features while using less energy, especially on neuromorphic hardware.
The most widely used spiking neuron in deep learning is the Leaky Integrate-and-Fire (LIF) neuron.
arXiv Detail & Related papers (2023-06-22T04:25:27Z) - SA-CNN: Application to text categorization issues using simulated
annealing-based convolutional neural network optimization [0.0]
Convolutional neural networks (CNNs) are a representative class of deep learning algorithms.
We introduce SA-CNN neural networks for text classification tasks based on Text-CNN neural networks.
arXiv Detail & Related papers (2023-03-13T14:27:34Z) - SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural
Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE) that extends the recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Desire Backpropagation: A Lightweight Training Algorithm for Multi-Layer
Spiking Neural Networks based on Spike-Timing-Dependent Plasticity [13.384228628766236]
Spiking neural networks (SNNs) are a viable alternative to conventional artificial neural networks.
We present desire backpropagation, a method to derive the desired spike activity of all neurons, including the hidden ones.
We trained three-layer networks to classify MNIST and Fashion-MNIST images and reached accuracies of 98.41% and 87.56%, respectively.
arXiv Detail & Related papers (2022-11-10T08:32:13Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - A quantum algorithm for training wide and deep classical neural networks [72.2614468437919]
We show that conditions amenable to classical trainability via gradient descent coincide with those necessary for efficiently solving quantum linear systems.
We numerically demonstrate that the MNIST image dataset satisfies such conditions.
We provide empirical evidence for $O(\log n)$ training of a convolutional neural network with pooling.
arXiv Detail & Related papers (2021-07-19T23:41:03Z) - A Meta-Learning Approach to the Optimal Power Flow Problem Under
Topology Reconfigurations [69.73803123972297]
We propose a DNN-based OPF predictor that is trained using a meta-learning (MTL) approach.
The developed OPF-predictor is validated through simulations using benchmark IEEE bus systems.
arXiv Detail & Related papers (2020-12-21T17:39:51Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)