An Exact Mapping From ReLU Networks to Spiking Neural Networks
- URL: http://arxiv.org/abs/2212.12522v1
- Date: Fri, 23 Dec 2022 18:31:09 GMT
- Title: An Exact Mapping From ReLU Networks to Spiking Neural Networks
- Authors: Ana Stanojevic, Stanisław Woźniak, Guillaume Bellec, Giovanni Cherubini, Angeliki Pantazi, Wulfram Gerstner
- Abstract summary: We propose an exact mapping from a network with Rectified Linear Units (ReLUs) to an SNN that fires exactly one spike per neuron.
More generally our work shows that an arbitrary deep ReLU network can be replaced by an energy-efficient single-spike neural network without any loss of performance.
- Score: 3.1701886344065255
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep spiking neural networks (SNNs) offer the promise of low-power artificial
intelligence. However, training deep SNNs from scratch or converting deep
artificial neural networks to SNNs without loss of performance has been a
challenge. Here we propose an exact mapping from a network with Rectified
Linear Units (ReLUs) to an SNN that fires exactly one spike per neuron. For our
constructive proof, we assume that an arbitrary multi-layer ReLU network with
or without convolutional layers, batch normalization and max pooling layers was
trained to high performance on some training set. Furthermore, we assume that
we have access to a representative example of input data used during training
and to the exact parameters (weights and biases) of the trained ReLU network.
The mapping from deep ReLU networks to SNNs causes zero percent drop in
accuracy on CIFAR10, CIFAR100 and the ImageNet-like data sets Places365 and
PASS. More generally our work shows that an arbitrary deep ReLU network can be
replaced by an energy-efficient single-spike neural network without any loss of
performance.
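As a rough illustration of why such a mapping is possible, the following sketch encodes each ReLU activation as the time of a single spike within a per-layer window, with larger activations firing earlier, and checks that decoding the spike times recovers the activations exactly. This is a minimal sketch under assumed conventions (a linear time code, unit-length per-layer windows, and an activation bound taken from example data); it is not the authors' exact construction, which instead maps the trained weights and biases onto the parameters of a single-spike network.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu_layer(x, W, b):
    # Standard ReLU layer of the (assumed pre-trained) network.
    return np.maximum(0.0, W @ x + b)

def encode_ttfs(a, t_start, t_end, a_max):
    # Map an activation in [0, a_max] to one spike time in [t_start, t_end];
    # larger activations fire earlier, exactly one spike per neuron.
    return t_end - (t_end - t_start) * (a / a_max)

def decode_ttfs(t, t_start, t_end, a_max):
    # Invert the encoding: recover the activation from the spike time.
    return a_max * (t_end - t) / (t_end - t_start)

# Toy two-layer ReLU network standing in for a trained model (parameters assumed given).
W1, b1 = rng.normal(size=(5, 4)), rng.normal(size=5)
W2, b2 = rng.normal(size=(3, 5)), rng.normal(size=3)
x = rng.random(4)

a1 = relu_layer(x, W1, b1)
a2 = relu_layer(a1, W2, b2)

# The per-layer activation bound is estimated from example data,
# mirroring the paper's assumption of access to representative inputs.
for layer, a in enumerate([a1, a2], start=1):
    a_max = max(a.max(), 1e-9)
    t = encode_ttfs(a, t_start=float(layer - 1), t_end=float(layer), a_max=a_max)
    a_rec = decode_ttfs(t, t_start=float(layer - 1), t_end=float(layer), a_max=a_max)
    print(f"layer {layer}: exact recovery -> {np.allclose(a, a_rec)}")
```

The per-layer activation bound plays the same role as the representative example inputs the paper assumes access to: it fixes the time window so that every activation can be placed at a valid spike time, which is what allows the conversion to be lossless.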
Related papers
- Investigating Sparsity in Recurrent Neural Networks [0.0]
This thesis focuses on investigating the effects of pruning and Sparse Recurrent Neural Networks on the performance of RNNs.
We first describe the pruning of RNNs, its impact on the performance of RNNs, and the number of training epochs required to regain accuracy after the pruning is performed.
Next, we continue with the creation and training of Sparse Recurrent Neural Networks and identify the relation between the performance and the graph properties of its underlying arbitrary structure.
arXiv Detail & Related papers (2024-07-30T07:24:58Z)
- Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on tasks where the agent needs to learn multi-dimensional deterministic policies for continuous control.
Most existing spike-based RL methods take the firing rate as the output of SNNs and convert it into a continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z)
- LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
arXiv Detail & Related papers (2023-10-23T14:26:16Z)
- You Can Have Better Graph Neural Networks by Not Training Weights at All: Finding Untrained GNNs Tickets [105.24703398193843]
Untrained subnetworks in graph neural networks (GNNs) still remain mysterious.
We show that the found untrained subnetworks can substantially mitigate the GNN over-smoothing problem.
We also observe that such sparse untrained subnetworks have appealing performance in out-of-distribution detection and robustness to input perturbations.
arXiv Detail & Related papers (2022-11-28T14:17:36Z)
- Desire Backpropagation: A Lightweight Training Algorithm for Multi-Layer Spiking Neural Networks based on Spike-Timing-Dependent Plasticity [13.384228628766236]
Spiking neural networks (SNNs) are a viable alternative to conventional artificial neural networks.
We present desire backpropagation, a method to derive the desired spike activity of all neurons, including the hidden ones.
We trained three-layer networks to classify MNIST and Fashion-MNIST images and reached an accuracy of 98.41% and 87.56%, respectively.
arXiv Detail & Related papers (2022-11-10T08:32:13Z)
- Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z)
- Ultra-low Latency Adaptive Local Binary Spiking Neural Network with Accuracy Loss Estimator [4.554628904670269]
We propose an ultra-low latency adaptive local binary spiking neural network (ALBSNN) with accuracy loss estimators.
Experimental results show that this method can reduce storage space by more than 20 % without losing network accuracy.
arXiv Detail & Related papers (2022-07-31T09:03:57Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Pruning of Deep Spiking Neural Networks through Gradient Rewiring [41.64961999525415]
Spiking Neural Networks (SNNs) have attracted great attention due to their biological plausibility and high energy efficiency on neuromorphic chips.
Most existing methods directly apply pruning approaches designed for artificial neural networks (ANNs) to SNNs, ignoring the differences between ANNs and SNNs.
We propose gradient rewiring (Grad R), a joint learning algorithm of connectivity and weight for SNNs, that enables us to seamlessly optimize the network structure without retraining.
arXiv Detail & Related papers (2021-05-11T10:05:53Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Enabling Deep Spiking Neural Networks with Hybrid Conversion and Spike Timing Dependent Backpropagation [10.972663738092063]
Spiking Neural Networks (SNNs) operate with asynchronous discrete events (or spikes).
We present a computationally-efficient training technique for deep SNNs.
We achieve top-1 accuracy of 65.19% for ImageNet dataset on SNN with 250 time steps, which is 10X faster compared to converted SNNs with similar accuracy.
arXiv Detail & Related papers (2020-05-04T19:30:43Z)