Spiking Approximations of the MaxPooling Operation in Deep SNNs
- URL: http://arxiv.org/abs/2205.07076v1
- Date: Sat, 14 May 2022 14:47:10 GMT
- Title: Spiking Approximations of the MaxPooling Operation in Deep SNNs
- Authors: Ramashish Gaurav, Bryan Tripp, Apurva Narayan
- Abstract summary: Spiking Neural Networks (SNNs) are an emerging domain of biologically inspired neural networks.
We present two hardware-friendly methods to implement Max-Pooling in deep SNNs.
For the first time, we also execute SNNs with spiking-MaxPooling layers on Intel's Loihi neuromorphic hardware.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) are an emerging domain of biologically
inspired neural networks that have shown promise for low-power AI. A number of
methods exist for building deep SNNs, with Artificial Neural Network
(ANN)-to-SNN conversion being highly successful. MaxPooling layers in
Convolutional Neural Networks (CNNs) are an integral component for
downsampling intermediate feature maps and introducing translational
invariance, but the absence of hardware-friendly spiking equivalents limits
the conversion of such CNNs to deep SNNs. In this paper, we present two
hardware-friendly methods to implement MaxPooling in deep SNNs, thus
facilitating easy conversion of CNNs with MaxPooling layers to SNNs. For the
first time, we also execute SNNs with spiking-MaxPooling layers on Intel's
Loihi neuromorphic hardware (on the MNIST, FMNIST, and CIFAR10 datasets),
demonstrating the feasibility of our approach.
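The abstract does not spell out the two proposed methods, so the following is only a generic illustration of the problem being solved: a minimal rate-based sketch (my construction, not the paper's technique) that approximates 2x2 max-pooling over binary spike trains by forwarding the spikes of the most active neuron in each pooling window, with activity estimated by a low-pass trace.

```python
import numpy as np

def spiking_maxpool_2x2(spikes, trace, decay=0.9):
    """One timestep of an approximate spiking 2x2 max-pool.

    spikes: (H, W) binary array of spikes at this timestep.
    trace:  (H, W) running estimate of each neuron's firing rate.
    Returns (pooled_spikes, updated_trace).
    """
    trace = decay * trace + spikes            # low-pass estimate of firing rate
    H, W = spikes.shape
    # Group pixels into non-overlapping 2x2 windows: shape (H//2, W//2, 4).
    s = spikes.reshape(H // 2, 2, W // 2, 2).transpose(0, 2, 1, 3).reshape(H // 2, W // 2, 4)
    t = trace.reshape(H // 2, 2, W // 2, 2).transpose(0, 2, 1, 3).reshape(H // 2, W // 2, 4)
    winner = t.argmax(axis=-1)                # most active neuron per window
    pooled = np.take_along_axis(s, winner[..., None], axis=-1)[..., 0]
    return pooled, trace
```

Run over many timesteps, the pooled spike train approaches the spike train of the most active neuron in each window, which is the rate-coded analogue of taking the maximum activation.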
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks (arXiv 2024-08-28)
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of the searched BNNs on the object detection task: our binary detectors achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS dataset.
- A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks (arXiv 2023-05-26)
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are built on homogeneous neurons that use a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
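To make "heterogeneous coding schemes" concrete, here are two standard spike codes that such a hybrid design could mix; this is a generic illustration, not the paper's specific scheme: rate coding, where a value in [0, 1] sets the firing probability per timestep, and time-to-first-spike (latency) coding, where larger values fire earlier.

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(x, T=100):
    # Rate code: value x in [0, 1] becomes ~x*T spikes on average over T steps.
    return (rng.random(T) < x).astype(np.int8)

def latency_encode(x, T=100):
    # Latency code: larger x means an earlier first (and only) spike.
    spikes = np.zeros(T, dtype=np.int8)
    t = int(round((1.0 - x) * (T - 1)))
    spikes[t] = 1
    return spikes
```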
- SNN2ANN: A Fast and Memory-Efficient Training Framework for Spiking Neural Networks (arXiv 2022-06-19)
Spiking neural networks are efficient computation models for low-power environments.
We propose an SNN-to-ANN (SNN2ANN) framework to train SNNs in a fast and memory-efficient way.
Experimental results show that our SNN2ANN-based models perform well on the benchmark datasets.
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation (arXiv 2022-05-01)
The Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
Training SNNs efficiently is challenging due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
- Deep Learning in Spiking Phasor Neural Networks (arXiv 2022-04-01)
Spiking Neural Networks (SNNs) have attracted the attention of the deep learning community for use in low-latency, low-power neuromorphic hardware.
In this paper, we introduce Spiking Phasor Neural Networks (SPNNs).
SPNNs are based on complex-valued Deep Neural Networks (DNNs), representing phases by spike times.
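A minimal sketch of the phase-to-spike-time mapping the summary describes (my construction with an assumed cycle period T, not the authors' definition): a phase phi in [0, 2*pi) maps to a spike time within a cycle, and a spike time maps back to a unit-magnitude complex activation exp(i * phi).

```python
import numpy as np

def phase_to_spike_time(phi, T=1.0):
    # Later phase within the cycle -> later spike time in [0, T).
    return (phi % (2 * np.pi)) / (2 * np.pi) * T

def spike_time_to_activation(t, T=1.0):
    # Decode a spike time back into a unit complex value for the next layer.
    phi = 2 * np.pi * t / T
    return np.exp(1j * phi)
```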
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks (arXiv 2021-10-18)
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs.
- Beyond Classification: Directly Training Spiking Neural Networks for Semantic Segmentation (arXiv 2021-10-14)
Spiking Neural Networks (SNNs) have emerged as a low-power alternative to Artificial Neural Networks (ANNs).
In this paper, we explore SNN applications beyond classification and present semantic segmentation networks configured with spiking neurons.
- Spiking neural networks trained via proxy (arXiv 2021-09-27)
We propose a new learning algorithm to train spiking neural networks (SNNs) using conventional artificial neural networks (ANNs) as a proxy.
We couple an SNN and an ANN, built of integrate-and-fire (IF) and ReLU neurons respectively, with the same network architecture and shared synaptic weights.
By treating the rate-coded IF neuron as an approximation of ReLU, we backpropagate the SNN's error through the proxy ANN to update the shared weights, simply by replacing the ANN's final output with that of the SNN.
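A minimal single-layer sketch of this coupling, assuming PyTorch-style autograd and a soft-reset IF neuron (my illustration, not the authors' code): the SNN runs outside the computation graph, the ANN runs inside it on the same shared weights, and the SNN's output is spliced into the ANN's graph before the loss, so the forward value comes from the SNN while gradients follow the ANN.

```python
import torch
import torch.nn as nn

W = nn.Parameter(torch.randn(10, 784) * 0.01)   # weights shared by ANN and SNN

def ann_forward(x):
    return torch.relu(x @ W.T)                   # ReLU proxy of the rate code

def snn_forward(x, T=50, v_th=1.0):
    with torch.no_grad():                        # SNN runs outside autograd
        v = torch.zeros(x.shape[0], 10)
        rate = torch.zeros_like(v)
        for _ in range(T):
            v += x @ W.T                         # integrate input current
            spikes = (v >= v_th).float()         # fire at threshold
            v -= spikes * v_th                   # soft reset
            rate += spikes / T                   # average firing rate
    return rate

def proxy_loss(x, target):
    ann_out = ann_forward(x)
    snn_out = snn_forward(x)
    # Forward value equals snn_out; gradient flows through ann_out only.
    out = ann_out + (snn_out - ann_out).detach()
    return nn.functional.cross_entropy(out, target)
```

The straight-through expression `ann_out + (snn_out - ann_out).detach()` is one common way to realize "replace the ANN output with the SNN output" without breaking the gradient path.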
- Pruning of Deep Spiking Neural Networks through Gradient Rewiring (arXiv 2021-05-11)
Spiking Neural Networks (SNNs) have attracted great attention due to their biological plausibility and high energy efficiency on neuromorphic chips.
Most existing methods directly apply pruning approaches designed for artificial neural networks (ANNs) to SNNs, ignoring the differences between the two.
We propose gradient rewiring (Grad R), a joint learning algorithm of connectivity and weight for SNNs, which enables us to seamlessly optimize network structure without retraining.
- Optimal Conversion of Conventional Artificial Neural Networks to Spiking Neural Networks (arXiv 2021-02-28)
Spiking neural networks (SNNs) are biologically inspired artificial neural networks (ANNs).
We propose a novel strategic pipeline that transfers the weights to the target SNN by combining threshold balancing and soft-reset mechanisms.
Our method is promising for deployment on embedded platforms with limited energy and memory, offering better support for SNNs.
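The abstract names two standard conversion ingredients; as a hedged sketch of what they typically mean (not the paper's exact pipeline): threshold balancing scales each layer's firing threshold to the maximum ANN activation observed on calibration data, and soft reset subtracts the threshold on a spike rather than zeroing the membrane potential, preserving residual charge and reducing conversion error.

```python
import numpy as np

def balanced_threshold(ann_activations):
    """Threshold balancing (sketch): set the layer threshold to the maximum
    ANN activation seen on an assumed calibration set."""
    return float(np.max(ann_activations))

def if_layer_soft_reset(currents, v_th, T):
    """Integrate-and-fire layer over T steps with soft reset.

    currents: (N,) constant input current per neuron (ANN pre-activation).
    Returns average firing rates, approximating ReLU(currents) / v_th.
    """
    v = np.zeros_like(currents)
    spike_count = np.zeros_like(currents)
    for _ in range(T):
        v += currents                          # integrate input current
        spikes = (v >= v_th).astype(currents.dtype)
        v -= spikes * v_th                     # soft reset keeps the residual
        spike_count += spikes
    return spike_count / T
```

With a constant input current c in (0, v_th], the returned rate is approximately c / v_th, so rates scaled by v_th recover the ReLU activations layer by layer.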
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks (arXiv 2020-07-02)
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in terms of low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.