SpikeSim: An end-to-end Compute-in-Memory Hardware Evaluation Tool for
Benchmarking Spiking Neural Networks
- URL: http://arxiv.org/abs/2210.12899v1
- Date: Mon, 24 Oct 2022 01:07:17 GMT
- Authors: Abhishek Moitra, Abhiroop Bhattacharjee, Runcong Kuang, Gokul
Krishnan, Yu Cao, and Priyadarshini Panda
- Abstract summary: SpikeSim is a tool that can perform realistic performance, energy, latency and area evaluation of IMC-mapped SNNs.
We propose SNN topological modifications leading to 1.24x and 10x reduction in the neuronal module's area and the overall energy-delay-product value.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: SNNs are an active research domain for energy-efficient machine
intelligence. Compared to conventional ANNs, SNNs process temporal spike data
using bio-plausible neuronal activation functions such as
Leaky-Integrate-and-Fire/Integrate-and-Fire (LIF/IF). However, SNNs incur a
large number of dot-product operations, causing high memory and computation
overhead on standard von Neumann computing platforms. In-Memory Computing (IMC)
architectures have been proposed to alleviate the "memory-wall bottleneck"
prevalent in von Neumann architectures. Although recent works have proposed
IMC-based SNN hardware accelerators, two factors have been overlooked: 1) the
adverse effects of crossbar non-idealities on SNN performance due to repeated
analog dot-product operations over multiple time-steps, and 2) the hardware
overhead of essential SNN-specific components such as the LIF/IF neuronal and
data-communication modules. To this end, we propose SpikeSim, a tool that
performs realistic performance, energy, latency, and area evaluation of
IMC-mapped SNNs. SpikeSim consists of a practical monolithic IMC architecture,
called SpikeFlow, for mapping SNNs. Additionally, a Non-Ideality Computation
Engine (NICE) and an Energy-Latency-Area (ELA) engine perform
hardware-realistic evaluation of SpikeFlow-mapped SNNs. Based on a 65nm CMOS
implementation and experiments on the CIFAR10, CIFAR100, and TinyImagenet
datasets, we find that the LIF/IF neuronal module contributes significantly to
the total hardware area (>11%). We propose SNN topological modifications that
yield a 1.24x reduction in the neuronal module's area and a 10x reduction in
the overall energy-delay product. Furthermore, we perform a holistic comparison
between IMC-implemented ANNs and SNNs and conclude that a low number of
time-steps is the key for SNNs to achieve higher throughput and energy
efficiency than 4-bit ANNs.
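To make the LIF/IF dynamics and the per-time-step dot-products discussed above concrete, the following minimal sketch simulates one LIF layer over T time-steps. It is an illustrative example only, not SpikeSim's actual interface: the function name, parameter values, and the hard-reset scheme are assumptions for exposition.

```python
import numpy as np

# Minimal sketch of Leaky-Integrate-and-Fire (LIF) dynamics, illustrating
# why an IMC-mapped SNN repeats the same crossbar dot-product every time-step.
# All names and constants here are illustrative, not SpikeSim's API.

def lif_layer(spike_inputs, weights, v_thresh=1.0, leak=0.9):
    """Run one LIF layer over T time-steps.

    spike_inputs: (T, n_in) binary spike trains
    weights:      (n_in, n_out) synaptic weights (the crossbar contents)
    Returns (T, n_out) binary output spikes.
    """
    T = spike_inputs.shape[0]
    n_out = weights.shape[1]
    v_mem = np.zeros(n_out)               # membrane potential state
    out_spikes = np.zeros((T, n_out))
    for t in range(T):
        # One dot-product per time-step: on IMC hardware this is an analog
        # crossbar operation, so non-idealities recur across all T steps.
        v_mem = leak * v_mem + spike_inputs[t] @ weights
        fired = v_mem >= v_thresh         # spike where threshold is crossed
        out_spikes[t] = fired
        v_mem[fired] = 0.0                # hard reset after a spike (assumed)
    return out_spikes

# Example: 8 time-steps, 4 inputs, 3 output neurons
rng = np.random.default_rng(0)
spikes = (rng.random((8, 4)) < 0.5).astype(float)
w = rng.normal(0.0, 0.5, (4, 3))
print(lif_layer(spikes, w))
```

Setting leak to 1.0 turns the LIF neuron into an IF neuron. Because the matrix-vector product inside the loop maps to an analog crossbar operation, any crossbar non-ideality is re-incurred at every one of the T time-steps; this repetition is the effect SpikeSim's NICE engine is built to quantify.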
Related papers
- Scalable Mechanistic Neural Networks [52.28945097811129]
We propose an enhanced neural network framework designed for scientific machine learning applications involving long temporal sequences.
By reformulating the original Mechanistic Neural Network (MNN), we reduce the computational time and space complexities from cubic and quadratic in the sequence length, respectively, to linear.
Extensive experiments demonstrate that S-MNN matches the original MNN in precision while substantially reducing computational resources.
arXiv Detail & Related papers (2024-10-08T14:27:28Z)
- Accurate Mapping of RNNs on Neuromorphic Hardware with Adaptive Spiking Neurons [2.9410174624086025]
We present a $\Sigma\Delta$-low-pass RNN (lpRNN) for mapping rate-based RNNs to spiking neural networks (SNNs).
An adaptive spiking neuron model encodes signals using $\Sigma\Delta$-modulation and enables precise mapping.
We demonstrate the implementation of the lpRNN on Intel's neuromorphic research chip Loihi.
arXiv Detail & Related papers (2024-07-18T14:06:07Z)
- HASNAS: A Hardware-Aware Spiking Neural Architecture Search Framework for Neuromorphic Compute-in-Memory Systems [6.006032394972252]
Spiking Neural Networks (SNNs) have shown capabilities for solving diverse machine learning tasks with ultra-low-power/energy computation.
We propose HASNAS, a novel hardware-aware spiking neural architecture search framework for neuromorphic CIM systems.
arXiv Detail & Related papers (2024-06-30T09:51:58Z)
- LitE-SNN: Designing Lightweight and Efficient Spiking Neural Network through Spatial-Temporal Compressive Network Search and Joint Optimization [48.41286573672824]
Spiking Neural Networks (SNNs) mimic the information-processing mechanisms of the human brain and are highly energy-efficient.
We propose a new approach named LitE-SNN that incorporates both spatial and temporal compression into the automated network design process.
arXiv Detail & Related papers (2024-01-26T05:23:11Z)
- Highly Efficient SNNs for High-speed Object Detection [7.3074002563489024]
Experimental results show that our efficient SNN achieves a 118X speedup on GPU with only 1.5 MB of parameters for object detection tasks.
We further verify our SNN on an FPGA platform, where the proposed model achieves 800+ FPS object detection with extremely low latency.
arXiv Detail & Related papers (2023-09-27T10:31:12Z)
- Are SNNs Truly Energy-efficient? - A Hardware Perspective [7.539212567508529]
Spiking Neural Networks (SNNs) have gained attention for their energy-efficient machine learning capabilities.
This work studies two hardware benchmarking platforms for large-scale SNN inference, namely SATA and SpikeSim.
arXiv Detail & Related papers (2023-05-27T03:01:27Z)
- Input-Aware Dynamic Timestep Spiking Neural Networks for Efficient In-Memory Computing [7.738130109655604]
Spiking Neural Networks (SNNs) have attracted widespread research interest because of their capability to process sparse and binary spike information.
We show that the energy cost and latency of SNNs scale linearly with the number of timesteps used on IMC hardware.
We propose input-aware Dynamic Timestep SNN (DT-SNN) to maximize the efficiency of SNNs.
arXiv Detail & Related papers (2023-05-27T03:01:27Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)