Sparse Compressed Spiking Neural Network Accelerator for Object
Detection
- URL: http://arxiv.org/abs/2205.00778v1
- Date: Mon, 2 May 2022 09:56:55 GMT
- Title: Sparse Compressed Spiking Neural Network Accelerator for Object
Detection
- Authors: Hong-Han Lien and Tian-Sheuan Chang
- Abstract summary: Spiking neural networks (SNNs) are inspired by the human brain; they transmit binary spikes and produce highly sparse activation maps.
This paper proposes a sparse compressed spiking neural network accelerator that takes advantage of the high sparsity of activation maps and weights.
The network achieves 71.5% mAP with mixed (1,3) time steps on the IVS 3cls dataset.
- Score: 0.1246030133914898
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Spiking neural networks (SNNs), which are inspired by the human brain, have
recently gained popularity due to their relatively simple and low-power
hardware for transmitting binary spikes and highly sparse activation maps.
However, because SNNs carry information across an extra time dimension, an SNN
accelerator requires more buffers and longer inference time, especially for the
demanding high-resolution object detection task. This paper therefore proposes
a sparse compressed spiking neural network accelerator that exploits the high
sparsity of both activation maps and weights through the proposed gated
one-to-all product, enabling low-power and highly parallel model execution.
The network achieves 71.5% mAP with mixed (1,3) time steps on the IVS 3cls
dataset. Implemented in a TSMC 28nm CMOS process, the accelerator sustains
1024×576 processing at 29 frames per second when running at 500 MHz, with
35.88 TOPS/W energy efficiency and 1.05 mJ energy consumption per frame.
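Because spike activations are binary, each multiply-accumulate in an SNN layer
collapses to a gated add, and inputs that do not spike can be skipped entirely.
The following NumPy sketch illustrates that general idea of sparsity-aware
accumulation; it is a simplified illustration, not the paper's actual gated
one-to-all product datapath, and all names in it are hypothetical.

```python
import numpy as np

def gated_accumulate(spikes, weights):
    # Indices where the input spiked; with highly sparse activation
    # maps this set is small, so most weight columns are never read.
    active = np.flatnonzero(spikes)
    # Binary inputs turn multiply-accumulate into plain accumulation:
    # sum only the weight columns selected by the spikes.
    return weights[:, active].sum(axis=1)

rng = np.random.default_rng(0)
spikes = (rng.random(16) < 0.1).astype(np.uint8)        # ~10% firing rate
weights = rng.standard_normal((8, 16)).astype(np.float32)
v_update = gated_accumulate(spikes, weights)            # shape (8,)
```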
Related papers
- Spiker+: a framework for the generation of efficient Spiking Neural
Networks FPGA accelerators for inference at the edge [49.42371633618761]
Spiker+ is a framework for generating efficient, low-power, and low-area customized Spiking Neural Network (SNN) accelerators on FPGA for inference at the edge.
Spiker+ is tested on two benchmark datasets, MNIST and the Spiking Heidelberg Digits (SHD).
arXiv Detail & Related papers (2024-01-02T10:42:42Z)
- EPIM: Efficient Processing-In-Memory Accelerators based on Epitome [78.79382890789607]
We introduce the Epitome, a lightweight neural operator offering convolution-like functionality.
On the software side, we evaluate epitomes' latency and energy on PIM accelerators.
We introduce a PIM-aware layer-wise design method to enhance their hardware efficiency.
arXiv Detail & Related papers (2023-11-12T17:56:39Z)
- Highly Efficient SNNs for High-speed Object Detection [7.3074002563489024]
Experimental results show that our efficient SNN can achieve 118X speedup on GPU with only 1.5MB parameters for object detection tasks.
We further verify our SNN on an FPGA platform, where the proposed model achieves 800+ FPS object detection with extremely low latency.
arXiv Detail & Related papers (2023-09-27T10:31:12Z)
- A Resource-efficient Spiking Neural Network Accelerator Supporting Emerging Neural Encoding [6.047137174639418]
Spiking neural networks (SNNs) recently gained momentum due to their low-power multiplication-free computing.
However, for large models, SNNs require very long spike trains (up to 1,000 time steps) to reach accuracy comparable to that of their artificial neural network (ANN) counterparts.
We present a novel hardware architecture that can efficiently support SNNs with emerging neural encoding.
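The cost of long spike trains is easy to see in a rate-coded leaky
integrate-and-fire (LIF) layer: T time steps can represent only T+1 distinct
output rates, so short trains cap accuracy. Below is a minimal sketch assuming
plain rate coding and reset-to-zero neurons; the emerging encoding supported
by this paper's architecture differs.

```python
import numpy as np

def lif_layer(spike_train, weights, threshold=1.0, leak=0.9):
    # spike_train: (T, n_in) binary array; weights: (n_out, n_in).
    # Inputs are 0/1, so weights @ spike_train[t] is a sum of selected
    # weight columns rather than a true multiply-accumulate.
    T = spike_train.shape[0]
    v = np.zeros(weights.shape[0])        # membrane potentials
    counts = np.zeros(weights.shape[0])   # output spike counts
    for t in range(T):
        v = leak * v + weights @ spike_train[t]
        fired = v >= threshold
        counts += fired
        v[fired] = 0.0                    # reset neurons that spiked
    return counts / T                     # only T+1 representable rates
```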
arXiv Detail & Related papers (2022-06-06T10:56:25Z)
- Efficient Hardware Acceleration of Sparsely Active Convolutional Spiking Neural Networks [0.0]
Spiking Neural Networks (SNNs) compute in an event-based manner to achieve more efficient computation than standard neural networks.
We propose a novel architecture that is optimized for the processing of Convolutional SNNs that feature a high degree of activation sparsity.
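A common way to exploit this activation sparsity is event-driven
(scatter-style) convolution: rather than sliding a kernel over a mostly-zero
map, each input spike adds its weight patch to the output, so work scales with
the number of spikes. A minimal NumPy sketch of the general technique follows;
it is not the specific architecture proposed in this paper.

```python
import numpy as np

def event_driven_conv(events, kernel, out_shape):
    # events: iterable of (y, x, c_in) input spike coordinates.
    # kernel: (C_out, C_in, K, K) weights; out_shape: (H, W).
    C_out, C_in, K, _ = kernel.shape
    H, W = out_shape
    out = np.zeros((C_out, H, W), dtype=np.float32)
    for y, x, c in events:               # touch only spiking positions
        for dy in range(K):
            for dx in range(K):
                oy, ox = y + dy - K // 2, x + dx - K // 2
                if 0 <= oy < H and 0 <= ox < W:
                    out[:, oy, ox] += kernel[:, c, dy, dx]
    return out
```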
arXiv Detail & Related papers (2022-03-23T14:18:58Z)
- BiFSMN: Binary Neural Network for Keyword Spotting [47.46397208920726]
BiFSMN is an accurate and extremely efficient binary neural network for KWS.
We show that BiFSMN can achieve an impressive 22.3x speedup and 15.5x storage-saving on real-world edge hardware.
arXiv Detail & Related papers (2022-02-14T05:16:53Z)
- Event-based Video Reconstruction via Potential-assisted Spiking Neural Network [48.88510552931186]
Bio-inspired neural networks can potentially lead to greater computational efficiency on event-driven hardware.
We propose a novel Event-based Video reconstruction framework based on a fully Spiking Neural Network (EVSNN).
We find that the spiking neurons have the potential to store useful temporal information (memory) to complete such time-dependent tasks.
arXiv Detail & Related papers (2022-01-25T02:05:20Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
Compared to its full-precision software counterpart, it reduces classification time by three orders of magnitude with only a 4.5% drop in accuracy.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs.
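For context, the one-bit baseline that sub-bit methods improve on approximates
each kernel as a scale factor times a sign pattern; sub-bit schemes then share
kernels through a small codebook so the average storage drops below one bit
per weight. The sketch below shows only the XNOR-Net-style one-bit baseline,
not the paper's kernel-aware optimization framework.

```python
import numpy as np

def binarize_kernel(w):
    # 1-bit approximation: w ~ alpha * sign(w), with alpha chosen as
    # the mean absolute value to minimize the reconstruction error.
    alpha = np.abs(w).mean()
    return alpha, np.sign(w).astype(np.int8)

rng = np.random.default_rng(1)
w = rng.standard_normal((3, 3)).astype(np.float32)
alpha, b = binarize_kernel(w)
w_hat = alpha * b   # reconstructed 1-bit kernel
```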
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
- Sound Event Detection with Binary Neural Networks on Tightly Power-Constrained IoT Devices [20.349809458335532]
Sound event detection (SED) is a hot topic in consumer and smart city applications.
Existing approaches based on Deep Neural Networks are very effective, but highly demanding in terms of memory, power, and throughput.
In this paper, we explore the combination of extreme quantization to a small-footprint binary neural network (BNN) with the highly energy-efficient, RISC-V-based (8+1)-core GAP8 microcontroller.
arXiv Detail & Related papers (2021-01-12T12:38:23Z)
- A Spike in Performance: Training Hybrid-Spiking Neural Networks with Quantized Activation Functions [6.574517227976925]
Spiking Neural Networks (SNNs) are a promising approach to energy-efficient computing.
We show how to maintain state-of-the-art accuracy when converting a non-spiking network into an SNN.
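The usual bridge in such conversions is an activation whose quantization
levels coincide with achievable spike rates: if every activation equals
k/n_steps for some integer k, a rate-coded SNN run for n_steps time steps can
reproduce the ANN exactly. A minimal sketch of that idea, with hypothetical
names; the paper's hybrid training scheme may differ.

```python
import numpy as np

def quantized_relu(x, n_steps=8, v_max=1.0):
    # Clip to [0, v_max], then round to n_steps levels so each output
    # equals (spike count) / n_steps for some integer spike count.
    x = np.clip(x, 0.0, v_max)
    return np.round(x * n_steps / v_max) * (v_max / n_steps)

x = np.linspace(-0.5, 1.5, 9)
print(quantized_relu(x, n_steps=4))   # multiples of 0.25 in [0, 1]
```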
arXiv Detail & Related papers (2020-02-10T05:24:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.