Highly Efficient SNNs for High-speed Object Detection
- URL: http://arxiv.org/abs/2309.15883v1
- Date: Wed, 27 Sep 2023 10:31:12 GMT
- Title: Highly Efficient SNNs for High-speed Object Detection
- Authors: Nemin Qiu and Zhiguo Li and Yuan Li and Chuang Zhu
- Abstract summary: Experimental results show that our efficient SNN achieves a 118X speedup on GPU with only 1.5MB of parameters for object detection tasks.
We further verify our SNN on an FPGA platform, where the proposed model achieves 800+ FPS object detection with extremely low latency.
- Score: 7.3074002563489024
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The high biological plausibility and low energy consumption of Spiking Neural
Networks (SNNs) have attracted much attention in recent years. However, converted
SNNs generally need large time steps to achieve satisfactory performance, which
results in high inference latency and increased computational resources. In this
work, we propose a highly efficient and fast SNN for object detection. First, we
build an initial compact ANN with a quantization-aware training method that folds
batch normalization layers into the preceding convolution layers, together with
network modifications. Second, we theoretically analyze how to correctly obtain a
low-complexity SNN, and propose a scale-aware pseudo-quantization scheme to
guarantee the correctness of the compact ANN-to-SNN conversion. Third, we propose a
continuous inference scheme that uses a Feed-Forward Integrate-and-Fire (FewdIF)
neuron to realize high-speed object detection. Experimental results show that our
efficient SNN achieves a 118X speedup on GPU with only 1.5MB of parameters for
object detection tasks. We further verify our SNN on an FPGA platform, where the
proposed model achieves 800+ FPS object detection with extremely low latency.
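As a concrete reference for the first step above, conv-BN folding rewrites a convolution followed by batch normalization as a single convolution. The PyTorch sketch below is a minimal, generic version of that standard identity, not the authors' implementation; the function name and fused-module construction are illustrative.

```python
import torch
import torch.nn as nn

def fold_bn_into_conv(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold BatchNorm statistics into the preceding convolution.

    BN(conv(x)) = gamma * (W*x + b - mu) / sqrt(var + eps) + beta
    is itself a single convolution with
        W' = W * gamma / sqrt(var + eps)
        b' = (b - mu) * gamma / sqrt(var + eps) + beta
    """
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride, conv.padding,
                      conv.dilation, conv.groups, bias=True)
    scale = bn.weight / torch.sqrt(bn.running_var + bn.eps)
    fused.weight.data = conv.weight.data * scale.reshape(-1, 1, 1, 1)
    bias = conv.bias.data if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.data = (bias - bn.running_mean) * scale + bn.bias.data
    return fused
```

The FewdIF neuron that enables continuous inference is the paper's own design and is not reproduced here; for orientation only, a plain integrate-and-fire neuron with reset-by-subtraction, the usual building block in ANN-to-SNN conversion, looks like this:

```python
class IFNeuron(nn.Module):
    """Generic integrate-and-fire neuron (NOT the paper's FewdIF).

    Accumulates input current into a membrane potential, spikes when
    the potential crosses the threshold, and subtracts the threshold
    on each spike ("soft reset"), so firing rates track activations.
    """
    def __init__(self, v_threshold: float = 1.0):
        super().__init__()
        self.v_threshold = v_threshold
        self.v = None  # membrane potential, lazily shaped to the input

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        if self.v is None:
            self.v = torch.zeros_like(x)
        self.v = self.v + x                           # integrate
        spike = (self.v >= self.v_threshold).float()  # fire
        self.v = self.v - spike * self.v_threshold    # soft reset
        return spike
```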
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs for a wide range of operations (OPs) from 20M to 200M.
In addition, we validate the transferability of the searched BNNs to the object detection task: our binary detectors achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset (a weight-binarization sketch follows this entry).
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
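To unpack "binary" in the NAS-BNN entry above: BNNs constrain weights (and often activations) to ±1 and train through the non-differentiable sign function via a straight-through estimator. A minimal sketch of that standard trick follows; it illustrates the binarization only, not NAS-BNN's architecture search, and the class names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BinarizeSTE(torch.autograd.Function):
    """sign() in the forward pass, clipped-identity gradient in the backward."""
    @staticmethod
    def forward(ctx, w):
        ctx.save_for_backward(w)
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_out):
        (w,) = ctx.saved_tensors
        # Straight-through estimator: pass gradients only where |w| <= 1.
        return grad_out * (w.abs() <= 1).float()

class BinaryConv2d(nn.Conv2d):
    """Conv2d whose weights are binarized to +/-1 on every forward pass."""
    def forward(self, x):
        w_bin = BinarizeSTE.apply(self.weight)
        return F.conv2d(x, w_bin, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)
```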
- A Hybrid SNN-ANN Network for Event-based Object Detection with Spatial and Temporal Attention [2.5075774828443467]
Event cameras offer high temporal resolution and dynamic range with minimal motion blur, making them promising for object detection tasks.
While Spiking Neural Networks (SNNs) are a natural match for event-based sensory data, Artificial Neural Networks (ANNs) tend to display more stable training dynamics.
We introduce the first Hybrid Attention-based SNN-ANN backbone for object detection using event cameras.
arXiv Detail & Related papers (2024-03-15T10:28:31Z)
- Low Latency of object detection for spiking neural network [3.404826786562694]
Spiking Neural Networks are well-suited for edge AI applications due to their binary spike nature.
In this paper, we focus on generating highly accurate and low-latency SNNs specifically for object detection.
arXiv Detail & Related papers (2023-09-27T10:26:19Z)
- High-performance deep spiking neural networks with 0.3 spikes per neuron [9.01407445068455]
It is harder to train biologically inspired spiking neural networks (SNNs) than artificial neural networks (ANNs).
We show that deep SNN models can be trained to reach the same performance as ANNs.
Our SNN accomplishes high-performance classification with less than 0.3 spikes per neuron, lending itself to an energy-efficient implementation.
arXiv Detail & Related papers (2023-06-14T21:01:35Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Networks (SNNs) are a promising energy-efficient AI model when implemented on neuromorphic hardware.
Efficiently training SNNs is challenging due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance by differentiating through an aggregate spike representation rather than through individual spikes (a related surrogate-gradient sketch follows this entry).
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
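The non-differentiability mentioned in the DSR entry above is usually worked around in one of two ways: surrogate gradients on individual spikes, or, as DSR does, differentiation through an aggregate spike representation. The sketch below shows the first, more common trick purely as a reference point; it is not the DSR method, and the surrogate shape and scale are illustrative choices.

```python
import torch

class SpikeSurrogate(torch.autograd.Function):
    """Heaviside spike forward, smooth surrogate derivative backward."""
    @staticmethod
    def forward(ctx, v, threshold=1.0):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Derivative of a fast sigmoid centered at the threshold stands
        # in for the true (zero-almost-everywhere) spike derivative.
        alpha = 2.0
        surrogate = 1.0 / (1.0 + alpha * (v - ctx.threshold).abs()) ** 2
        return grad_out * surrogate, None
```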
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs (the baseline propagation idea is sketched after this entry).
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
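For context on the baseline named in the entry above: interval bound propagation pushes an elementwise input interval through each layer, splitting the weight matrix by sign for affine maps and exploiting monotonicity for ReLU. A minimal sketch for one feedforward layer follows; reachability for implicit layers additionally requires a fixed-point argument that is not shown here.

```python
import torch

def interval_affine(lo, hi, W, b):
    """Propagate the box [lo, hi] through x -> W @ x + b exactly."""
    W_pos, W_neg = W.clamp(min=0), W.clamp(max=0)
    return (W_pos @ lo + W_neg @ hi + b,   # lower bound
            W_pos @ hi + W_neg @ lo + b)   # upper bound

def interval_relu(lo, hi):
    """ReLU is monotone, so it maps interval endpoints directly."""
    return lo.clamp(min=0), hi.clamp(min=0)
```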
- Can Deep Neural Networks be Converted to Ultra Low-Latency Spiking Neural Networks? [3.2108350580418166]
Spiking neural networks (SNNs) operate via binary spikes distributed over time.
State-of-the-art training strategies for SNNs involve conversion from a non-spiking deep neural network (DNN).
We propose a new training algorithm that accurately captures the DNN's activation distributions, minimizing the error between the DNN and the converted SNN (a generic conversion-calibration sketch follows this entry).
arXiv Detail & Related papers (2021-12-22T18:47:45Z)
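The conversion mentioned in the entry above typically calibrates IF firing thresholds against the source DNN's activation statistics. The sketch below shows the simplest data-based variant, recording per-layer maximum ReLU outputs on calibration data; the entry's paper proposes a more accurate distribution-aware algorithm that is not reproduced here, and the function name is an illustrative assumption.

```python
import torch
import torch.nn as nn

@torch.no_grad()
def max_activation_scales(model: nn.Module, loader, device="cpu"):
    """Record the max post-ReLU activation per layer on calibration data.

    In basic threshold balancing, these maxima become the firing
    thresholds of the IF neurons that replace each ReLU, so that
    firing rates approximate the original activations.
    """
    scales, hooks = {}, []

    def make_hook(name):
        def hook(_module, _inputs, output):
            scales[name] = max(scales.get(name, 0.0), output.max().item())
        return hook

    for name, module in model.named_modules():
        if isinstance(module, nn.ReLU):
            hooks.append(module.register_forward_hook(make_hook(name)))
    for x, _ in loader:
        model(x.to(device))
    for h in hooks:
        h.remove()
    return scales
```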
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- Sub-bit Neural Networks: Learning to Compress and Accelerate Binary Neural Networks [72.81092567651395]
Sub-bit Neural Networks (SNNs) are a new type of binary quantization design tailored to compress and accelerate BNNs.
SNNs are trained with a kernel-aware optimization framework, which exploits binary quantization in the fine-grained convolutional kernel space.
Experiments on visual recognition benchmarks and hardware deployment on FPGA validate the great potential of SNNs.
arXiv Detail & Related papers (2021-10-18T11:30:29Z)
- Toward Trainability of Quantum Neural Networks [87.04438831673063]
Quantum Neural Networks (QNNs) have been proposed as generalizations of classical neural networks to achieve the quantum speed-up.
Serious bottlenecks exist for training QNNs due to vanishing gradients, which decay at a rate exponential in the number of input qubits.
We study QNNs with tree-tensor and step-controlled structures for the application of binary classification. Simulations show faster convergence rates and better accuracy compared to QNNs with random structures.
arXiv Detail & Related papers (2020-11-12T08:32:04Z)
- SiamSNN: Siamese Spiking Neural Networks for Energy-Efficient Object Tracking [20.595208488431766]
SiamSNN is the first deep SNN tracker that achieves short latency and low precision loss on the visual object tracking benchmarks OTB2013, VOT2016, and GOT-10k.
SiamSNN notably achieves low energy consumption and real-time performance on the neuromorphic chip TrueNorth.
arXiv Detail & Related papers (2020-03-17T08:49:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.