SpikSSD: Better Extraction and Fusion for Object Detection with Spiking Neuron Networks
- URL: http://arxiv.org/abs/2501.15151v2
- Date: Tue, 28 Jan 2025 03:08:59 GMT
- Title: SpikSSD: Better Extraction and Fusion for Object Detection with Spiking Neuron Networks
- Authors: Yimeng Fan, Changsong Liu, Mingyang Li, Wei Zhang
- Abstract summary: Spiking Neural Networks (SNNs) have gained widespread attention due to their low energy consumption and biological interpretability.
We propose SpikSSD, a novel Spiking Single Shot Multibox Detector.
Specifically, we design a full-spiking backbone network, MDS-ResNet, which effectively adjusts the membrane synaptic input distribution at each layer.
For spiking feature fusion, we introduce the Spiking Bi-direction Fusion Module (SBFM), which for the first time realizes bi-directional fusion of spiking features.
- Score: 11.556160544636116
- Abstract: As the third generation of neural networks, Spiking Neural Networks (SNNs) have gained widespread attention due to their low energy consumption and biological interpretability. Recently, SNNs have made considerable advancements in computer vision. However, efficiently conducting feature extraction and fusion under the spiking characteristics of SNNs for object detection remains a pressing challenge. To address this problem, we propose SpikSSD, a novel Spiking Single Shot Multibox Detector. Specifically, we design a full-spiking backbone network, MDS-ResNet, which effectively adjusts the membrane synaptic input distribution at each layer, achieving better spiking feature extraction. Additionally, for spiking feature fusion, we introduce the Spiking Bi-direction Fusion Module (SBFM), which for the first time realizes bi-directional fusion of spiking features, enhancing the multi-scale detection capability of the model. Experimental results show that SpikSSD achieves 40.8% mAP on the GEN1 dataset, and 76.3% and 52.4% mAP@0.5 on the VOC 2007 and COCO 2017 datasets respectively, with the lowest firing rate, outperforming existing SNN-based approaches at ultralow energy consumption. This work sets a new benchmark for future research in SNN-based object detection. Our code is publicly available at https://github.com/yimeng-fan/SpikSSD.
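For intuition, the sketch below shows in plain PyTorch how bi-directional fusion of spiking feature maps at two scales might look: a leaky integrate-and-fire (LIF) neuron keeps activations binary, a top-down path upsamples the deeper map into the shallower one, and a bottom-up path feeds the fused shallow map back into the deeper one. This is a minimal illustration under stated assumptions, not the authors' MDS-ResNet or SBFM implementation; the names SimpleLIF and SpikingBiFusion, and details such as nearest-neighbor upsampling, hard reset, and BatchNorm placement, are hypothetical.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SimpleLIF(nn.Module):
    """Minimal leaky integrate-and-fire neuron (forward pass only; no surrogate gradient)."""

    def __init__(self, tau: float = 2.0, v_threshold: float = 1.0):
        super().__init__()
        self.tau = tau
        self.v_threshold = v_threshold

    def forward(self, x):                     # x: [T, B, C, H, W] membrane input per time step
        v = torch.zeros_like(x[0])            # membrane potential
        spikes = []
        for t in range(x.shape[0]):
            v = v + (x[t] - v) / self.tau     # leaky integration
            s = (v >= self.v_threshold).float()  # fire on threshold crossing
            v = v * (1.0 - s)                 # hard reset after a spike
            spikes.append(s)
        return torch.stack(spikes)            # binary spike tensor, same shape as x


class SpikingBiFusion(nn.Module):
    """Toy bi-directional fusion of two spiking feature maps (illustrative only)."""

    def __init__(self, c_low: int, c_high: int):
        super().__init__()
        self.lateral = nn.Conv2d(c_high, c_low, kernel_size=1, bias=False)
        self.down = nn.Conv2d(c_low, c_high, kernel_size=3, stride=2, padding=1, bias=False)
        self.bn_low = nn.BatchNorm2d(c_low)
        self.bn_high = nn.BatchNorm2d(c_high)
        self.spike = SimpleLIF()

    def forward(self, s_low, s_high):
        # s_low: [T, B, c_low, H, W] shallow spikes; s_high: [T, B, c_high, H/2, W/2] deep spikes
        T, B = s_low.shape[0], s_low.shape[1]

        def over_time(op, z):
            # apply a 2D op by folding the time dimension into the batch
            out = op(z.flatten(0, 1))
            return out.view(T, B, *out.shape[1:])

        # top-down: project deep spikes to c_low channels, upsample, add to shallow spikes
        td = over_time(lambda z: F.interpolate(self.bn_low(self.lateral(z)),
                                               scale_factor=2, mode="nearest"), s_high)
        fused_low = self.spike(td + s_low)

        # bottom-up: downsample the fused shallow spikes, add to deep spikes
        bu = over_time(lambda z: self.bn_high(self.down(z)), fused_low)
        fused_high = self.spike(bu + s_high)
        return fused_low, fused_high


if __name__ == "__main__":
    T, B = 4, 2                                              # time steps, batch size
    fusion = SpikingBiFusion(c_low=64, c_high=128)
    low = torch.randint(0, 2, (T, B, 64, 32, 32)).float()    # dummy shallow spike maps
    high = torch.randint(0, 2, (T, B, 128, 16, 16)).float()  # dummy deep spike maps
    f_low, f_high = fusion(low, high)
    print(f_low.shape, f_high.shape)
```

Re-spiking each fused sum keeps the exchanged tensors binary, which is what distinguishes spiking feature fusion from a standard FPN/BiFPN-style fusion of real-valued features.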
Related papers
- Integer-Valued Training and Spike-Driven Inference Spiking Neural Network for High-performance and Energy-efficient Object Detection [15.154553304520164]
Spiking Neural Networks (SNNs) have bio-plausibility and low-power advantages over Artificial Neural Networks (ANNs).
In this work, we focus on bridging the performance gap between ANNs and SNNs on object detection.
We design a SpikeYOLO architecture to solve this problem by simplifying the vanilla YOLO and incorporating meta SNN blocks.
arXiv Detail & Related papers (2024-07-30T10:04:16Z) - SFOD: Spiking Fusion Object Detector [10.888008544975662]
Spiking Fusion Object Detector (SFOD) is a simple and efficient approach to SNN-based object detection.
We design a Spiking Fusion Module, achieving the first-time fusion of feature maps from different scales in SNNs applied to event cameras.
We establish state-of-the-art classification results based on SNNs, achieving 93.7% accuracy on the NCAR dataset.
arXiv Detail & Related papers (2024-03-22T13:24:50Z) - Fully Spiking Actor Network with Intra-layer Connections for Reinforcement Learning [51.386945803485084]
We focus on tasks where the agent needs to learn multi-dimensional deterministic policies for control.
Most existing spike-based RL methods take the firing rate as the output of SNNs, and convert it to represent continuous action space (i.e., the deterministic policy) through a fully-connected layer.
To develop a fully spiking actor network without any floating-point matrix operations, we draw inspiration from the non-spiking interneurons found in insects.
arXiv Detail & Related papers (2024-01-09T07:31:34Z) - SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence [51.6943465041708]
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency.
We contribute a full-stack toolkit for pre-processing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips.
arXiv Detail & Related papers (2023-10-25T13:15:17Z) - Enabling energy-efficient object detection with surrogate gradient descent in spiking neural networks [0.40054215937601956]
Spiking Neural Networks (SNNs) are a biologically plausible neural network model with significant advantages in both event-driven processing and spatio-temporal information processing.
In this study, we introduce the Current Mean Decoding (CMD) method, which solves the regression problem to facilitate the training of deep SNNs for object detection tasks.
Based on the gradient surrogate and CMD, we propose the SNN-YOLOv3 model for object detection.
arXiv Detail & Related papers (2023-09-07T15:48:00Z) - Deep Multi-Threshold Spiking-UNet for Image Processing [51.88730892920031]
This paper introduces the novel concept of Spiking-UNet for image processing, which combines the power of Spiking Neural Networks (SNNs) with the U-Net architecture.
To achieve an efficient Spiking-UNet, we face two primary challenges: ensuring high-fidelity information propagation through the network via spikes and formulating an effective training strategy.
Experimental results show that, on image segmentation and denoising, our Spiking-UNet achieves comparable performance to its non-spiking counterpart.
arXiv Detail & Related papers (2023-07-20T16:00:19Z) - Spiking Denoising Diffusion Probabilistic Models [11.018937744626387]
Spiking neural networks (SNNs) have ultra-low energy consumption and high biological plausibility.
We propose Spiking Denoising Diffusion Probabilistic Models (SDDPM), a new class of SNN-based generative models that achieve high sample quality.
Our approach achieves state-of-the-art results on generative tasks and substantially outperforms other SNN-based generative models.
arXiv Detail & Related papers (2023-06-29T15:43:06Z) - Spikingformer: Spike-driven Residual Learning for Transformer-based Spiking Neural Network [19.932683405796126]
Spiking neural networks (SNNs) offer a promising energy-efficient alternative to artificial neural networks.
SNNs suffer from non-spike computations caused by the structure of their residual connection.
We develop Spikingformer, a pure transformer-based spiking neural network.
arXiv Detail & Related papers (2023-04-24T09:44:24Z) - Spikformer: When Spiking Neural Network Meets Transformer [102.91330530210037]
We consider two biologically plausible structures, the Spiking Neural Network (SNN) and the self-attention mechanism.
We propose a novel Spiking Self Attention (SSA) as well as a powerful framework, named Spiking Transformer (Spikformer).
arXiv Detail & Related papers (2022-09-29T14:16:49Z) - Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity, and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)