A Spatial-channel-temporal-fused Attention for Spiking Neural Networks
- URL: http://arxiv.org/abs/2209.10837v3
- Date: Sun, 28 May 2023 09:44:32 GMT
- Title: A Spatial-channel-temporal-fused Attention for Spiking Neural Networks
- Authors: Wuque Cai, Hongze Sun, Rui Liu, Yan Cui, Jun Wang, Yang Xia, Dezhong
Yao, and Daqing Guo
- Abstract summary: Spiking neural networks (SNNs) mimic brain computational strategies and exhibit substantial capabilities in spatiotemporal information processing.
We propose a new spatial-channel-temporal-fused attention (SCTFA) module that can guide SNNs to efficiently capture underlying target regions.
- Score: 7.759491656618468
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking neural networks (SNNs) mimic brain computational strategies, and
exhibit substantial capabilities in spatiotemporal information processing. As
an essential factor for human perception, visual attention refers to the
dynamic process for selecting salient regions in biological vision systems.
Although visual attention mechanisms have achieved great success in computer
vision applications, they are rarely introduced into SNNs. Inspired by
experimental observations on predictive attentional remapping, we propose a new
spatial-channel-temporal-fused attention (SCTFA) module that can guide SNNs to
efficiently capture underlying target regions by utilizing accumulated
historical spatial-channel information in the present study. Through a
systematic evaluation on three event stream datasets (DVS Gesture,
SL-Animals-DVS and MNIST-DVS), we demonstrate that the SNN with the SCTFA
module (SCTFA-SNN) not only significantly outperforms the baseline SNN (BL-SNN)
and two other SNN models with degenerated attention modules, but also achieves
competitive accuracy with existing state-of-the-art methods. Additionally, our
detailed analysis shows that the proposed SCTFA-SNN model has strong robustness
to noise and outstanding stability when faced with incomplete data, while
maintaining acceptable complexity and efficiency. Overall, these findings
indicate that incorporating appropriate cognitive mechanisms of the brain may
provide a promising approach to elevate the capabilities of SNNs.
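The abstract's core mechanism is guiding SNNs with attention weights computed from accumulated historical spatial-channel information. A minimal pure-Python sketch of that idea, assuming a leaky running average as the history and a sigmoid gating rule (both are illustrative choices, not the paper's actual SCTFA module):

```python
import math

def sctfa_like_attention(frames, decay=0.6):
    """Illustrative sketch: modulate each time step's spatial-channel activity
    by attention weights derived from an accumulated history of past activity.
    `frames` is a list of T maps, each a C x H x W nested list of firing rates.
    The function name, decay rule, and sigmoid fusion are assumptions."""
    sigmoid = lambda x: 1.0 / (1.0 + math.exp(-x))
    history = None
    out = []
    for frame in frames:
        if history is None:
            # Start with an all-zero history matching the frame's shape.
            history = [[[0.0 for _ in row] for row in ch] for ch in frame]
        # Accumulate historical spatial-channel information (leaky average).
        history = [[[decay * h + (1 - decay) * v
                     for h, v in zip(hrow, vrow)]
                    for hrow, vrow in zip(hch, vch)]
                   for hch, vch in zip(history, frame)]
        # Attention-gate the current frame: salient history -> larger weight.
        out.append([[[v * sigmoid(h) for h, v in zip(hrow, vrow)]
                     for hrow, vrow in zip(hch, vch)]
                    for hch, vch in zip(history, frame)])
    return out
```

The point of the sketch is only the data flow: attention at time t depends on activity before t, which matches the "predictive attentional remapping" motivation in the abstract.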
Related papers
- Enhancing SNN-based Spatio-Temporal Learning: A Benchmark Dataset and Cross-Modality Attention Model [30.66645039322337]
High-quality benchmark datasets are of great importance to the advances of spiking neural networks (SNNs).
Yet, the SNN-based cross-modal fusion remains underexplored.
In this work, we present a neuromorphic dataset that can better exploit the inherent spatio-temporal dynamics of SNNs.
arXiv Detail & Related papers (2024-10-21T06:59:04Z) - Training Spiking Neural Networks via Augmented Direct Feedback Alignment [3.798885293742468]
Spiking neural networks (SNNs) are promising solutions for implementing neural networks in neuromorphic devices.
However, the nondifferentiable nature of SNN neurons makes it a challenge to train them.
In this paper, we propose using augmented direct feedback alignment (aDFA), a gradient-free approach based on random projection, to train SNNs.
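The summary's key idea is that direct feedback alignment sends the output error to hidden layers through a fixed random matrix, sidestepping the non-differentiable spike activation. A toy pure-Python sketch of one DFA update step (network shapes, step activation, and learning rate are illustrative assumptions; the paper's aDFA variant augments this basic scheme further):

```python
def dfa_step(x, target, W1, W2, B, lr=0.1):
    """Illustrative direct-feedback-alignment update for one hidden layer.
    The output error is routed to the hidden layer through the fixed random
    matrix B instead of W2's transpose, so no gradient of the spike
    activation is ever needed."""
    spike = lambda v: 1.0 if v > 0.0 else 0.0   # non-differentiable activation
    h = [spike(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    y = [sum(w * hi for w, hi in zip(row, h)) for row in W2]
    err = [yi - ti for yi, ti in zip(y, target)]
    # Random feedback: hidden "error" = B @ err (no backprop through spike).
    fb = [sum(b * e for b, e in zip(row, err)) for row in B]
    for i, row in enumerate(W2):                 # delta rule on the output layer
        for j in range(len(row)):
            row[j] -= lr * err[i] * h[j]
    for i, row in enumerate(W1):                 # feedback-aligned hidden update
        for j in range(len(row)):
            row[j] -= lr * fb[i] * x[j]
    return err
```

B is drawn once at initialization and never trained; that fixed random projection is what makes the approach gradient-free with respect to the spiking nonlinearity.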
arXiv Detail & Related papers (2024-09-12T06:22:44Z) - Fully Spiking Denoising Diffusion Implicit Models [61.32076130121347]
Spiking neural networks (SNNs) have garnered considerable attention owing to their ability to run on neuromorphic devices with super-high speeds.
We propose a novel approach, the fully spiking denoising diffusion implicit model (FSDDIM), to construct a diffusion model within SNNs.
We demonstrate that the proposed method outperforms the state-of-the-art fully spiking generative model.
arXiv Detail & Related papers (2023-12-04T09:07:09Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking
Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Transferability of coVariance Neural Networks and Application to
Interpretable Brain Age Prediction using Anatomical Features [119.45320143101381]
Graph convolutional networks (GCN) leverage topology-driven graph convolutional operations to combine information across the graph for inference tasks.
We have studied GCNs with covariance matrices as graphs in the form of coVariance neural networks (VNNs).
VNNs inherit the scale-free data processing architecture from GCNs and here, we show that VNNs exhibit transferability of performance over datasets whose covariance matrices converge to a limit object.
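The summary's central construction is using a sample covariance matrix as the graph shift operator of a graph filter. A small pure-Python sketch under that reading (the polynomial filter form and the `taps` parameters are assumptions borrowed from standard graph-filter notation, not the paper's exact architecture):

```python
def covariance_filter(samples, x, taps):
    """Illustrative coVariance-network-style graph filter: the sample
    covariance matrix C acts as the graph shift operator, and the output is
    the polynomial sum_k taps[k] * C^k @ x."""
    n, d = len(samples), len(samples[0])
    mean = [sum(s[i] for s in samples) / n for i in range(d)]
    # Sample covariance matrix C (unbiased, divides by n - 1).
    C = [[sum((s[i] - mean[i]) * (s[j] - mean[j]) for s in samples) / (n - 1)
          for j in range(d)] for i in range(d)]
    matvec = lambda M, v: [sum(M[i][j] * v[j] for j in range(d)) for i in range(d)]
    out = [0.0] * d
    z = list(x)                      # z starts as C^0 @ x
    for h in taps:
        out = [o + h * zi for o, zi in zip(out, z)]
        z = matvec(C, z)             # advance to the next power of C
    return out
```

Transferability in this setting then amounts to the filter output being stable as the sample covariance converges to its limit object.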
arXiv Detail & Related papers (2023-05-02T22:15:54Z) - STSC-SNN: Spatio-Temporal Synaptic Connection with Temporal Convolution
and Attention for Spiking Neural Networks [7.422913384086416]
Spiking Neural Networks (SNNs), as one of the algorithmic models in neuromorphic computing, have gained a great deal of research attention owing to their temporal processing capability.
Existing synaptic structures in SNNs are mostly full connections or spatial 2D convolutions, neither of which can adequately extract temporal dependencies.
We take inspiration from biological synapses and propose a spatio-temporal synaptic connection SNN model to enhance the spatio-temporal receptive fields of synaptic connections.
We show that endowing synaptic models with temporal dependencies can improve the performance of SNNs on classification tasks.
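The mechanism this entry describes is a synaptic temporal convolution that gives each connection a temporal receptive field wider than a single time step. A minimal causal-convolution sketch over a binary spike train (the fixed kernel stands in for the learned temporal filters of the actual STSC module):

```python
def temporal_synapse(spike_train, kernel):
    """Illustrative sketch: a causal 1-D convolution lets a synapse integrate
    a short window of past presynaptic spikes instead of only the current
    time step. `kernel[0]` weights the present spike, `kernel[k]` the spike
    k steps in the past."""
    T, K = len(spike_train), len(kernel)
    out = []
    for t in range(T):
        # Causal convolution: only taps with t - k >= 0 contribute.
        out.append(sum(kernel[k] * spike_train[t - k]
                       for k in range(K) if t - k >= 0))
    return out
```

A decaying kernel such as `[1.0, 0.5]` makes an isolated spike influence the postsynaptic input for two time steps, which is exactly the widened temporal dependency the summary refers to.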
arXiv Detail & Related papers (2022-10-11T08:13:22Z) - On the Intrinsic Structures of Spiking Neural Networks [66.57589494713515]
Recent years have seen a surge of interest in SNNs owing to their remarkable potential to handle time-dependent and event-driven data.
There has been a dearth of comprehensive studies examining the impact of intrinsic structures within spiking computations.
This work delves deep into the intrinsic structures of SNNs, by elucidating their influence on the expressivity of SNNs.
arXiv Detail & Related papers (2022-06-21T09:42:30Z) - Spiking Neural Networks for Visual Place Recognition via Weighted
Neuronal Assignments [24.754429120321365]
Spiking neural networks (SNNs) offer compelling potential advantages, including energy efficiency and low latency.
One promising area for high performance SNNs is template matching and image recognition.
This research introduces the first high performance SNN for the Visual Place Recognition (VPR) task.
arXiv Detail & Related papers (2021-09-14T05:40:40Z) - Exploiting Spiking Dynamics with Spatial-temporal Feature Normalization
in Graph Learning [9.88508686848173]
Biological spiking neurons with intrinsic dynamics underlie the powerful representation and learning capabilities of the brain.
Despite recent tremendous progress in spiking neural networks (SNNs) for handling Euclidean-space tasks, it still remains challenging to exploit SNNs in processing non-Euclidean-space data.
Here we present a general spike-based modeling framework that enables the direct training of SNNs for graph learning.
arXiv Detail & Related papers (2021-06-30T11:20:16Z) - On the benefits of robust models in modulation recognition [53.391095789289736]
Deep Neural Networks (DNNs) using convolutional layers are state-of-the-art in many tasks in communications.
In other domains, like image classification, DNNs have been shown to be vulnerable to adversarial perturbations.
We propose a novel framework to test the robustness of current state-of-the-art models.
arXiv Detail & Related papers (2021-03-27T19:58:06Z) - Progressive Tandem Learning for Pattern Recognition with Deep Spiking
Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences.