A dynamic vision sensor object recognition model based on trainable event-driven convolution and spiking attention mechanism
- URL: http://arxiv.org/abs/2409.12691v1
- Date: Thu, 19 Sep 2024 12:01:05 GMT
- Title: A dynamic vision sensor object recognition model based on trainable event-driven convolution and spiking attention mechanism
- Authors: Peng Zheng, Qian Zhou
- Abstract summary: Spiking Neural Networks (SNNs) are well-suited for processing event streams from Dynamic Vision Sensors (DVSs).
To extract features from DVS objects, SNNs commonly use event-driven convolution with fixed kernel parameters.
We propose a DVS object recognition model that utilizes a trainable event-driven convolution and a spiking attention mechanism.
- Score: 9.745798797360886
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) are well-suited for processing event streams from Dynamic Vision Sensors (DVSs) due to their sparse spike-based coding and asynchronous event-driven computation. To extract features from DVS objects, SNNs commonly use event-driven convolution with fixed kernel parameters. Such filters respond strongly to features in specific orientations while disregarding others, leading to incomplete feature extraction. To improve the feature extraction capability of event-driven convolution in SNNs, we propose a DVS object recognition model that combines a trainable event-driven convolution with a spiking attention mechanism. The trainable event-driven convolution updates its kernels through gradient descent and can therefore extract local features of the event stream more effectively than traditional event-driven convolution with fixed kernels. The spiking attention mechanism is then used to extract global dependence features. Our model outperforms the baseline methods on two neuromorphic datasets, MNIST-DVS and the more complex CIFAR10-DVS, and also classifies short event streams well. These results show that our model improves the performance of event-driven convolutional SNNs on DVS objects.
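Below is a minimal PyTorch sketch of the two ideas in the abstract: an integrate-and-fire layer whose convolution kernel is an ordinary learnable weight updated by gradient descent (made trainable despite the non-differentiable spike via a surrogate gradient), followed by a toy self-attention block over per-time-step spike rates standing in for the spiking attention mechanism. The layer sizes, neuron model, surrogate gradient, and attention layout are assumptions for illustration; the abstract does not specify the authors' exact architecture.

```python
# Minimal sketch of the two ideas described above; this is NOT the authors'
# released code. The layer sizes, LIF dynamics, surrogate gradient, and the
# attention layout are illustrative assumptions.
import torch
import torch.nn as nn


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular (box-car) surrogate gradient."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * (v.abs() < 0.5).float()


spike = SpikeFn.apply


class TrainableEventConv(nn.Module):
    """Integrate-and-fire layer whose kernel is an ordinary nn.Conv2d weight,
    so it is learned by gradient descent instead of being fixed (e.g. Gabor)
    as in classic event-driven convolution."""

    def __init__(self, in_ch, out_ch, v_th=1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1, bias=False)
        self.v_th = v_th

    def forward(self, events):            # events: (T, B, C, H, W) binary frames
        mem, out = 0.0, []
        for t in range(events.shape[0]):  # accumulate events step by step
            mem = mem + self.conv(events[t])
            s = spike(mem - self.v_th)
            mem = mem * (1.0 - s)         # hard reset where a spike fired
            out.append(s)
        return torch.stack(out)           # (T, B, out_ch, H, W) spike maps


class SpikingAttention(nn.Module):
    """Toy self-attention over per-time-step spike rates, standing in for the
    paper's spiking attention block that extracts global dependence features."""

    def __init__(self, ch):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(ch, ch), nn.Linear(ch, ch), nn.Linear(ch, ch)

    def forward(self, spikes):            # spikes: (T, B, C, H, W)
        x = spikes.mean(dim=(3, 4)).permute(1, 0, 2)   # (B, T, C) spike rates
        q, k, v = self.q(x), self.k(x), self.v(x)
        att = torch.softmax(q @ k.transpose(1, 2) / q.shape[-1] ** 0.5, dim=-1)
        return (att @ v).mean(dim=1)      # (B, C) globally attended features


if __name__ == "__main__":
    ev = (torch.rand(10, 4, 2, 34, 34) > 0.9).float()  # fake DVS event frames
    feats = TrainableEventConv(2, 16)(ev)
    logits = nn.Linear(16, 10)(SpikingAttention(16)(feats))
    print(logits.shape)                   # torch.Size([4, 10])
```

The surrogate gradient is the piece that makes the convolution kernel trainable at all: the forward pass keeps the binary spike, while the backward pass substitutes a smooth derivative so gradient descent can reach the kernel weights.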
Related papers
- Unveiling the Power of Sparse Neural Networks for Feature Selection [60.50319755984697]
Sparse Neural Networks (SNNs) have emerged as powerful tools for efficient feature selection.
We show that feature selection with SNNs trained with dynamic sparse training (DST) algorithms can achieve, on average, more than 50% memory and 55% FLOPs reduction.
arXiv Detail & Related papers (2024-08-08T16:48:33Z) - Enhancing Adaptive History Reserving by Spiking Convolutional Block Attention Module in Recurrent Neural Networks [21.509659756334802]
Spiking neural networks (SNNs) serve as one type of efficient model for processing spatio-temporal patterns in time series.
In this paper, we develop a recurrent spiking neural network (RSNN) model embedded with an advanced spiking convolutional attention module (SCBAM) component.
It adaptively invokes history information in spatial and temporal channels through SCBAM, which brings the advantages of efficiently recalling history and eliminating redundancy.
arXiv Detail & Related papers (2024-01-08T08:05:34Z) - Automotive Object Detection via Learning Sparse Events by Spiking Neurons [20.930277906912394]
Spiking Neural Networks (SNNs) provide a temporal representation that is inherently aligned with event-based data.
We present a specialized spiking feature pyramid network (SpikeFPN) optimized for automotive event-based object detection.
arXiv Detail & Related papers (2023-07-24T15:47:21Z) - Razor SNN: Efficient Spiking Neural Network with Temporal Embeddings [20.048679993279936]
Event streams generated by dynamic vision sensors (DVS) are sparse and non-uniform in the spatial domain.
We propose an event-sparsification spiking framework, dubbed Razor SNN, which progressively prunes pointless event frames.
Our Razor SNN achieves competitive performance consistently on four events-based benchmarks.
arXiv Detail & Related papers (2023-06-30T12:17:30Z) - Accurate and Efficient Event-based Semantic Segmentation Using Adaptive Spiking Encoder-Decoder Network [20.05283214295881]
Spiking neural networks (SNNs) are emerging as promising solutions for processing dynamic, asynchronous signals from event-based sensors.
We develop an efficient spiking encoder-decoder network (SpikingEDN) for large-scale event-based semantic segmentation tasks.
We harness the adaptive threshold which improves network accuracy, sparsity and robustness in streaming inference.
arXiv Detail & Related papers (2023-04-24T07:12:50Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders of magnitude improvement in terms of energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z) - AEGNN: Asynchronous Event-based Graph Neural Networks [54.528926463775946]
Asynchronous Event-based Graph Neural Networks (AEGNNs) generalize standard GNNs to process events as "evolving" spatio-temporal graphs.
AEGNNs are easily trained on synchronous inputs and can be converted to efficient, "asynchronous" networks at test time.
arXiv Detail & Related papers (2022-03-31T16:21:12Z) - Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z) - Adversarial Feature Augmentation and Normalization for Visual Recognition [109.6834687220478]
Recent advances in computer vision take advantage of adversarial data augmentation to ameliorate the generalization ability of classification models.
Here, we present an effective and efficient alternative that advocates adversarial augmentation on intermediate feature embeddings.
We validate the proposed approach across diverse visual recognition tasks with representative backbone networks.
arXiv Detail & Related papers (2021-03-22T20:36:34Z)