Temporal-wise Attention Spiking Neural Networks for Event Streams
Classification
- URL: http://arxiv.org/abs/2107.11711v1
- Date: Sun, 25 Jul 2021 02:28:44 GMT
- Title: Temporal-wise Attention Spiking Neural Networks for Event Streams
Classification
- Authors: Man Yao, Huanhuan Gao, Guangshe Zhao, Dingheng Wang, Yihan Lin, Zhaoxu
Yang, Guoqi Li
- Abstract summary: Spiking neural network (SNN) is a brain-inspired, event-triggered computing model.
In this work, we propose a temporal-wise attention SNN model to learn frame-based representation for processing event streams.
We demonstrate that TA-SNN models improve the accuracy of event streams classification tasks.
- Score: 6.623034896340885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: How to effectively and efficiently deal with spatio-temporal event streams,
where the events are generally sparse and non-uniform and have the microsecond
temporal resolution, is of great value and has various real-life applications.
Spiking neural network (SNN), as one of the brain-inspired event-triggered
computing models, has the potential to extract effective spatio-temporal
features from the event streams. However, when aggregating individual events
into frames with a new, higher temporal resolution, existing SNN models do not
account for the fact that the serial frames have different signal-to-noise
ratios, since event streams are sparse and non-uniform. This situation
degrades the performance of existing SNNs. In this work, we propose a
temporal-wise attention SNN (TA-SNN) model to learn frame-based representation
for processing event streams. Concretely, we extend the attention concept to
temporal-wise input to judge the significance of frames for the final decision
at the training stage, and discard the irrelevant frames at the inference
stage. We demonstrate that TA-SNN models improve the accuracy of event streams
classification tasks. We also study the impact of multiple-scale temporal
resolutions for frame-based representation. Our approach is tested on three
different classification tasks: gesture recognition, image classification, and
spoken digit recognition. We report the state-of-the-art results on these
tasks, and achieve a substantial accuracy improvement (almost 19%) for gesture
recognition using only 60 ms of input.
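To make the temporal-wise attention idea concrete, the following is a rough, hypothetical sketch of a squeeze-and-excitation-style scoring over the frame axis: each frame is squeezed to a scalar statistic, a small two-layer network produces per-frame significance scores, frames are softly reweighted during training, and low-scoring frames are discarded at inference. The weights `w1`, `w2` and the threshold `tau` are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def temporal_attention(frames, w1, w2, tau=0.5, training=True):
    """Reweight T serial event frames by a learned temporal score (sketch).

    frames: (T, D) array -- T frames, each flattened to D features.
    w1: (H, T) and w2: (T, H) -- hypothetical two-layer scoring weights.
    """
    s = frames.mean(axis=1)                          # squeeze: one statistic per frame, (T,)
    scores = sigmoid(w2 @ np.maximum(w1 @ s, 0.0))   # excitation: per-frame scores in (0, 1)
    if training:
        return frames * scores[:, None]              # soft reweighting at training time
    keep = scores > tau                              # discard irrelevant frames at inference
    return frames[keep] * scores[keep, None]
```

At inference the returned tensor may have fewer than T frames, which is where the latency saving in the abstract would come from under this reading.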
Related papers
- Towards Low-latency Event-based Visual Recognition with Hybrid Step-wise Distillation Spiking Neural Networks [50.32980443749865]
Spiking neural networks (SNNs) have garnered significant attention for their low power consumption and high biological plausibility.
Current SNNs struggle to balance accuracy and latency in neuromorphic datasets.
We propose a Hybrid Step-wise Distillation (HSD) method, tailored for neuromorphic datasets.
arXiv Detail & Related papers (2024-09-19T06:52:34Z) - Representation Learning on Event Stream via an Elastic Net-incorporated Tensor Network [1.9515859963221267]
We present a novel representation method which can capture global correlations of all events in the event stream simultaneously.
Our method achieves effective results in applications such as noise filtering, compared with state-of-the-art methods.
arXiv Detail & Related papers (2024-01-16T02:51:47Z) - A Distance Correlation-Based Approach to Characterize the Effectiveness of Recurrent Neural Networks for Time Series Forecasting [1.9950682531209158]
We provide an approach to link time series characteristics with RNN components via the versatile metric of distance correlation.
We empirically show that the RNN activation layers learn the lag structures of time series well.
We also show that the activation layers cannot adequately model moving average and heteroskedastic time series processes.
arXiv Detail & Related papers (2023-07-28T22:32:08Z) - Razor SNN: Efficient Spiking Neural Network with Temporal Embeddings [20.048679993279936]
Event streams generated by dynamic vision sensors (DVS) are sparse and non-uniform in the spatial domain.
We propose an event sparsification spiking framework, dubbed Razor SNN, which progressively prunes uninformative event frames.
Our Razor SNN achieves competitive performance consistently on four events-based benchmarks.
arXiv Detail & Related papers (2023-06-30T12:17:30Z) - Temporal Contrastive Learning for Spiking Neural Networks [23.963069990569714]
Biologically inspired spiking neural networks (SNNs) have garnered considerable attention due to their low energy consumption and superior temporal information processing capabilities.
We propose a novel method to obtain SNNs with low latency and high performance by incorporating contrastive supervision with temporal domain information.
arXiv Detail & Related papers (2023-05-23T10:31:46Z) - AEGNN: Asynchronous Event-based Graph Neural Networks [54.528926463775946]
Event-based Graph Neural Networks generalize standard GNNs to process events as "evolving" spatio-temporal graphs.
AEGNNs are easily trained on synchronous inputs and can be converted to efficient, "asynchronous" networks at test time.
arXiv Detail & Related papers (2022-03-31T16:21:12Z) - Ultra-low Latency Spiking Neural Networks with Spatio-Temporal Compression and Synaptic Convolutional Block [4.081968050250324]
Spiking neural networks (SNNs) have spatio-temporal information processing capability, low power consumption, and high biological plausibility.
The N-MNIST, CIFAR10-DVS, and DVS128 Gesture datasets require aggregating individual events into frames with a higher temporal resolution for event stream classification.
We propose a spatio-temporal compression method to aggregate individual events into a few time steps of synaptic current to reduce the training and inference latency.
arXiv Detail & Related papers (2022-03-18T15:14:13Z) - Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z) - Continuity-Discrimination Convolutional Neural Network for Visual Object Tracking [150.51667609413312]
This paper proposes a novel model, named Continuity-Discrimination Convolutional Neural Network (CD-CNN) for visual object tracking.
To address this problem, CD-CNN models temporal appearance continuity based on the idea of temporal slowness.
In order to alleviate inaccurate target localization and drifting, we propose a novel notion, object-centroid.
arXiv Detail & Related papers (2021-04-18T06:35:03Z) - A Prospective Study on Sequence-Driven Temporal Sampling and Ego-Motion Compensation for Action Recognition in the EPIC-Kitchens Dataset [68.8204255655161]
Action recognition is one of the most challenging research fields in computer vision.
Sequences recorded under ego-motion have become especially relevant.
The proposed method copes with this by estimating the ego-motion, or camera motion, of the recording.
arXiv Detail & Related papers (2020-08-26T14:44:45Z) - Event-based Asynchronous Sparse Convolutional Networks [54.094244806123235]
Event cameras are bio-inspired sensors that respond to per-pixel brightness changes in the form of asynchronous and sparse "events".
We present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output.
We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks.
arXiv Detail & Related papers (2020-03-20T08:39:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.