A Spike Learning System for Event-driven Object Recognition
- URL: http://arxiv.org/abs/2101.08850v1
- Date: Thu, 21 Jan 2021 20:57:53 GMT
- Title: A Spike Learning System for Event-driven Object Recognition
- Authors: Shibo Zhou, Wei Wang, Xiaohua Li, Zhanpeng Jin
- Abstract summary: Event-driven sensors such as LiDAR and dynamic vision sensors (DVS) have attracted increasing attention in high-resolution and high-speed applications.
We present a spiking learning system that uses the spiking neural network (SNN) with a novel temporal coding for accurate and fast object recognition.
- Score: 8.875351982997554
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event-driven sensors such as LiDAR and dynamic vision sensors (DVS) have
attracted increasing attention in high-resolution and high-speed applications. Much
work has been devoted to improving recognition accuracy. However, the essential
topic of recognition delay, or time efficiency, remains largely under-explored. In
this paper, we present a spiking learning system that uses the spiking neural
network (SNN) with a novel temporal coding for accurate and fast object
recognition. The proposed temporal coding scheme maps each event's arrival time
and data into SNN spike times, so asynchronously arriving events are processed
immediately, without delay. The scheme integrates naturally with the SNN's
asynchronous processing capability to improve time efficiency. A key
advantage over existing systems is that the event accumulation time for each
recognition task is determined automatically by the system rather than pre-set
by the user. The system can finish recognition early without waiting for all
the input events. Extensive experiments were conducted over a list of 7 LiDAR
and DVS datasets. The results demonstrated that the proposed system had
state-of-the-art recognition accuracy while achieving remarkable time
efficiency. Recognition delay was reduced by 56.3% to 91.7% across various
experimental settings on the popular KITTI dataset.
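To make the core idea concrete, the following is a minimal sketch (not the authors' released code) of the mechanism described in the abstract: each event's arrival time and measured value are mapped to an input spike time, and recognition stops at the first output spike rather than after a preset accumulation window. The names `encode_event`, `recognize`, and the `snn_step` callback, as well as all constants, are hypothetical and chosen only for illustration.

```python
# Illustrative sketch of event-to-spike temporal coding with an early,
# first-spike decision. All names and constants are assumptions, not the
# paper's actual implementation.

def encode_event(arrival_time_s, value, value_max, t_scale=1e-3):
    """Map an event's arrival time and data value to an input spike time.

    Earlier arrival and stronger values yield earlier spikes, so each event
    can be injected into the network as soon as it arrives.
    """
    latency = t_scale * (1.0 - value / value_max)  # stronger value -> shorter latency
    return arrival_time_s + latency

def recognize(event_stream, snn_step, t_max=0.1):
    """Feed asynchronously arriving events and stop at the first output spike.

    `event_stream` yields (arrival_time_s, neuron_index, value); `snn_step` is
    a hypothetical callback that advances a spiking network to the current
    time with the input spikes seen so far and returns the indices of output
    neurons that have fired.
    """
    spikes = []
    for arrival_time_s, neuron_index, value in event_stream:
        spikes.append((encode_event(arrival_time_s, value, value_max=255.0),
                       neuron_index))
        fired = snn_step(spikes, now=arrival_time_s)
        if fired:
            # The first output spike decides the class, so the accumulation
            # time is set by the network itself rather than by the user.
            return fired[0], arrival_time_s
        if arrival_time_s > t_max:
            break  # fallback cap so the sketch always terminates
    return None, t_max
```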
Related papers
- VALO: A Versatile Anytime Framework for LiDAR-based Object Detection Deep Neural Networks [4.953750672237398]
This work addresses the challenge of adapting LiDAR object detection deep neural networks (DNNs) to dynamic deadline requirements.
We introduce VALO (Versatile Anytime algorithm for LiDAR Object detection), a novel data-centric approach that enables anytime computing of 3D LiDAR object detection DNNs.
We implement VALO on state-of-the-art 3D LiDAR object detection networks, namely CenterPoint and VoxelNext, and demonstrate its dynamic adaptability to a wide range of time constraints.
arXiv Detail & Related papers (2024-09-17T20:30:35Z) - RIDE: Real-time Intrusion Detection via Explainable Machine Learning
Implemented in a Memristor Hardware Architecture [24.824596231020585]
We propose a packet-level network intrusion detection solution that makes use of Recurrent Autoencoders to integrate an arbitrary-length sequence of packets into a more compact joint feature embedding.
We show that our approach leads to an extremely efficient, real-time solution with high detection accuracy at the packet level.
arXiv Detail & Related papers (2023-11-27T17:30:19Z) - Braille Letter Reading: A Benchmark for Spatio-Temporal Pattern
Recognition on Neuromorphic Hardware [50.380319968947035]
Recent deep learning approaches have reached high accuracy on such tasks, but their implementation on conventional embedded solutions remains computationally and energy expensive.
We propose a new benchmark for tactile pattern recognition at the edge based on Braille letter reading.
We trained and compared feed-forward and recurrent spiking neural networks (SNNs) offline using back-propagation through time with surrogate gradients (see the training sketch after this list), then deployed them on the Intel Loihi neuromorphic chip for efficient inference.
Our results show that the LSTM outperforms the recurrent SNN in terms of accuracy by 14%. However, the recurrent SNN on Loihi is 237 times more energy efficient.
arXiv Detail & Related papers (2022-05-30T14:30:45Z) - Real-Time Activity Recognition and Intention Recognition Using a
Vision-based Embedded System [4.060731229044571]
We introduce a real-time activity recognition system to recognize people's intentions to pass or not to pass a door.
If applied to elevators and automatic doors, this system will save energy and increase efficiency.
Our embedded system was implemented with an accuracy of 98.78% on our Intention Recognition dataset.
arXiv Detail & Related papers (2021-07-27T11:38:44Z) - Deep Cellular Recurrent Network for Efficient Analysis of Time-Series
Data with Spatial Information [52.635997570873194]
This work proposes a novel deep cellular recurrent neural network (DCRNN) architecture to process complex multi-dimensional time series data with spatial information.
The proposed architecture achieves state-of-the-art performance while using substantially fewer trainable parameters than comparable methods in the literature.
arXiv Detail & Related papers (2021-01-12T20:08:18Z) - Identity-Aware Attribute Recognition via Real-Time Distributed Inference
in Mobile Edge Clouds [53.07042574352251]
We design novel models for pedestrian attribute recognition with re-ID in an MEC-enabled camera monitoring system.
We propose a novel inference framework with a set of distributed modules, by jointly considering the attribute recognition and person re-ID.
We then devise a learning-based algorithm for distributing the modules of the proposed inference framework.
arXiv Detail & Related papers (2020-08-12T12:03:27Z) - Object Tracking through Residual and Dense LSTMs [67.98948222599849]
Deep learning trackers based on LSTM (Long Short-Term Memory) recurrent neural networks have emerged as a powerful alternative.
DenseLSTMs outperform Residual and regular LSTM, and offer a higher resilience to nuisances.
Our case study supports the adoption of residual-based RNNs for enhancing the robustness of other trackers.
arXiv Detail & Related papers (2020-06-22T08:20:17Z) - Deep ConvLSTM with self-attention for human activity decoding using
wearables [0.0]
We propose a deep neural network architecture that not only captures features of multiple sensor time-series data but also selects important time points.
We show the validity of the proposed approach across different data sampling strategies and demonstrate that the self-attention mechanism yields a significant improvement.
The proposed methods open avenues for better decoding of human activity from multiple body sensors over extended periods of time.
arXiv Detail & Related papers (2020-05-02T04:30:31Z) - Event-based Asynchronous Sparse Convolutional Networks [54.094244806123235]
Event cameras are bio-inspired sensors that respond to per-pixel brightness changes in the form of asynchronous and sparse "events"
We present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output.
We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks.
arXiv Detail & Related papers (2020-03-20T08:39:49Z) - Temporal Pulses Driven Spiking Neural Network for Fast Object
Recognition in Autonomous Driving [65.36115045035903]
We propose an approach that addresses the object recognition problem directly with raw temporal pulses using a spiking neural network (SNN).
Being evaluated on various datasets, our proposed method has shown comparable performance as the state-of-the-art methods, while achieving remarkable time efficiency.
arXiv Detail & Related papers (2020-01-24T22:58:55Z)
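The Braille-reading entry above trains its SNNs with back-propagation through time using surrogate gradients. The sketch below is a generic PyTorch illustration of that technique for a single leaky integrate-and-fire (LIF) layer, not that paper's code; the fast-sigmoid surrogate, the threshold of 1.0, and the layer sizes (12 input channels, 27 output classes) are assumptions made for the example.

```python
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, fast-sigmoid surrogate gradient backward."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        scale = 10.0  # surrogate sharpness (assumed)
        return grad_out / (scale * v.abs() + 1.0) ** 2

def run_snn(x, w_in, w_out, beta=0.9):
    """Unroll one LIF layer over time; BPTT differentiates through this loop."""
    batch, steps, _ = x.shape
    v = torch.zeros(batch, w_in.shape[1])
    logits = torch.zeros(batch, w_out.shape[1])
    for t in range(steps):
        v = beta * v + x[:, t] @ w_in   # leaky integration of the input current
        s = SpikeFn.apply(v - 1.0)      # spike when the membrane crosses threshold 1.0
        v = v - s                       # soft reset after a spike
        logits = logits + s @ w_out     # accumulate output-layer evidence
    return logits

# Tiny usage example on random spike trains (shapes are assumptions).
w_in = (0.1 * torch.randn(12, 64)).requires_grad_()
w_out = (0.1 * torch.randn(64, 27)).requires_grad_()
x = (torch.rand(8, 100, 12) < 0.05).float()        # batch of 100-step spike trains
target = torch.randint(0, 27, (8,))
loss = torch.nn.functional.cross_entropy(run_snn(x, w_in, w_out), target)
loss.backward()  # gradients reach w_in and w_out through the surrogate
```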