A Hybrid Neuromorphic Object Tracking and Classification Framework for
Real-time Systems
- URL: http://arxiv.org/abs/2007.11404v1
- Date: Tue, 21 Jul 2020 07:11:27 GMT
- Title: A Hybrid Neuromorphic Object Tracking and Classification Framework for
Real-time Systems
- Authors: Andres Ussa, Chockalingam Senthil Rajen, Deepak Singla, Jyotibdha
Acharya, Gideon Fu Chuanrong, Arindam Basu and Bharath Ramesh
- Abstract summary: This paper proposes a real-time, hybrid neuromorphic framework for object tracking and classification using event-based cameras.
Unlike traditional event-by-event processing, this work uses a mixed frame-and-event approach to achieve energy savings while maintaining high performance.
- Score: 5.959466944163293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning inference that must largely take place on the 'edge' is a
computationally and memory-intensive workload, making it intractable for
low-power, embedded platforms such as mobile nodes and remote security
applications. To address this challenge, this paper proposes a real-time,
hybrid neuromorphic framework for object tracking and classification using
event-based cameras, which offer low power consumption (5-14 mW) and high
dynamic range (120 dB). Unlike traditional event-by-event processing, this
work uses a mixed frame-and-event approach to achieve energy savings while
maintaining high performance. Using a frame-based region proposal method
based on the density of foreground events, a hardware-friendly object
tracking scheme is implemented using the apparent object velocity while
tackling occlusion scenarios. The object tracks are converted back to spikes
for TrueNorth classification via the energy-efficient deep network (EEDN)
pipeline. Using originally collected datasets, we train the TrueNorth model
on the hardware track outputs, instead of on ground-truth object locations as
is commonly done, and demonstrate the ability of our system to handle
practical surveillance scenarios. As an optional paradigm, to exploit the low
latency and asynchronous nature of neuromorphic vision sensors (NVS), we also
propose a continuous-time tracker, implemented in C++, in which each event is
processed individually. We then extensively compare the proposed
methodologies to state-of-the-art event-based and frame-based methods for
object tracking and classification, and demonstrate the suitability of our
neuromorphic approach for real-time and embedded applications without
sacrificing performance. Finally, we showcase the efficacy of the proposed
system against a standard RGB camera setup when evaluated over several hours
of traffic recordings.
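To make the pipeline's front end concrete, here is a minimal sketch of a density-based region proposal over events accumulated for one frame duration. It is a hypothetical illustration only: the function names, thresholds, and the use of NumPy/SciPy are assumptions, not the authors' implementation.

    # Hypothetical sketch: propose object regions from the density of
    # foreground events accumulated over one frame duration.
    import numpy as np
    from scipy import ndimage

    def propose_regions(events, height, width, min_count=3, min_area=25):
        """events: iterable of (x, y, t, polarity) tuples for one frame."""
        count = np.zeros((height, width), dtype=np.int32)
        for x, y, _, _ in events:
            count[y, x] += 1                     # per-pixel event density
        mask = count >= min_count                # keep dense foreground pixels
        labels, _ = ndimage.label(mask)          # group pixels into blobs
        boxes = []
        for sl in ndimage.find_objects(labels):
            ys, xs = sl
            if (ys.stop - ys.start) * (xs.stop - xs.start) >= min_area:
                boxes.append((xs.start, ys.start, xs.stop, ys.stop))
        return boxes

A tracker in the spirit of the paper could then associate these boxes across frames, using each object's apparent velocity to predict its next position, with the matched crops converted to spikes for EEDN classification.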
Related papers
- Detecting Every Object from Events [24.58024539462497]
We propose Detecting Every Object in Events (DEOE), an approach tailored for achieving high-speed, class-agnostic open-world object detection in event-based vision.
Our code is available at https://github.com/Hatins/DEOE.
arXiv Detail & Related papers (2024-04-08T08:20:53Z)
- Exploring Dynamic Transformer for Efficient Object Tracking [58.120191254379854]
We propose DyTrack, a dynamic transformer framework for efficient tracking.
DyTrack automatically learns to configure proper reasoning routes for various inputs, gaining better utilization of the available computational budget.
Experiments on multiple benchmarks demonstrate that DyTrack achieves promising speed-precision trade-offs with only a single model.
arXiv Detail & Related papers (2024-03-26T12:31:58Z)
- SpikeMOT: Event-based Multi-Object Tracking with Sparse Motion Features [52.213656737672935]
SpikeMOT is an event-based multi-object tracker.
SpikeMOT uses spiking neural networks to extract sparse spatiotemporal features from event streams associated with objects.
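The summary only names the technique, but spiking feature extractors are commonly built from leaky integrate-and-fire (LIF) neurons; the following generic LIF update is an illustration, not SpikeMOT's actual architecture.

    # Generic leaky integrate-and-fire (LIF) step; illustrative only,
    # not SpikeMOT's feature extractor.
    import numpy as np

    def lif_step(v, input_current, leak=0.9, threshold=1.0):
        """One timestep of an LIF layer: leak, integrate, fire, reset."""
        v = leak * v + input_current            # leaky integration
        spikes = (v >= threshold).astype(np.float32)
        v = np.where(spikes > 0, 0.0, v)        # reset after firing
        return v, spikes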
arXiv Detail & Related papers (2023-09-29T05:13:43Z)
- EventTransAct: A video transformer-based framework for Event-camera based action recognition [52.537021302246664]
Event cameras offer new opportunities compared to standard action recognition in RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
In order to better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
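$\mathcal{L}_{EC}$ is not defined in this summary; the sketch below is a standard NT-Xent-style contrastive loss over paired event-clip embeddings, offered only as an assumed stand-in for the paper's formulation.

    # Assumed NT-Xent-style contrastive loss over two augmented views of the
    # same event clips; a stand-in, not the paper's exact L_EC.
    import numpy as np

    def contrastive_loss(z_a, z_b, temperature=0.1):
        """z_a, z_b: (N, D) L2-normalized embeddings; row i of each is a positive pair."""
        sim = z_a @ z_b.T / temperature                   # pairwise similarities
        sim = sim - sim.max(axis=1, keepdims=True)        # numerical stability
        log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
        return -np.mean(np.diag(log_prob))                # pull matched pairs together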
arXiv Detail & Related papers (2023-08-25T23:51:07Z)
- Event-Free Moving Object Segmentation from Moving Ego Vehicle [88.33470650615162]
Moving object segmentation (MOS) in dynamic scenes is an important, challenging, but under-explored research topic for autonomous driving.
Most segmentation methods leverage motion cues obtained from optical flow maps.
We propose to exploit event cameras, which provide rich motion cues without relying on optical flow, for better video understanding.
arXiv Detail & Related papers (2023-04-28T23:43:10Z)
- EV-Catcher: High-Speed Object Catching Using Low-latency Event-based Neural Networks [107.62975594230687]
We demonstrate an application where event cameras excel: accurately estimating the impact location of fast-moving objects.
We introduce a lightweight event representation called Binary Event History Image (BEHI) to encode event data at low latency.
We show that the system is capable of achieving a success rate of 81% in catching balls targeted at different locations, with a velocity of up to 13 m/s even on compute-constrained embedded platforms.
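Going by the summary's wording alone, a binary event-history-style image can be sketched as a per-pixel flag marking whether any event arrived in the current window; the paper's exact BEHI construction may differ.

    # Sketch of a binary event image: 1 where any event occurred in the
    # window, 0 elsewhere. The paper's exact BEHI definition may differ.
    import numpy as np

    def binary_event_image(events, height, width):
        """events: iterable of (x, y, t, polarity) tuples within one window."""
        img = np.zeros((height, width), dtype=np.uint8)
        for x, y, _, _ in events:
            img[y, x] = 1                       # presence of at least one event
        return img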
arXiv Detail & Related papers (2023-04-14T15:23:28Z)
- Neural Implicit Event Generator for Motion Tracking [13.312655893024658]
We present a novel framework of motion tracking from event data using implicit expression.
Our framework uses a pre-trained event generation network, named the implicit event generator (IEG), and performs motion tracking by updating its state (position and velocity) based on the difference between the observed events and the events generated from the current state estimate.
We have confirmed that our framework works well in real-world environments in the presence of noise and background clutter.
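The described update loop can be sketched as nudging the state to reduce the mismatch between observed and generated events. In this sketch, generate stands in for the paper's pre-trained IEG, and the finite-difference update is an assumption, not the authors' method.

    # Hedged sketch of the described loop: adjust (position, velocity) to
    # reduce the mismatch between observed events and events generated from
    # the current state. `generate` stands in for the pre-trained IEG.
    import numpy as np

    def update_state(state, observed, generate, lr=0.05, eps=1e-3):
        """state: np.array([x, y, vx, vy]); generate(state) -> event image."""
        base_err = np.sum((observed - generate(state)) ** 2)
        grad = np.zeros_like(state)
        for i in range(state.size):             # finite-difference gradient
            d = np.zeros_like(state)
            d[i] = eps
            err = np.sum((observed - generate(state + d)) ** 2)
            grad[i] = (err - base_err) / eps
        return state - lr * grad                # step downhill on the mismatch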
arXiv Detail & Related papers (2021-11-06T07:38:52Z)
- EBBINNOT: A Hardware Efficient Hybrid Event-Frame Tracker for Stationary Dynamic Vision Sensors [5.674895233111088]
This paper presents a hybrid event-frame approach for detecting and tracking objects recorded by a stationary neuromorphic sensor.
To exploit the background removal property of a static DVS, we propose an event-based binary image representation that signals the presence or absence of events within a frame duration.
This is the first time a stationary DVS-based traffic monitoring solution has been extensively compared to simultaneously recorded RGB frame-based methods.
arXiv Detail & Related papers (2020-05-31T03:01:35Z)
- Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for Event-based Object Tracking [87.0297771292994]
We propose an Event-based Tracking-by-Detection (ETD) method for generic bounding box-based object tracking.
To achieve this goal, we present an Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion algorithm.
We compare the proposed ETD method with seven popular object tracking methods, based on either conventional or event cameras, as well as two variants of ETD.
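As a rough illustration of an event-to-frame conversion with linear time decay, consider the sketch below; the paper's adaptive-window logic is omitted, and the names and scaling are assumptions.

    # Rough sketch of a time surface with linear time decay: pixel intensity
    # falls off linearly with the age of its most recent event. Adaptive
    # windowing from the paper is omitted; names and scaling are assumptions.
    import numpy as np

    def linear_decay_time_surface(events, height, width, t_now, window):
        last_t = np.full((height, width), -np.inf)
        for x, y, t, _ in events:
            last_t[y, x] = max(last_t[y, x], t)  # most recent event per pixel
        age = t_now - last_t
        return np.clip(1.0 - age / window, 0.0, 1.0)  # 1 = fresh, 0 = stale or none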
arXiv Detail & Related papers (2020-02-13T15:58:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.