DOTIE -- Detecting Objects through Temporal Isolation of Events using a Spiking Architecture
- URL: http://arxiv.org/abs/2210.00975v1
- Date: Mon, 3 Oct 2022 14:43:11 GMT
- Title: DOTIE -- Detecting Objects through Temporal Isolation of Events using a Spiking Architecture
- Authors: Manish Nagaraj, Chamika Mihiranga Liyanagedera and Kaushik Roy
- Abstract summary: Vision-based autonomous navigation systems rely on fast and accurate object detection algorithms to avoid obstacles.
We propose a novel technique that utilizes the temporal information inherently present in the events to efficiently detect moving objects.
We show that by utilizing our architecture, autonomous navigation systems can have minimal latency and energy overheads for performing object detection.
- Score: 5.340730281227837
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision-based autonomous navigation systems rely on fast and accurate object
detection algorithms to avoid obstacles. Algorithms and sensors designed for
such systems need to be computationally efficient, due to the limited energy of
the hardware used for deployment. Biologically inspired event cameras are a
good candidate as a vision sensor for such systems due to their speed, energy
efficiency, and robustness to varying lighting conditions. However, traditional
computer vision algorithms fail to work on event-based outputs, as they lack
photometric features such as light intensity and texture. In this work, we
propose a novel technique that utilizes the temporal information inherently
present in the events to efficiently detect moving objects. Our technique
consists of a lightweight spiking neural architecture that is able to separate
events based on the speed of the corresponding objects. These separated events
are then further grouped spatially to determine object boundaries. This method
of object detection is both asynchronous and robust to camera noise. In
addition, it shows good performance in scenarios with events generated by
static objects in the background, where existing event-based algorithms fail.
We show that by utilizing our architecture, autonomous navigation systems can
have minimal latency and energy overheads for performing object detection.
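The abstract outlines two stages: a spiking layer that passes only events arriving densely enough in time (i.e., events from fast-moving objects), followed by spatial grouping of the surviving events into object boundaries. The Python sketch below is a minimal illustration of that two-stage idea, not the authors' implementation: it assumes one leaky integrate-and-fire neuron per pixel and a greedy bounding-box grouping, and every parameter value is an illustrative assumption.

```python
# Minimal sketch of DOTIE-style temporal separation (illustrative only).
# A leaky integrate-and-fire (LIF) neuron per pixel integrates incoming
# events and spikes only when events arrive faster than the leak drains
# the membrane, so events that survive belong to faster-moving objects.
import numpy as np

def separate_fast_events(events, width, height,
                         leak_rate=0.5, weight=1.0, threshold=2.0):
    """events: iterable of (x, y, t), t in seconds, sorted by t.
    Returns the subset of events whose neuron crossed the threshold."""
    potential = np.zeros((height, width))  # membrane potential per pixel
    last_t = np.zeros((height, width))     # time of last update per pixel
    kept = []
    for x, y, t in events:
        x, y = int(x), int(y)
        # Leak the potential for the time elapsed since the last update.
        potential[y, x] = max(0.0, potential[y, x] - leak_rate * (t - last_t[y, x]))
        last_t[y, x] = t
        potential[y, x] += weight          # integrate the new event
        if potential[y, x] >= threshold:   # spike: keep event as "fast"
            potential[y, x] = 0.0          # reset after spiking
            kept.append((x, y, t))
    return kept

def group_spatially(events, radius=3):
    """Greedily grow clusters of nearby events; each cluster's bounding
    box approximates one moving object's boundary."""
    clusters = []  # each cluster is [xmin, ymin, xmax, ymax]
    for x, y, _ in events:
        for c in clusters:
            if (c[0] - radius <= x <= c[2] + radius and
                    c[1] - radius <= y <= c[3] + radius):
                c[0], c[1] = min(c[0], x), min(c[1], y)
                c[2], c[3] = max(c[2], x), max(c[3], y)
                break
        else:
            clusters.append([x, y, x, y])  # start a new cluster
    return [tuple(c) for c in clusters]
```

Raising `threshold` or `leak_rate` narrows the pass band toward faster objects. Note that the actual architecture is spiking and asynchronous, whereas this sketch processes an event list sequentially for clarity.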
Related papers
- Deep Event-based Object Detection in Autonomous Driving: A Survey [7.197775088663435]
Event cameras have emerged as promising sensors for autonomous driving due to their low latency, high dynamic range, and low power consumption.
This paper provides an overview of object detection using event data in autonomous driving, showcasing the competitive benefits of event cameras.
arXiv Detail & Related papers (2024-05-07T04:17:04Z)
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
A review of event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
The paper categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep-learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- Dual Memory Aggregation Network for Event-Based Object Detection with Learnable Representation [79.02808071245634]
Event-based cameras are bio-inspired sensors that capture brightness changes at every pixel asynchronously.
Event streams are divided into grids in the x-y-t coordinates for both positive and negative polarities, producing a set of pillars as a 3D tensor representation (a toy voxelization sketch follows this entry).
Long memory is encoded in the hidden state of adaptive convLSTMs, while short memory is modeled by computing the spatio-temporal correlation between event pillars.
arXiv Detail & Related papers (2023-03-17T12:12:41Z)
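For readers unfamiliar with the pillar representation mentioned in the entry above, here is a toy sketch of binning events into an x-y-t grid per polarity; the bin counts, tensor layout, and grid resolution are assumptions for illustration, not details taken from the paper.

```python
# Toy event-to-pillar voxelization: count events per (polarity, t, y, x)
# cell, yielding a dense tensor per polarity (illustrative only).
import numpy as np

def events_to_pillars(events, width, height, t_bins=5):
    """events: iterable of (x, y, t, p), t normalized to [0, 1),
    p in {0, 1}. Returns a tensor of shape (2, t_bins, height, width)."""
    pillars = np.zeros((2, t_bins, height, width), dtype=np.float32)
    for x, y, t, p in events:
        ti = min(int(t * t_bins), t_bins - 1)        # temporal bin index
        pillars[int(p), ti, int(y), int(x)] += 1.0   # event count per cell
    return pillars
```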
- Performance Study of YOLOv5 and Faster R-CNN for Autonomous Navigation around Non-Cooperative Targets [0.0]
This paper discusses how the combination of cameras and machine learning algorithms can achieve the relative navigation task.
The performance of two deep learning-based object detection algorithms, Faster Region-based Convolutional Neural Networks (R-CNN) and You Only Look Once (YOLOv5), is tested.
The paper discusses the path to implementing the feature recognition algorithms and to integrating them into the spacecraft's Guidance, Navigation and Control system.
arXiv Detail & Related papers (2023-01-22T04:53:38Z)
- Event Guided Depth Sensing [50.997474285910734]
We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we densely illuminate areas of interest, depending on the scene activity detected by the event camera (see the sketch after this entry).
We show the feasibility of our approach in simulated autonomous driving sequences and real indoor environments.
arXiv Detail & Related papers (2021-10-20T11:41:11Z)
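As one plausible way to realize the activity-guided sampling described above (not the paper's method), the sketch below builds an event-count map over a short window and returns the cells whose activity exceeds a threshold; cell size, window length, and threshold are assumed parameters.

```python
# Hypothetical activity map: select high-event-rate cells as regions of
# interest for dense illumination / depth sampling (illustrative only).
import numpy as np

def active_regions(events, width, height, cell=16, min_events=20):
    """events: iterable of (x, y) from a short time window.
    Returns (x0, y0, x1, y1) boxes of cells with high event activity."""
    gh, gw = (height + cell - 1) // cell, (width + cell - 1) // cell
    grid = np.zeros((gh, gw))
    for x, y in events:
        grid[int(y) // cell, int(x) // cell] += 1   # per-cell event count
    return [(gx * cell, gy * cell, (gx + 1) * cell, (gy + 1) * cell)
            for gy, gx in zip(*np.where(grid >= min_events))]
```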
- Analysis of voxel-based 3D object detection methods efficiency for real-time embedded systems [93.73198973454944]
Two popular voxel-based 3D object detection methods are studied in this paper.
Our experiments show that these methods mostly fail to detect distant small objects due to the sparsity of the input point clouds at large distances.
Our findings suggest that a considerable part of the computation in existing methods is spent on locations of the scene that do not contribute to successful detection.
arXiv Detail & Related papers (2021-05-21T12:40:59Z)
- Object-based Illumination Estimation with Rendering-aware Neural Networks [56.01734918693844]
We present a scheme for fast environment light estimation from the RGBD appearance of individual objects and their local image areas.
With the estimated lighting, virtual objects can be rendered in AR scenarios with shading that is consistent with the real scene.
arXiv Detail & Related papers (2020-08-06T08:23:19Z)
- A Hybrid Neuromorphic Object Tracking and Classification Framework for Real-time Systems [5.959466944163293]
This paper proposes a real-time, hybrid neuromorphic framework for object tracking and classification using event-based cameras.
Unlike traditional event-by-event processing approaches, this work uses a mixed frame-and-event approach to obtain energy savings while maintaining high performance.
arXiv Detail & Related papers (2020-07-21T07:11:27Z)
- Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for Event-based Object Tracking [87.0297771292994]
We propose an Event-based Tracking-by-Detection (ETD) method for generic bounding box-based object tracking.
To achieve this goal, we present an Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion algorithm (a simplified time-surface sketch follows this entry).
We compare the proposed ETD method with seven popular object tracking methods based on conventional or event cameras, as well as with two variants of ETD.
arXiv Detail & Related papers (2020-02-13T15:58:31Z)
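The entry above hinges on converting asynchronous events into frames via a time surface with linear decay. The sketch below shows the generic construction (each pixel stores its most recent event timestamp, and intensity fades linearly with event age); the adaptive window selection that gives ATSLTD its name is omitted, and `tau` is an assumed constant.

```python
# Generic linear-decay time surface (illustrative; not the ATSLTD code).
import numpy as np

def linear_decay_time_surface(events, width, height, t_now, tau=0.05):
    """events: iterable of (x, y, t), t in seconds. Returns a frame in
    [0, 1]: recent events are bright, events older than tau fade to 0."""
    last_t = np.full((height, width), -np.inf)   # last event time per pixel
    for x, y, t in events:
        last_t[int(y), int(x)] = max(last_t[int(y), int(x)], t)
    age = t_now - last_t                         # seconds since last event
    return np.clip(1.0 - age / tau, 0.0, 1.0)    # linear decay to zero
```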
- Real-Time Object Detection and Recognition on Low-Compute Humanoid Robots using Deep Learning [0.12599533416395764]
We describe a novel architecture that enables multiple low-compute NAO robots to perform real-time detection, recognition, and localization of objects in their camera views.
The proposed algorithm for object detection and localization is an empirical modification of YOLOv3, based on indoor experiments in multiple scenarios.
The architecture also comprises an effective end-to-end pipeline that feeds real-time frames from the camera to the neural network and uses its results to guide the robot.
arXiv Detail & Related papers (2020-01-20T05:24:58Z)