Night vision obstacle detection and avoidance based on Bio-Inspired
Vision Sensors
- URL: http://arxiv.org/abs/2010.15509v1
- Date: Thu, 29 Oct 2020 12:02:02 GMT
- Title: Night vision obstacle detection and avoidance based on Bio-Inspired
Vision Sensors
- Authors: Jawad N. Yasin, Sherif A.S. Mohamed, Mohammad-hashem Haghbayan, Jukka
Heikkonen, Hannu Tenhunen, Muhammad Mehboob Yasin, Juha Plosila
- Abstract summary: We exploit the powerful attributes of event-based cameras to perform obstacle detection in low lighting conditions.
The algorithm filters background activity noise and extracts objects using a robust Hough transform technique.
The depth of each detected object is computed by triangulating 2D features extracted utilising LC-Harris.
- Score: 0.5079840826943617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Moving towards autonomy, unmanned vehicles rely heavily on state-of-the-art
collision avoidance systems (CAS). However, the detection of obstacles
especially during night-time is still a challenging task since the lighting
conditions are not sufficient for traditional cameras to function properly.
Therefore, we exploit the powerful attributes of event-based cameras to perform
obstacle detection in low lighting conditions. Event cameras trigger events
asynchronously at a high output temporal rate with a high dynamic range of up to
120 dB. The algorithm filters background activity noise and extracts objects
using a robust Hough transform technique. The depth of each detected object is
computed by triangulating 2D features extracted utilising LC-Harris. Finally,
the asynchronous adaptive collision avoidance (AACA) algorithm is applied for
effective avoidance. A qualitative evaluation compares the results obtained
with the event camera against those from a traditional camera.
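The first stage of the pipeline described above, filtering background activity noise from the event stream, can be illustrated with a minimal sketch. The snippet below implements a common spatio-temporal correlation filter for event cameras (an event is kept only if a neighbouring pixel fired recently); it is an assumption-laden stand-in for illustration, not necessarily the exact filter the authors use, and the function name `filter_background_activity` and the threshold `dt` are hypothetical.

```python
import numpy as np

def filter_background_activity(events, width, height, dt=2000):
    """Keep an event only if a pixel in its 3x3 neighbourhood fired
    within the last `dt` microseconds; isolated events are treated as
    background activity noise and dropped.

    events: iterable of (t, x, y, p) tuples sorted by timestamp t
            (microseconds), with polarity p in {-1, +1}.
    """
    last_ts = np.full((height, width), -np.inf)  # last event time per pixel
    kept = []
    for t, x, y, p in events:
        # clamp the 3x3 neighbourhood to the sensor bounds
        x0, x1 = max(x - 1, 0), min(x + 2, width)
        y0, y1 = max(y - 1, 0), min(y + 2, height)
        # the event has "support" if any neighbouring pixel was
        # active within the last dt microseconds
        if np.any(t - last_ts[y0:y1, x0:x1] <= dt):
            kept.append((t, x, y, p))
        last_ts[y, x] = t
    return kept
```

With this scheme, a burst of events from a moving object survives (neighbouring pixels fire close together in time), while single events scattered over the sensor are rejected.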
Related papers
- CamLoPA: A Hidden Wireless Camera Localization Framework via Signal Propagation Path Analysis [59.86280992504629]
CamLoPA is a training-free wireless camera detection and localization framework.
It operates with minimal activity space constraints using low-cost commercial-off-the-shelf (COTS) devices.
It achieves 95.37% snooping-camera detection accuracy and an average localization error of 17.23, under significantly reduced activity space requirements.
arXiv Detail & Related papers (2024-09-23T16:23:50Z)
- Deep Event-based Object Detection in Autonomous Driving: A Survey [7.197775088663435]
Event cameras have emerged as promising sensors for autonomous driving due to their low latency, high dynamic range, and low power consumption.
This paper provides an overview of object detection using event data in autonomous driving, showcasing the competitive benefits of event cameras.
arXiv Detail & Related papers (2024-05-07T04:17:04Z)
- SDGE: Stereo Guided Depth Estimation for 360$^\circ$ Camera Sets [65.64958606221069]
Multi-camera systems are often used in autonomous driving to achieve a 360$^\circ$ perception.
These 360$^\circ$ camera sets often have limited or low-quality overlap regions, making multi-view stereo methods infeasible for the entire image.
We propose the Stereo Guided Depth Estimation (SGDE) method, which enhances depth estimation of the full image by explicitly utilizing multi-view stereo results on the overlap.
arXiv Detail & Related papers (2024-02-19T02:41:37Z)
- Self-supervised Event-based Monocular Depth Estimation using Cross-modal Consistency [18.288912105820167]
We propose a self-supervised event-based monocular depth estimation framework named EMoDepth.
EMoDepth constrains the training process using the cross-modal consistency from intensity frames that are aligned with events in the pixel coordinate.
In inference, only events are used for monocular depth prediction.
arXiv Detail & Related papers (2024-01-14T07:16:52Z)
- Dual Memory Aggregation Network for Event-Based Object Detection with Learnable Representation [79.02808071245634]
Event-based cameras are bio-inspired sensors that capture brightness change of every pixel in an asynchronous manner.
Event streams are divided into grids in the x-y-t coordinates for both positive and negative polarity, producing a set of pillars as 3D tensor representation.
Long memory is encoded in the hidden state of adaptive convLSTMs while short memory is modeled by computing spatial-temporal correlation between event pillars.
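The grid representation this summary describes, binning events into x-y-t cells per polarity so each (x, y) column forms a pillar, can be sketched as below. This is a minimal illustration of the representation only, not the paper's learnable encoding; the function name `events_to_pillars` and the count-based cell values are assumptions.

```python
import numpy as np

def events_to_pillars(events, width, height, t_bins=5):
    """Rasterise an event stream into an x-y-t grid with one channel
    per polarity: tensor of shape (2, t_bins, height, width), where
    each cell counts the events falling into it. The fixed (x, y)
    column across the t axis is one "pillar".

    events: iterable of (t, x, y, p) tuples with polarity p in {-1, +1}.
    """
    grid = np.zeros((2, t_bins, height, width), dtype=np.float32)
    events = np.asarray(events, dtype=np.float64)  # rows: (t, x, y, p)
    if events.size == 0:
        return grid
    t = events[:, 0]
    t0, t1 = t.min(), t.max()
    # map each timestamp into a temporal bin in [0, t_bins)
    tb = np.minimum(((t - t0) / max(t1 - t0, 1e-9) * t_bins).astype(int),
                    t_bins - 1)
    xs = events[:, 1].astype(int)
    ys = events[:, 2].astype(int)
    ps = (events[:, 3] > 0).astype(int)  # channel 0: negative, 1: positive
    np.add.at(grid, (ps, tb, ys, xs), 1.0)  # unbuffered scatter-add
    return grid
```

The resulting dense tensor is what downstream convolutional or recurrent modules (such as the convLSTMs mentioned above) can consume directly.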
arXiv Detail & Related papers (2023-03-17T12:12:41Z)
- DOTIE -- Detecting Objects through Temporal Isolation of Events using a Spiking Architecture [5.340730281227837]
Vision-based autonomous navigation systems rely on fast and accurate object detection algorithms to avoid obstacles.
We propose a novel technique that utilizes the temporal information inherently present in the events to efficiently detect moving objects.
We show that by utilizing our architecture, autonomous navigation systems can have minimal latency and energy overheads for performing object detection.
arXiv Detail & Related papers (2022-10-03T14:43:11Z)
- Globally-Optimal Event Camera Motion Estimation [30.79931004393174]
Event cameras are bio-inspired sensors that perform well in HDR conditions and have high temporal resolution.
Event cameras measure asynchronous pixel-level changes and return them in a highly discretised format.
arXiv Detail & Related papers (2022-03-08T08:24:22Z)
- ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- Asynchronous Corner Tracking Algorithm based on Lifetime of Events for DAVIS Cameras [0.9988653233188148]
Event cameras, such as the Dynamic and Active-pixel Vision Sensor (DAVIS), capture intensity changes in the scene and generate a stream of events in an asynchronous fashion.
The output rate of such cameras can reach up to 10 million events per second in high dynamic environments.
A novel asynchronous corner tracking method is proposed that uses both events and intensity images captured by a DAVIS camera.
arXiv Detail & Related papers (2020-10-29T12:02:40Z)
- Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for Event-based Object Tracking [87.0297771292994]
We propose an Event-based Tracking-by-Detection (ETD) method for generic bounding box-based object tracking.
To achieve this goal, we present an Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion algorithm.
We compare the proposed ETD method against seven popular object tracking methods, based on either conventional cameras or event cameras, as well as two variants of ETD.
arXiv Detail & Related papers (2020-02-13T15:58:31Z)
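The time-surface idea with linear time decay that the ETD summary mentions can be sketched as follows. This is a simplified illustration in the spirit of ATSLTD, not the paper's exact algorithm; the function name `linear_decay_time_surface` and the normalisation to [0, 1] are assumptions.

```python
import numpy as np

def linear_decay_time_surface(events, width, height, window):
    """Convert the events inside a time window into a frame where each
    pixel stores a value that decays linearly with event age: the newest
    events map to 1.0, events at the start of the window map to 0.0.

    events: iterable of (t, x, y) tuples with timestamps t inside the
            window [t_end - window, t_end].
    """
    surface = np.zeros((height, width), dtype=np.float32)
    if not events:
        return surface
    t_end = max(t for t, _, _ in events)
    for t, x, y in events:
        value = 1.0 - (t_end - t) / window  # linear decay with age
        # keep the most recent (largest) contribution per pixel
        surface[y, x] = max(surface[y, x], max(value, 0.0))
    return surface
```

Encoding event recency as grey levels in this way yields frames with sharp, motion-dependent object contours on which a conventional detector can run.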
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.