Event Guided Depth Sensing
- URL: http://arxiv.org/abs/2110.10505v1
- Date: Wed, 20 Oct 2021 11:41:11 GMT
- Title: Event Guided Depth Sensing
- Authors: Manasi Muglikar, Diederik Paul Moeys, Davide Scaramuzza
- Abstract summary: We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we illuminate areas of interest densely, depending on the scene activity detected by the event camera.
We show the feasibility of our approach in a simulated autonomous driving scenario and real indoor environments.
- Score: 50.997474285910734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Active depth sensors like structured light, lidar, and time-of-flight systems
sample the depth of the entire scene uniformly at a fixed scan rate. This leads
to limited spatio-temporal resolution where redundant static information is
over-sampled and precious motion information might be under-sampled. In this
paper, we present an efficient bio-inspired event-camera-driven depth
estimation algorithm. In our approach, we dynamically illuminate areas of
interest densely, depending on the scene activity detected by the event camera,
and sparsely illuminate areas in the field of view with no motion. The depth
estimation is achieved by an event-based structured light system consisting of
a laser point projector coupled with a second event-based sensor tuned to
detect the reflection of the laser from the scene. We show the feasibility of
our approach in a simulated autonomous driving scenario and real indoor
sequences using our prototype. We show that, in natural scenes like autonomous
driving and indoor environments, moving edges correspond to less than 10% of
the scene on average. Thus our setup requires the sensor to scan only 10% of
the scene, which could lead to almost 90% less power consumption by the
illumination source. While we present the evaluation and proof-of-concept for
an event-based structured-light system, the ideas presented here are applicable
for a wide range of depth-sensing modalities like LIDAR, time-of-flight, and
standard stereo.
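As an illustration of the adaptive, event-driven scanning idea described in the abstract, the sketch below (Python/NumPy; a minimal sketch, not the authors' implementation) accumulates events into a per-pixel activity map, builds a scan mask that is dense on moving edges and sparse elsewhere, and recovers depth for an illuminated point with the standard structured-light triangulation Z = f·b/d. The sensor resolution, event window, activity threshold, sparse grid step, focal length, and baseline are assumed values chosen only for the example.

```python
# Illustrative sketch (not the authors' code) of event-guided adaptive scanning:
# events from a monitoring event camera mark moving-edge pixels, which are
# scanned densely by the point projector; the rest of the field of view is
# scanned on a coarse grid. Depth for each illuminated point follows the
# standard structured-light triangulation Z = f * b / d.
import numpy as np

H, W = 260, 346                      # assumed event-sensor resolution
FOCAL_PX, BASELINE_M = 320.0, 0.1    # hypothetical focal length / projector baseline

def activity_mask(events, window=0.01, thresh=2, now=None):
    """Mark pixels with enough recent events as 'active' (moving edges).

    events: structured array with fields x, y, t (seconds), p (polarity).
    """
    now = events["t"].max() if now is None else now
    recent = events[events["t"] > now - window]
    counts = np.zeros((H, W), np.int32)
    np.add.at(counts, (recent["y"], recent["x"]), 1)
    return counts >= thresh

def scan_pattern(active, sparse_step=8):
    """Dense scan where the scene moves, coarse grid everywhere else."""
    scan = np.zeros((H, W), bool)
    scan[::sparse_step, ::sparse_step] = True    # sparse background sampling
    scan |= active                               # dense sampling on moving edges
    return scan

def triangulate(disparity_px):
    """Depth from projector/camera disparity (pinhole, rectified geometry)."""
    return FOCAL_PX * BASELINE_M / np.maximum(disparity_px, 1e-6)

# Toy usage: random events concentrated in one small moving region.
rng = np.random.default_rng(0)
n = 5000
events = np.zeros(n, dtype=[("x", np.int32), ("y", np.int32),
                            ("t", np.float64), ("p", np.int8)])
events["x"] = rng.integers(100, 140, n)
events["y"] = rng.integers(60, 90, n)
events["t"] = rng.uniform(0.0, 0.01, n)
events["p"] = rng.integers(0, 2, n)

mask = scan_pattern(activity_mask(events))
print(f"illuminated fraction: {mask.mean():.1%}")          # small when motion is sparse
print(f"depth at 20 px disparity: {triangulate(20.0):.2f} m")
```

In a real system the mask would steer the laser point projector and disparities would come from the reflections detected by the second event camera; the toy events above only show that, when motion covers a small part of the scene, only a correspondingly small fraction of the field of view needs to be illuminated.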
Related papers
- Event-based Motion-Robust Accurate Shape Estimation for Mixed Reflectance Scenes [17.446182782836747]
We present a novel event-based structured light system that enables fast 3D imaging of mixed reflectance scenes with high accuracy.
We use epipolar constraints that intrinsically enable decomposing the measured reflections into diffuse, two-bounce specular, and other multi-bounce reflections.
The resulting system achieves fast and motion-robust reconstructions of mixed reflectance scenes with 500 $\mu$m accuracy.
arXiv Detail & Related papers (2023-11-16T08:12:10Z)
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
Review of event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
Paper categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- PL-EVIO: Robust Monocular Event-based Visual Inertial Odometry with Point and Line Features [3.6355269783970394]
Event cameras are motion-activated sensors that capture pixel-level illumination changes instead of intensity images at a fixed frame rate.
We propose a robust, highly accurate, and real-time optimization-based monocular event-based visual-inertial odometry (VIO) method.
arXiv Detail & Related papers (2022-09-25T06:14:12Z)
- SurroundDepth: Entangling Surrounding Views for Self-Supervised Multi-Camera Depth Estimation [101.55622133406446]
We propose SurroundDepth, a method that incorporates information from multiple surrounding views to predict depth maps across cameras.
Specifically, we employ a joint network to process all the surrounding views and propose a cross-view transformer to effectively fuse the information from multiple views.
In experiments, our method achieves state-of-the-art performance on challenging multi-camera depth estimation datasets.
arXiv Detail & Related papers (2022-04-07T17:58:47Z)
- Gated2Gated: Self-Supervised Depth Estimation from Gated Images [22.415893281441928]
Gated cameras hold promise as an alternative to scanning LiDAR sensors, providing high-resolution 3D depth.
We propose an entirely self-supervised depth estimation method that uses gated intensity profiles and temporal consistency as a training signal.
arXiv Detail & Related papers (2021-12-04T19:47:38Z)
- ESL: Event-based Structured Light [62.77144631509817]
Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
arXiv Detail & Related papers (2021-11-30T15:47:39Z)
- TUM-VIE: The TUM Stereo Visual-Inertial Event Dataset [50.8779574716494]
Event cameras are bio-inspired vision sensors which measure per pixel brightness changes.
They offer numerous benefits over traditional, frame-based cameras, including low latency, high dynamic range, high temporal resolution and low power consumption.
To foster the development of 3D perception and navigation algorithms with event cameras, we present the TUM-VIE dataset.
arXiv Detail & Related papers (2021-08-16T19:53:56Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras produce brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- Event-based Stereo Visual Odometry [42.77238738150496]
We present a solution to the problem of visual odometry from the data acquired by a stereo event-based camera rig.
We seek to maximize the temporal consistency of stereo event-based data while using a simple and efficient representation.
arXiv Detail & Related papers (2020-07-30T15:53:28Z)
- Depth Map Estimation of Dynamic Scenes Using Prior Depth Information [14.03714478207425]
We propose an algorithm that estimates depth maps using concurrently collected images and a previously measured depth map for dynamic scenes.
Our goal is to balance the acquisition of depth between the active depth sensor and computation, without incurring a large computational cost.
Our approach can obtain dense depth maps at up to real-time (30 FPS) on a standard laptop computer, which is orders of magnitude faster than similar approaches.
arXiv Detail & Related papers (2020-02-02T01:04:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.