ESL: Event-based Structured Light
- URL: http://arxiv.org/abs/2111.15510v1
- Date: Tue, 30 Nov 2021 15:47:39 GMT
- Title: ESL: Event-based Structured Light
- Authors: Manasi Muglikar, Guillermo Gallego, Davide Scaramuzza
- Abstract summary: Event cameras are bio-inspired sensors providing significant advantages over standard cameras.
We propose a novel structured-light system using an event camera to tackle the problem of accurate and high-speed depth sensing.
- Score: 62.77144631509817
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras are bio-inspired sensors providing significant advantages over
standard cameras such as low latency, high temporal resolution, and high
dynamic range. We propose a novel structured-light system using an event camera
to tackle the problem of accurate and high-speed depth sensing. Our setup
consists of an event camera and a laser-point projector that uniformly
illuminates the scene in a raster scanning pattern within 16 ms. Previous
methods match events independently of each other, and so they deliver noisy
depth estimates at high scanning speeds in the presence of signal latency and
jitter. In contrast, we optimize an energy function designed to exploit event
correlations, called spatio-temporal consistency. The resulting method is
robust to event jitter and therefore performs better at higher scanning speeds.
Experiments demonstrate that our method can deal with high-speed motion and
outperform state-of-the-art 3D reconstruction methods based on event cameras,
reducing the RMSE by 83% on average, for the same acquisition time.
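The abstract's key contrast (matching events independently versus optimizing for spatio-temporal consistency) can be made concrete with a toy sketch. Everything below is illustrative: the projector geometry, resolution, and constants are assumptions (except the 16 ms scan duration quoted above), and the simple regularized smoothing merely stands in for the paper's actual energy minimization.

```python
import numpy as np

# Toy sketch of event-based structured light with a raster-scanning laser.
SCAN_PERIOD = 16e-3   # full raster scan of the scene takes 16 ms (from the abstract)
PROJ_COLS = 1280      # assumed projector column resolution
BASELINE = 0.10       # assumed camera-projector baseline [m]
FOCAL = 700.0         # assumed focal length [px]

def time_to_proj_column(t_event, t_scan_start):
    """Naive per-event matching: the event's phase within the scan tells
    which projector column was illuminating the scene. Latency and jitter
    corrupt t_event, so this estimate is noisy at high scan rates."""
    phase = (t_event - t_scan_start) / SCAN_PERIOD
    return phase * PROJ_COLS

def depth_from_disparity(x_cam, x_proj):
    """Rectified triangulation: depth is inversely proportional to the
    camera-projector disparity."""
    disparity = np.maximum(x_cam - x_proj, 1e-6)
    return FOCAL * BASELINE / disparity

def spatio_temporal_refine(time_map, lam=0.2, iters=50):
    """Stand-in for ESL's energy minimization: balance fidelity to the
    measured per-pixel event times (data term) against agreement with
    neighboring pixels (spatio-temporal consistency), suppressing jitter
    before triangulation. Not the authors' optimizer."""
    t = time_map.copy()
    for _ in range(iters):
        neigh = 0.25 * (np.roll(t, 1, axis=0) + np.roll(t, -1, axis=0) +
                        np.roll(t, 1, axis=1) + np.roll(t, -1, axis=1))
        t = (time_map + lam * neigh) / (1.0 + lam)
    return t
```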
Related papers
- E-3DGS: Gaussian Splatting with Exposure and Motion Events [29.042018288378447]
We propose E-3DGS, a novel event-based approach that partitions events into motion and exposure events.
We introduce a novel integration of 3DGS with exposure events for high-quality reconstruction of explicit scene representations.
Our method is faster and delivers better reconstruction quality than event-based NeRF while being more cost-effective than NeRF methods.
arXiv Detail & Related papers (2024-10-22T13:17:20Z)
- Deblur e-NeRF: NeRF from Motion-Blurred Events under High-speed or Low-light Conditions [56.84882059011291]
We propose Deblur e-NeRF, a novel method to reconstruct blur-minimal NeRFs from motion-blurred events.
We also introduce a novel threshold-normalized total variation loss to improve the regularization of large textureless patches.
arXiv Detail & Related papers (2024-09-26T15:57:20Z)
- Seeing Motion at Nighttime with an Event Camera [17.355331119296782]
Event cameras react to dynamic scenes with higher temporal resolution (microseconds) and higher dynamic range (120 dB).
We propose a nighttime event reconstruction network (NER-Net), which mainly includes a learnable event timestamp calibration module (LETC).
We construct a paired real low-light event dataset (RLED) through a co-axial imaging system, including 64,200 spatially and temporally aligned ground-truth images and low-light events.
arXiv Detail & Related papers (2024-04-18T03:58:27Z)
- Event Cameras Meet SPADs for High-Speed, Low-Bandwidth Imaging [25.13346470561497]
Event cameras and single-photon avalanche diode (SPAD) sensors have emerged as promising alternatives to conventional cameras.
We show that these properties are complementary, and can help achieve low-light, high-speed image reconstruction with low bandwidth requirements.
arXiv Detail & Related papers (2024-04-17T16:06:29Z)
- Event-based Motion-Robust Accurate Shape Estimation for Mixed Reflectance Scenes [17.446182782836747]
We present a novel event-based structured light system that enables fast 3D imaging of mixed reflectance scenes with high accuracy.
We use epipolar constraints that intrinsically enable decomposing the measured reflections into diffuse, two-bounce specular, and other multi-bounce reflections.
The resulting system achieves fast and motion-robust reconstructions of mixed reflectance scenes with 500 µm accuracy.
arXiv Detail & Related papers (2023-11-16T08:12:10Z)
- Robust e-NeRF: NeRF from Sparse & Noisy Events under Non-Uniform Motion [67.15935067326662]
Event cameras offer low power, low latency, high temporal resolution and high dynamic range.
NeRF is seen as the leading candidate for efficient and effective scene representation.
We propose Robust e-NeRF, a novel method to directly and robustly reconstruct NeRFs from moving event cameras.
arXiv Detail & Related papers (2023-09-15T17:52:08Z)
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system well exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
- EV-Catcher: High-Speed Object Catching Using Low-latency Event-based Neural Networks [107.62975594230687]
We demonstrate an application where event cameras excel: accurately estimating the impact location of fast-moving objects.
We introduce a lightweight event representation called Binary Event History Image (BEHI) to encode event data at low latency.
We show that the system achieves an 81% success rate in catching balls targeted at different locations, with velocities of up to 13 m/s, even on compute-constrained embedded platforms.
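As a rough illustration of the BEHI representation named above: a plausible reading is a single binary frame marking pixels that fired at least one event within a short history window. The array layout and window length below are assumptions, not the paper's specification.

```python
import numpy as np

def build_behi(events, height, width, t_now, window=10e-3):
    """Hypothetical Binary Event History Image: events is an (N, 3) array
    of (x, y, t) rows; pixels with at least one event inside the history
    window are set to 1. The window length is an illustrative choice."""
    behi = np.zeros((height, width), dtype=np.uint8)
    recent = events[events[:, 2] >= t_now - window]
    behi[recent[:, 1].astype(int), recent[:, 0].astype(int)] = 1
    return behi
```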
arXiv Detail & Related papers (2023-04-14T15:23:28Z)
- Globally-Optimal Event Camera Motion Estimation [30.79931004393174]
Event cameras are bio-inspired sensors that perform well in HDR conditions and have high temporal resolution.
Event cameras measure asynchronous pixel-level changes and return them in a highly discretised format.
arXiv Detail & Related papers (2022-03-08T08:24:22Z)
- Event Guided Depth Sensing [50.997474285910734]
We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we illuminate areas of interest densely, depending on the scene activity detected by the event camera.
We show the feasibility of our approach in simulated autonomous driving sequences and real indoor environments.
arXiv Detail & Related papers (2021-10-20T11:41:11Z)
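The adaptive illumination idea in the Event Guided Depth Sensing entry above can be sketched as: accumulate events into an activity map, then sample depth densely only where activity is high. The threshold and dilation below are illustrative choices, not the paper's pipeline.

```python
import numpy as np

def sampling_mask(event_counts, thresh=5, dilate=3):
    """event_counts: HxW event counts over a time slice. Returns a boolean
    mask of regions to illuminate densely. Pixels with enough events are
    marked active, then dilated (box filter) so the projector covers a
    margin around moving structure. Both parameters are assumptions."""
    active = event_counts >= thresh
    pad = dilate // 2
    padded = np.pad(active, pad, mode="constant")
    h, w = active.shape
    mask = np.zeros_like(active)
    for dy in range(dilate):
        for dx in range(dilate):
            mask |= padded[dy:dy + h, dx:dx + w]
    return mask
```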
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.