EVEN: An Event-Based Framework for Monocular Depth Estimation at Adverse
Night Conditions
- URL: http://arxiv.org/abs/2302.03860v1
- Date: Wed, 8 Feb 2023 03:35:47 GMT
- Title: EVEN: An Event-Based Framework for Monocular Depth Estimation at Adverse
Night Conditions
- Authors: Peilun Shi, Jiachuan Peng, Jianing Qiu, Xinwei Ju, Frank Po Wen Lo,
and Benny Lo
- Abstract summary: We study monocular depth estimation at nighttime, under various adverse weather, lighting, and road conditions.
We propose an event-vision based framework that integrates low-light enhancement for the RGB source and exploits the merits of RGB and event data.
- Score: 14.390463371184566
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurate depth estimation under adverse night conditions has practical impact
and applications, such as in autonomous driving and rescue robotics. In this
work, we study monocular depth estimation at nighttime, under various adverse
weather, lighting, and road conditions, with data captured in both RGB and
event modalities. An event camera can better capture intensity changes by
virtue of its high dynamic range (HDR), which makes it particularly suitable
for adverse night conditions in which the amount of light in the scene is
limited. Although event data can retain visual information that a conventional
RGB camera may fail to capture, the lack of texture and color information in
event data hinders its ability to estimate depth accurately on its own. To
tackle this problem, we propose an event-vision based framework that
integrates low-light enhancement for the RGB source and exploits the
complementary merits of RGB and event data. A dataset that includes paired RGB
and event streams together with ground-truth depth maps has been constructed.
Comprehensive experiments have been conducted, and the impact of different
adverse weather combinations on the performance of the framework has also been
investigated. The results show that our proposed framework estimates monocular
depth under adverse night conditions better than six baselines.
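The abstract outlines a two-stage idea: first enhance the dark RGB frame, then fuse RGB and event features to regress a dense depth map. Below is a minimal PyTorch sketch of that kind of pipeline; the module names, layer sizes, the gain-based enhancement, and the voxel-grid event representation are illustrative assumptions, not the authors' actual EVEN architecture.
```python
# Sketch of an RGB + event fusion depth network with a low-light enhancement
# front end. All components are toy stand-ins for the modules described in
# the abstract.
import torch
import torch.nn as nn


class LowLightEnhancer(nn.Module):
    """Toy enhancement: predict a per-pixel gain applied to the dark RGB frame."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 3, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rgb):
        gain = 1.0 + 4.0 * self.net(rgb)   # brighten by a learned factor
        return torch.clamp(rgb * gain, 0.0, 1.0)


def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch), nn.ReLU(),
    )


class FusionDepthNet(nn.Module):
    """Encode enhanced RGB and an event voxel grid, fuse, and decode depth."""
    def __init__(self, event_bins=5):
        super().__init__()
        self.enhance = LowLightEnhancer()
        self.rgb_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.evt_enc = nn.Sequential(conv_block(event_bins, 32), conv_block(32, 64))
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Softplus(),  # positive depth
        )

    def forward(self, rgb, events):
        rgb = self.enhance(rgb)
        feats = torch.cat([self.rgb_enc(rgb), self.evt_enc(events)], dim=1)
        return self.decoder(feats)


if __name__ == "__main__":
    net = FusionDepthNet()
    rgb = torch.rand(1, 3, 128, 256)      # dark nighttime RGB frame
    events = torch.rand(1, 5, 128, 256)   # event stream binned into a voxel grid
    print(net(rgb, events).shape)         # torch.Size([1, 1, 128, 256])
```
In practice the enhancement stage and the event representation would follow the paper's own design; this sketch only illustrates how the two modalities can be encoded separately and fused before depth decoding.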
Related papers
- Dark-EvGS: Event Camera as an Eye for Radiance Field in the Dark [51.68144172958247]
We propose Dark-EvGS, the first event-assisted 3D GS framework that enables the reconstruction of bright frames from arbitrary viewpoints.
Our method achieves better results than existing methods, conquering radiance field reconstruction under challenging low-light conditions.
arXiv Detail & Related papers (2025-07-16T05:54:33Z)
- Event-RGB Fusion for Spacecraft Pose Estimation Under Harsh Lighting [20.59391413816475]
Spacecraft pose estimation is crucial for autonomous in-space operations, such as rendezvous, docking and on-orbit servicing.
Vision-based pose estimation methods, which typically employ RGB imaging sensors, are challenged by harsh lighting conditions.
This work introduces a sensor fusion approach combining RGB and event sensors.
arXiv Detail & Related papers (2025-07-08T06:11:42Z)
- RASMD: RGB And SWIR Multispectral Driving Dataset for Robust Perception in Adverse Conditions [0.3141085922386211]
Short-wave infrared (SWIR) imaging offers several advantages over near-infrared (NIR) and long-wave infrared (LWIR) imaging.
Current autonomous driving algorithms heavily rely on the visible spectrum, which is prone to performance degradation in adverse conditions.
We introduce the RGB and SWIR Multispectral Driving dataset, which comprises 100,000 synchronized and spatially aligned RGB-SWIR image pairs.
arXiv Detail & Related papers (2025-04-10T09:54:57Z)
- LED: Light Enhanced Depth Estimation at Night [10.941842055797125]
We introduce Light Enhanced Depth (LED), a novel cost-effective approach that significantly improves depth estimation in low-light environments.
LED harnesses a pattern projected by the high-definition headlights available in modern vehicles.
We release the Nighttime Synthetic Drive dataset, which comprises 49,990 comprehensively annotated images.
arXiv Detail & Related papers (2024-09-12T13:23:24Z)
- Complementing Event Streams and RGB Frames for Hand Mesh Reconstruction [51.87279764576998]
We propose EvRGBHand -- the first approach for 3D hand mesh reconstruction with an event camera and an RGB camera compensating for each other.
EvRGBHand can tackle overexposure and motion blur issues in RGB-based HMR and foreground scarcity and background overflow issues in event-based HMR.
arXiv Detail & Related papers (2024-03-12T06:04:50Z)
- Self-supervised Event-based Monocular Depth Estimation using Cross-modal Consistency [18.288912105820167]
We propose a self-supervised event-based monocular depth estimation framework named EMoDepth.
EMoDepth constrains the training process using cross-modal consistency with intensity frames that are aligned with the events in pixel coordinates.
In inference, only events are used for monocular depth prediction.
arXiv Detail & Related papers (2024-01-14T07:16:52Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- Chasing Day and Night: Towards Robust and Efficient All-Day Object Detection Guided by an Event Camera [8.673063170884591]
EOLO is a novel object detection framework that achieves robust and efficient all-day detection by fusing both RGB and event modalities.
Our EOLO framework is built based on a lightweight spiking neural network (SNN) to efficiently leverage the asynchronous property of events.
arXiv Detail & Related papers (2023-09-17T15:14:01Z)
- Deformable Neural Radiance Fields using RGB and Event Cameras [65.40527279809474]
We develop a novel method to model the deformable neural radiance fields using RGB and event cameras.
The proposed method uses the asynchronous stream of events and sparse RGB frames.
Experiments conducted on both realistically rendered graphics and real-world datasets demonstrate a significant benefit of the proposed method.
arXiv Detail & Related papers (2023-09-15T14:19:36Z)
- Multi Visual Modality Fall Detection Dataset [4.00152916049695]
Falls are one of the leading causes of injury-related deaths among the elderly worldwide.
Effective detection of falls can reduce the risk of complications and injuries.
Video cameras provide a passive alternative; however, regular RGB cameras are impacted by changing lighting conditions and privacy concerns.
arXiv Detail & Related papers (2022-06-25T21:54:26Z)
- Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning-based end-to-end depth prediction network that takes noisy raw I-ToF signals as well as an RGB image.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras produce brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data, such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- Drone-based RGB-Infrared Cross-Modality Vehicle Detection via Uncertainty-Aware Learning [59.19469551774703]
Drone-based vehicle detection aims at finding the vehicle locations and categories in an aerial image.
We construct a large-scale drone-based RGB-Infrared vehicle detection dataset, termed DroneVehicle.
Our DroneVehicle collects 28,439 RGB-Infrared image pairs, covering urban roads, residential areas, parking lots, and other scenarios from day to night.
arXiv Detail & Related papers (2020-03-05T05:29:44Z)