LED: Light Enhanced Depth Estimation at Night
- URL: http://arxiv.org/abs/2409.08031v2
- Date: Fri, 18 Oct 2024 12:22:11 GMT
- Title: LED: Light Enhanced Depth Estimation at Night
- Authors: Simon de Moreau, Yasser Almehio, Andrei Bursuc, Hafid El-Idrissi, Bogdan Stanciulescu, Fabien Moutarde
- Abstract summary: We introduce Light Enhanced Depth (LED), a novel cost-effective approach that significantly improves depth estimation in low-light environments.
LED harnesses a pattern projected by high-definition headlights available in modern vehicles.
We release the Nighttime Synthetic Drive Dataset, which comprises 49,990 comprehensively annotated images.
- Score: 10.941842055797125
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Nighttime camera-based depth estimation is a highly challenging task, especially for autonomous driving applications, where accurate depth perception is essential for safe navigation. We aim to improve the reliability of perception systems at night, where models trained on daytime data often fail in the absence of precise but costly LiDAR sensors. In this work, we introduce Light Enhanced Depth (LED), a novel cost-effective approach that significantly improves depth estimation in low-light environments by harnessing a pattern projected by the high-definition headlights available in modern vehicles. LED leads to significant performance boosts across multiple depth-estimation architectures (encoder-decoder, Adabins, DepthFormer) on both synthetic and real datasets. Furthermore, improved performance beyond the illuminated areas reveals a holistic enhancement in scene understanding. Finally, we release the Nighttime Synthetic Drive Dataset, a new synthetic and photo-realistic nighttime dataset comprising 49,990 comprehensively annotated images.
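Since only the abstract is available here, the following is a minimal sketch of the setup it describes: a standard encoder-decoder depth network fed a nighttime frame illuminated by the projected headlight pattern. The tiny architecture, layer sizes, and names are illustrative assumptions, not the authors' implementation.

```python
# Sketch only: a stand-in encoder-decoder depth network (sizes are assumptions).
import torch
import torch.nn as nn

class TinyDepthNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1), nn.Softplus(),
        )

    def forward(self, image):
        # `image` is a nighttime frame lit by the projected headlight pattern;
        # the pattern adds texture the network can use as a depth cue.
        return self.decoder(self.encoder(image))

net = TinyDepthNet()
frame = torch.rand(1, 3, 128, 256)  # pattern-illuminated RGB frame (synthetic here)
depth = net(frame)                  # dense positive depth map, same resolution
print(depth.shape)                  # torch.Size([1, 1, 128, 256])
```

The point of the setup is that the projected pattern adds passive structure to the scene, which any of the evaluated architectures (encoder-decoder, Adabins, DepthFormer) can exploit without architectural changes.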
Related papers
- Depth on Demand: Streaming Dense Depth from a Low Frame Rate Active Sensor [31.118441783431177]
Depth on Demand (DoD) achieves accurate temporal and spatial depth densification by exploiting a high frame rate RGB sensor coupled with a potentially lower frame rate, sparse active depth sensor.
Our proposal jointly enables lower energy consumption and denser shape reconstruction by significantly reducing the streaming requirements on the depth sensor.
We present extended evidence assessing the effectiveness of DoD on indoor and outdoor video datasets, covering both environment scanning and automotive perception use cases.
arXiv Detail & Related papers (2024-09-12T17:59:46Z)
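As a rough illustration of the streaming setup above, the sketch below pairs a fast RGB stream with a slow, sparse depth sensor and densifies every RGB frame from the latest sparse measurements. Sensor rates, the point count, and the densify() placeholder are assumptions, not the paper's method.

```python
# Sketch only: streaming densification loop with assumed sensor rates.
import numpy as np

RGB_FPS, DEPTH_FPS = 30, 5              # assumed sensor rates
H, W = 120, 160

def densify(rgb, sparse_depth):
    # Placeholder for the learned densification network: here we simply
    # broadcast the mean of the valid sparse samples over the frame.
    valid = sparse_depth > 0
    return np.full((H, W), sparse_depth[valid].mean() if valid.any() else 0.0)

last_sparse = np.zeros((H, W))
for t in range(RGB_FPS):                # one second of streaming
    rgb = np.random.rand(H, W, 3)       # stand-in RGB frame
    if t % (RGB_FPS // DEPTH_FPS) == 0: # the active sensor fires less often
        last_sparse = np.zeros((H, W))
        ys, xs = np.random.randint(0, H, 100), np.random.randint(0, W, 100)
        last_sparse[ys, xs] = np.random.uniform(1.0, 50.0, 100)
    dense = densify(rgb, last_sparse)   # every RGB frame gets dense depth
```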
- SelfReDepth: Self-Supervised Real-Time Depth Restoration for Consumer-Grade Sensors [42.48726526726542]
SelfReDepth is a self-supervised deep learning technique for depth restoration.
It uses multiple sequential depth frames and color data to achieve high-quality depth videos with temporal coherence.
Our results demonstrate our approach's real-time performance on real-world datasets.
arXiv Detail & Related papers (2024-06-05T15:38:02Z)
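A hedged sketch of the idea above: fuse a short window of consecutive depth frames so pixels missing in one frame (zeros from a consumer ToF sensor) are filled from its neighbors. The median fusion stands in for the learned, color-guided restoration; the window size and dropout model are assumptions.

```python
# Sketch only: temporal fusion as a stand-in for learned depth restoration.
import numpy as np

def restore(window):
    """window: list of HxW depth maps; 0 marks invalid pixels."""
    stack = np.stack(window).astype(float)
    stack[stack == 0] = np.nan            # ignore dropouts when fusing
    fused = np.nanmedian(stack, axis=0)   # temporally coherent estimate
    return np.nan_to_num(fused)           # pixels invalid everywhere -> 0

frames = [np.random.uniform(0.5, 4.0, (240, 320)) for _ in range(5)]
for f in frames:                          # simulate sensor dropouts
    f[np.random.rand(240, 320) < 0.2] = 0.0
print(restore(frames).shape)              # (240, 320)
```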
- Light the Night: A Multi-Condition Diffusion Framework for Unpaired Low-Light Enhancement in Autonomous Driving [45.97279394690308]
LightDiff is a framework designed to enhance the low-light image quality for autonomous driving applications.
It incorporates a novel multi-condition adapter that adaptively controls the input weights from different modalities, including depth maps, RGB images, and text captions.
It can significantly improve the performance of several state-of-the-art 3D detectors in nighttime conditions while achieving high visual quality scores.
arXiv Detail & Related papers (2024-04-07T04:10:06Z)
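The multi-condition adapter above can be pictured as a gating module over modality embeddings. The sketch below learns per-sample weights over depth, RGB, and text-caption embeddings before they condition a diffusion model; dimensions, names, and the gating design are assumptions, not the paper's architecture.

```python
# Sketch only: learned gating over condition embeddings (design is assumed).
import torch
import torch.nn as nn

class MultiConditionAdapter(nn.Module):
    def __init__(self, dim=256, n_conditions=3):
        super().__init__()
        # One scalar gate per condition, predicted from the conditions themselves.
        self.gate = nn.Linear(dim * n_conditions, n_conditions)

    def forward(self, depth_emb, rgb_emb, text_emb):
        conds = torch.stack([depth_emb, rgb_emb, text_emb], dim=1)  # (B, 3, D)
        w = self.gate(conds.flatten(1)).softmax(dim=-1)             # (B, 3)
        return (w.unsqueeze(-1) * conds).sum(dim=1)                 # (B, D)

adapter = MultiConditionAdapter()
b, d = 4, 256
fused = adapter(torch.randn(b, d), torch.randn(b, d), torch.randn(b, d))
print(fused.shape)  # torch.Size([4, 256])
```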
- Robust Depth Enhancement via Polarization Prompt Fusion Tuning [112.88371907047396]
We present a framework that leverages polarization imaging to improve inaccurate depth measurements from various depth sensors.
Our method first adopts a learning-based strategy in which a neural network is trained to estimate a dense and complete depth map from polarization data and a sensor depth map, whichever sensor it comes from.
To further improve the performance, we propose a Polarization Prompt Fusion Tuning (PPFT) strategy to effectively utilize RGB-based models pre-trained on large-scale datasets.
arXiv Detail & Related papers (2024-04-05T17:55:33Z)
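A minimal sketch of the prompt-fusion-tuning idea above: keep a depth model pre-trained on large RGB datasets frozen and train only a light prompt branch that injects polarization features into its input. The stand-in backbone, channel counts, and injection point are assumptions.

```python
# Sketch only: frozen backbone plus a trainable polarization prompt branch.
import torch
import torch.nn as nn

backbone = nn.Sequential(                 # stand-in for a pre-trained RGB model
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)
for p in backbone.parameters():
    p.requires_grad = False               # frozen: preserves pre-trained knowledge

prompt = nn.Conv2d(4, 3, 3, padding=1)    # trainable: 4 polarization channels
                                          # mapped into the RGB input space
rgb = torch.rand(1, 3, 64, 64)
pol = torch.rand(1, 4, 64, 64)            # e.g. intensities at 0/45/90/135 deg
depth = backbone(rgb + prompt(pol))       # only `prompt` receives gradients
print(depth.shape)                        # torch.Size([1, 1, 64, 64])
```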
- Multi-Modal Neural Radiance Field for Monocular Dense SLAM with a Light-Weight ToF Sensor [58.305341034419136]
We present the first dense SLAM system with a monocular camera and a light-weight ToF sensor.
We propose a multi-modal implicit scene representation that supports rendering both the signals from the RGB camera and light-weight ToF sensor.
Experiments demonstrate that our system effectively exploits the signals of light-weight ToF sensors and achieves competitive results.
arXiv Detail & Related papers (2023-08-28T07:56:13Z)
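The multi-modal implicit representation above can be sketched as one MLP trunk with separate heads supervised by each sensor. Layer sizes and heads below are assumptions, and the real system additionally performs volume rendering and pose tracking.

```python
# Sketch only: shared trunk with per-sensor heads (sizes are assumptions).
import torch
import torch.nn as nn

class MultiModalField(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(3, hidden), nn.ReLU(),
                                   nn.Linear(hidden, hidden), nn.ReLU())
        self.rgb_head = nn.Linear(hidden, 3)   # color seen by the RGB camera
        self.tof_head = nn.Linear(hidden, 1)   # signal seen by the ToF sensor

    def forward(self, xyz):
        h = self.trunk(xyz)
        return self.rgb_head(h), self.tof_head(h)

field = MultiModalField()
rgb, tof = field(torch.rand(1024, 3))          # query 1024 3D points
print(rgb.shape, tof.shape)                    # (1024, 3) (1024, 1)
```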
- Learnable Differencing Center for Nighttime Depth Perception [39.455428679154934]
We propose a simple yet effective framework called LDCNet.
Our key idea is to use Recurrent Inter-Convolution Differencing (RICD) and Illumination-Affinitive Intra-Convolution Differencing (IAICD) to enhance nighttime color images.
On both nighttime depth completion and depth estimation tasks, extensive experiments demonstrate the effectiveness of our LDCNet.
arXiv Detail & Related papers (2023-06-26T09:21:13Z)
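The convolution-differencing ingredient behind RICD and IAICD can be illustrated with a generic central-difference convolution: combine a vanilla convolution with a term built from local differences, which emphasizes structure that survives low light. This generic form is an assumption; the paper's recurrent and illumination-affinitive variants are more elaborate.

```python
# Sketch only: generic central-difference convolution, not the paper's blocks.
import torch
import torch.nn as nn

class DiffConv2d(nn.Module):
    def __init__(self, cin, cout, theta=0.7):
        super().__init__()
        self.conv = nn.Conv2d(cin, cout, 3, padding=1)
        self.theta = theta

    def forward(self, x):
        out = self.conv(x)
        # The kernel's response to a constant patch equals sum(W) * x_center;
        # subtracting it leaves only local variations (edges, texture).
        kernel_sum = self.conv.weight.sum(dim=(2, 3))            # (cout, cin)
        center = nn.functional.conv2d(x, kernel_sum[..., None, None])
        return out - self.theta * center

layer = DiffConv2d(3, 16)
print(layer(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 16, 64, 64])
```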
- Spatiotemporally Consistent HDR Indoor Lighting Estimation [66.26786775252592]
We propose a physically motivated deep learning framework to solve the indoor lighting estimation problem.
Given a single LDR image with a depth map, our method predicts spatially consistent lighting at any given image position.
Our framework achieves photorealistic lighting prediction with higher quality compared to state-of-the-art single-image or video-based methods.
arXiv Detail & Related papers (2023-05-07T20:36:29Z)
- Event Guided Depth Sensing [50.997474285910734]
We present an efficient bio-inspired event-camera-driven depth estimation algorithm.
In our approach, we illuminate areas of interest densely, depending on the scene activity detected by the event camera.
We show the feasibility of our approach on simulated autonomous driving sequences and in real indoor environments.
arXiv Detail & Related papers (2021-10-20T11:41:11Z)
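A small sketch of the event-guided policy above: accumulate events into a density map, then mark high-activity blocks as candidates for dense illumination and depth sampling. Grid size, threshold, and block pooling are assumptions.

```python
# Sketch only: event-density map -> region selection (parameters are assumed).
import numpy as np

H, W, BLOCK = 128, 160, 16
events = np.random.randint(0, [H, W], size=(5000, 2))   # (y, x) event stream

density = np.zeros((H, W))
np.add.at(density, (events[:, 0], events[:, 1]), 1)     # events per pixel

# Pool into blocks and mark the busiest ones for dense depth sampling.
blocks = density.reshape(H // BLOCK, BLOCK, W // BLOCK, BLOCK).sum(axis=(1, 3))
active = blocks > blocks.mean() + blocks.std()          # scene-adaptive threshold
print(f"{active.sum()} of {active.size} blocks selected for illumination")
```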
- Multi-Modal Depth Estimation Using Convolutional Neural Networks [0.8701566919381223]
This paper addresses the problem of dense depth prediction from sparse distance sensor data and a single camera image in challenging weather conditions.
It explores the significance of different sensor modalities, such as camera, radar, and LiDAR, for estimating depth by applying deep learning approaches.
arXiv Detail & Related papers (2020-12-17T15:31:49Z)
- Depth Sensing Beyond LiDAR Range [84.19507822574568]
We propose a novel three-camera system that utilizes small field-of-view cameras.
Our system, along with our novel algorithm for computing metric depth, does not require full pre-calibration.
It can output dense depth maps with practically acceptable accuracy for scenes and objects at long distances.
arXiv Detail & Related papers (2020-04-07T00:09:51Z)
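The sketch below shows why long-range stereo is hard and why small field-of-view (long focal length) cameras help: with depth = f * b / disparity, depth error grows quadratically with depth for a fixed disparity error. The numbers are illustrative assumptions, not the paper's configuration.

```python
# Sketch only: stereo triangulation error at range (all numbers assumed).
f_px = 4000.0        # focal length in pixels (long-focal, small-FOV camera)
baseline = 1.2       # meters between cameras
disp_err = 0.25      # assumed disparity estimation error in pixels

for depth in (50.0, 150.0, 300.0):
    disparity = f_px * baseline / depth
    # Error propagation: |dZ/dd| * disp_err = depth**2 / (f * b) * disp_err
    err = depth ** 2 / (f_px * baseline) * disp_err
    print(f"depth {depth:5.0f} m -> disparity {disparity:5.1f} px, "
          f"error ~{err:4.1f} m")
```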
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.