Learnable Differencing Center for Nighttime Depth Perception
- URL: http://arxiv.org/abs/2306.14538v4
- Date: Mon, 4 Sep 2023 15:35:19 GMT
- Title: Learnable Differencing Center for Nighttime Depth Perception
- Authors: Zhiqiang Yan and Yupeng Zheng and Chongyi Li and Jun Li and Jian Yang
- Abstract summary: We propose a simple yet effective framework called LDCNet.
Our key idea is to use Recurrent Inter-Convolution Differencing (RICD) and Illumination-Affinitive Intra-Convolution Differencing (IAICD) to enhance the nighttime color images.
On both nighttime depth completion and depth estimation tasks, extensive experiments demonstrate the effectiveness of our LDCNet.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Depth completion is the task of recovering dense depth maps from sparse ones,
usually with the help of color images. Existing image-guided methods perform
well on daytime depth perception self-driving benchmarks, but struggle in
nighttime scenarios with poor visibility and complex illumination. To address
these challenges, we propose a simple yet effective framework called LDCNet.
Our key idea is to use Recurrent Inter-Convolution Differencing (RICD) and
Illumination-Affinitive Intra-Convolution Differencing (IAICD) to enhance the
nighttime color images and reduce the negative effects of the varying
illumination, respectively. RICD explicitly estimates global illumination by
differencing two convolutions with different kernels, treating the
small-kernel-convolution feature as the center of the large-kernel-convolution
feature in a new perspective. IAICD softly alleviates local relative light
intensity by differencing a single convolution, where the center is dynamically
aggregated based on neighboring pixels and the estimated illumination map in
RICD. On both nighttime depth completion and depth estimation tasks, extensive
experiments demonstrate the effectiveness of our LDCNet, reaching the state of
the art.
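To make the abstract's description of the two differencing operations concrete, here is a minimal PyTorch sketch of one plausible reading of RICD and IAICD. The kernel sizes, channel widths, recurrence depth, and the sigmoid-bounded illumination map are illustrative assumptions, not details taken from the paper.
```python
import torch
import torch.nn as nn


class RICD(nn.Module):
    """Recurrent Inter-Convolution Differencing (illustrative sketch)."""

    def __init__(self, channels=3, steps=3):
        super().__init__()
        # large- and small-kernel convolutions whose difference is meant to
        # expose global illumination; 7x7 / 3x3 are assumed sizes
        self.large = nn.Conv2d(channels, channels, kernel_size=7, padding=3)
        self.small = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.steps = steps

    def forward(self, x):
        illum = x
        for _ in range(self.steps):
            # treat the small-kernel feature as the "center" of the
            # large-kernel feature and difference the two
            illum = self.large(illum) - self.small(illum)
        return torch.sigmoid(illum)  # bounded illumination estimate


class IAICD(nn.Module):
    """Illumination-Affinitive Intra-Convolution Differencing (sketch)."""

    def __init__(self, in_ch=3, out_ch=32):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1)
        # aggregation weights for the dynamic center, driven by neighboring
        # pixels and the illumination map estimated by RICD
        self.center_weights = nn.Conv2d(in_ch + 1, out_ch, kernel_size=3, padding=1)
        self.center_proj = nn.Conv2d(in_ch, out_ch, kernel_size=1)

    def forward(self, x, illum):
        feat = self.conv(x)
        w = torch.sigmoid(self.center_weights(torch.cat([x, illum], dim=1)))
        center = w * self.center_proj(x)
        return feat - center  # difference the conv against its dynamic center


if __name__ == "__main__":
    rgb = torch.randn(1, 3, 64, 64)                # toy nighttime image
    illum = RICD()(rgb).mean(dim=1, keepdim=True)  # collapse to one channel
    print(IAICD()(rgb, illum).shape)               # torch.Size([1, 32, 64, 64])
```
The point the sketch tries to capture is that RICD differences convolutions with two different receptive fields to estimate global illumination, while IAICD subtracts a dynamically aggregated center that depends on that estimate.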
Related papers
- Unveiling the Depths: A Multi-Modal Fusion Framework for Challenging Scenarios [103.72094710263656]
This paper presents a novel approach that identifies and integrates dominant cross-modality depth features with a learning-based framework.
We propose a novel confidence loss steering a confidence predictor network to yield a confidence map specifying latent potential depth areas.
With the resulting confidence map, we propose a multi-modal fusion network that fuses the final depth in an end-to-end manner.
arXiv Detail & Related papers (2024-02-19T04:39:16Z)
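As a rough illustration of the confidence-guided fusion described in this entry (not the paper's actual architecture), a per-pixel confidence map can gate between two modality-specific depth estimates; the two-input setup, layer sizes, and sigmoid head below are assumptions.
```python
import torch
import torch.nn as nn


class ConfidenceFusion(nn.Module):
    def __init__(self):
        super().__init__()
        # predicts a per-pixel confidence in [0, 1] from both depth estimates
        self.confidence = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, depth_a, depth_b):
        conf = self.confidence(torch.cat([depth_a, depth_b], dim=1))
        # confidence-weighted blend of the two modality-specific depths
        return conf * depth_a + (1.0 - conf) * depth_b
```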
- Low-Light Image Enhancement with Illumination-Aware Gamma Correction and Complete Image Modelling Network [69.96295927854042]
Low-light environments usually lead to less informative large-scale dark areas.
We propose to integrate the effectiveness of gamma correction with the strong modelling capacities of deep networks.
Because the exponential operation introduces high computational complexity, we propose to use a Taylor series to approximate gamma correction (a brief sketch of this approximation follows the entry).
arXiv Detail & Related papers (2023-08-16T08:46:51Z)
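A minimal sketch of the Taylor-series idea: gamma correction x^γ equals exp(γ ln x), so truncating the Taylor series of exp replaces the exponential with multiplications and additions. The truncation order and the intensity range used below are assumptions, not the paper's exact scheme.
```python
import torch


def gamma_taylor(x, gamma, order=8):
    """Approximate x**gamma = exp(gamma * ln x) with a truncated Taylor series."""
    t = gamma * torch.log(x)
    out = torch.ones_like(x)
    term = torch.ones_like(x)
    for k in range(1, order + 1):
        term = term * t / k   # accumulates t**k / k!
        out = out + term
    return out


x = 0.1 + 0.9 * torch.rand(1, 3, 8, 8)   # pixel intensities in [0.1, 1)
print((gamma_taylor(x, 0.45) - x ** 0.45).abs().max())  # small truncation error
```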
- Enhancing Low-light Light Field Images with A Deep Compensation Unfolding Network [52.77569396659629]
This paper presents the deep compensation network unfolding (DCUNet) for restoring light field (LF) images captured under low-light conditions.
The framework uses the intermediate enhanced result to estimate the illumination map, which is then employed in the unfolding process to produce a new enhanced result.
To properly leverage the unique characteristics of LF images, this paper proposes a pseudo-explicit feature interaction module.
arXiv Detail & Related papers (2023-08-10T07:53:06Z)
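A minimal sketch of the unfolding loop this entry describes, where each stage estimates an illumination map from the current enhanced result and feeds it back to produce the next one; the stage count and layer shapes are illustrative assumptions, not DCUNet's actual design.
```python
import torch
import torch.nn as nn


class UnfoldingEnhancer(nn.Module):
    """Toy unfolded enhancement loop (assumed structure, not DCUNet itself)."""

    def __init__(self, stages=3):
        super().__init__()
        self.estimate_illum = nn.Conv2d(3, 1, 3, padding=1)  # illumination head
        self.refine = nn.Conv2d(4, 3, 3, padding=1)          # enhancement step
        self.stages = stages

    def forward(self, low_light):
        enhanced = low_light
        for _ in range(self.stages):
            # estimate an illumination map from the current enhanced result...
            illum = torch.sigmoid(self.estimate_illum(enhanced))
            # ...then use it to produce a new enhanced result
            enhanced = self.refine(torch.cat([enhanced, illum], dim=1))
        return enhanced
```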
- When the Sun Goes Down: Repairing Photometric Losses for All-Day Depth Estimation [47.617222712429026]
We show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images.
First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames.
Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth.
arXiv Detail & Related papers (2022-06-28T09:29:55Z)
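A hedged sketch of where the two corrections in this entry could enter a photometric loss: a per-pixel intensity transform applied to the warped source frame and a residual flow added to the reprojection grid. The affine form of the intensity transform and the omitted prediction heads for a, b, and the residual flow are assumptions.
```python
import torch
import torch.nn.functional as F


def corrected_photometric_loss(target, source, grid, a, b, residual_flow):
    # grid: reprojection correspondences from estimated depth and ego-motion,
    # shaped (B, H, W, 2) in normalized [-1, 1] coordinates
    warped = F.grid_sample(source, grid + residual_flow, align_corners=True)
    warped = a * warped + b  # per-pixel intensity compensation between frames
    return (target - warped).abs().mean()
```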
- Unsupervised Visible-light Images Guided Cross-Spectrum Depth Estimation from Dual-Modality Cameras [33.77748026254935]
Cross-spectrum depth estimation aims to provide a depth map in all illumination conditions with a pair of dual-spectrum images.
In this paper, we propose an unsupervised visible-light image guided cross-spectrum (i.e., thermal and visible-light, TIR-VIS for short) depth estimation framework.
Our method achieves better performance than the compared existing methods.
arXiv Detail & Related papers (2022-04-30T12:58:35Z)
- Wild ToFu: Improving Range and Quality of Indirect Time-of-Flight Depth with RGB Fusion in Challenging Environments [56.306567220448684]
We propose a new learning-based, end-to-end depth prediction network that takes noisy raw I-ToF signals as well as an RGB image as input.
We show more than 40% RMSE improvement on the final depth map compared to the baseline approach.
arXiv Detail & Related papers (2021-12-07T15:04:14Z)
- Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular Depth Estimation in the Dark [20.66405067066299]
We introduce Priors-Based Regularization to learn distribution knowledge from unpaired depth maps.
We also leverage a Mapping-Consistent Image Enhancement module to enhance image visibility and contrast.
Our framework achieves remarkable improvements and state-of-the-art results on two nighttime datasets.
arXiv Detail & Related papers (2021-08-09T06:24:35Z)
- Unsupervised Monocular Depth Estimation in Highly Complex Environments [9.580317751486636]
Unsupervised monocular depth estimation methods mainly focus on the day-time scenario.
In challenging environments such as night, rainy nights, or snowy winters, the photometry of the same pixel across different frames is inconsistent.
We address this problem with domain adaptation and propose a unified image transfer-based adaptation framework.
arXiv Detail & Related papers (2021-07-28T02:35:38Z)
- Bridge the Vision Gap from Field to Command: A Deep Learning Network Enhancing Illumination and Details [17.25188250076639]
We propose a two-stream framework named NEID to tune up the brightness and enhance the details simultaneously.
The proposed method consists of three parts: a Light Enhancement (LE) module, a Detail Refinement (DR) module, and a Feature Fusing (FF) module (a compositional sketch follows this entry).
arXiv Detail & Related papers (2021-01-20T09:39:57Z)
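A compositional sketch of the two-stream LE / DR / FF structure named in this entry; the module internals below are placeholders, since the summary does not describe them.
```python
import torch
import torch.nn as nn


class NEIDSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.light_enhance = nn.Conv2d(3, 16, 3, padding=1)  # LE stream
        self.detail_refine = nn.Conv2d(3, 16, 3, padding=1)  # DR stream
        self.feature_fuse = nn.Conv2d(32, 3, 3, padding=1)   # FF module

    def forward(self, x):
        le = torch.relu(self.light_enhance(x))  # brightness branch
        dr = torch.relu(self.detail_refine(x))  # detail branch
        return self.feature_fuse(torch.cat([le, dr], dim=1))
```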