Self-Supervised Monocular Depth Estimation in the Dark: Towards Data Distribution Compensation
- URL: http://arxiv.org/abs/2404.13854v1
- Date: Mon, 22 Apr 2024 03:39:03 GMT
- Title: Self-Supervised Monocular Depth Estimation in the Dark: Towards Data Distribution Compensation
- Authors: Haolin Yang, Chaoqiang Zhao, Lu Sheng, Yang Tang
- Abstract summary: Using night images for self-supervision is unreliable because the photometric consistency assumption is usually violated in videos taken under complex lighting conditions.
We propose a self-supervised nighttime monocular depth estimation method that does not use any night images during training.
- Score: 24.382795861986803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Nighttime self-supervised monocular depth estimation has received increasing attention in recent years. However, using night images for self-supervision is unreliable because the photometric consistency assumption is usually violated in videos taken under complex lighting conditions. Even with domain adaptation or photometric loss repair, performance is still limited by the poor supervision that night images provide to trainable networks. In this paper, we propose a self-supervised nighttime monocular depth estimation method that does not use any night images during training. Our framework utilizes day images as a stable source of self-supervision and applies physical priors (e.g., wave optics, a reflection model, and a read-shot noise model) to compensate for key day-night differences. With day-to-night data distribution compensation, our framework can be trained in an efficient one-stage self-supervised manner. Although no nighttime images are used during training, qualitative and quantitative results demonstrate that our method achieves state-of-the-art depth estimation results on the challenging nuScenes-Night and RobotCar-Night benchmarks compared with existing methods.
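The abstract names a read-shot noise model among the physical priors used for day-to-night data distribution compensation. The snippet below is a minimal illustrative sketch (not the authors' implementation) of that general idea: a day image is darkened to a nighttime exposure level, then corrupted with signal-dependent shot noise and signal-independent read noise. The function name simulate_night and the parameter values (gain, full_well, read_std) are assumptions chosen for illustration only.

```python
import numpy as np

def simulate_night(day_img, gain=0.05, full_well=1000.0, read_std=2.0, rng=None):
    """day_img: float array in [0, 1]. Returns a darkened, noisy pseudo-night image."""
    rng = np.random.default_rng() if rng is None else rng
    # Darken: scale scene radiance down to a nighttime exposure level (in photo-electrons).
    photons = day_img * gain * full_well
    # Shot noise: photon arrival counts are Poisson-distributed around the signal.
    shot = rng.poisson(photons).astype(np.float64)
    # Read noise: additive Gaussian noise from the sensor readout circuitry.
    read = rng.normal(0.0, read_std, size=day_img.shape)
    # Convert back to normalized intensity and clip to the valid range.
    return np.clip((shot + read) / full_well, 0.0, 1.0)

# Example usage on a random "day" image:
# night = simulate_night(np.random.rand(256, 256, 3))
```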
Related papers
- Night-to-Day Translation via Illumination Degradation Disentanglement [51.77716565167767]
Night-to-Day translation aims to achieve day-like vision for nighttime scenes.
However, processing night images with complex degradations remains a significant challenge under unpaired conditions.
We propose N2D3 to identify different degradation patterns in nighttime images.
arXiv Detail & Related papers (2024-11-21T08:51:32Z)
- Exploring Reliable Matching with Phase Enhancement for Night-time Semantic Segmentation [58.180226179087086]
We propose a novel end-to-end optimized approach, named NightFormer, tailored for night-time semantic segmentation.
Specifically, we design a pixel-level texture enhancement module to acquire texture-aware features hierarchically with phase enhancement and amplified attention.
Our proposed method performs favorably against state-of-the-art night-time semantic segmentation methods.
arXiv Detail & Related papers (2024-08-25T13:59:31Z)
- Robust Monocular Depth Estimation under Challenging Conditions [81.57697198031975]
State-of-the-art monocular depth estimation approaches are highly unreliable under challenging illumination and weather conditions.
We tackle these safety-critical issues with md4all: a simple and effective solution that works reliably under both adverse and ideal conditions.
arXiv Detail & Related papers (2023-08-18T17:59:01Z)
- Disentangled Contrastive Image Translation for Nighttime Surveillance [87.03178320662592]
Nighttime surveillance suffers from degraded imagery due to poor illumination and requires arduous human annotation.
Existing methods rely on multi-spectral images to perceive objects in the dark, but these are hampered by low resolution and the absence of color.
We argue that the ultimate solution for nighttime surveillance is night-to-day translation, or Night2Day.
This paper contributes a new surveillance dataset called NightSuR. It includes six scenes to support the study on nighttime surveillance.
arXiv Detail & Related papers (2023-07-11T06:40:27Z)
- STEPS: Joint Self-supervised Nighttime Image Enhancement and Depth Estimation [12.392842482031558]
We propose a method that jointly learns a nighttime image enhancer and a depth estimator, without using ground truth for either task.
Our method tightly entangles two self-supervised tasks using a newly proposed uncertain pixel masking strategy.
We benchmark the method on two established datasets: nuScenes and RobotCar.
arXiv Detail & Related papers (2023-02-02T18:59:47Z)
- When the Sun Goes Down: Repairing Photometric Losses for All-Day Depth Estimation [47.617222712429026]
We show how to use a combination of three techniques to allow the existing photometric losses to work for both day and nighttime images.
First, we introduce a per-pixel neural intensity transformation to compensate for the light changes that occur between successive frames.
Second, we predict a per-pixel residual flow map that we use to correct the reprojection correspondences induced by the estimated ego-motion and depth.
arXiv Detail & Related papers (2022-06-28T09:29:55Z)
- Self-supervised Monocular Depth Estimation for All Day Images using Domain Separation [17.066753214406525]
We propose a domain-separated network for self-supervised depth estimation of all-day images.
Our approach achieves state-of-the-art depth estimation results for all-day images on the challenging Oxford RobotCar dataset.
arXiv Detail & Related papers (2021-08-17T13:52:19Z)
- Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular Depth Estimation in the Dark [20.66405067066299]
We introduce Priors-Based Regularization to learn distribution knowledge from unpaired depth maps.
We also leverage a Mapping-Consistent Image Enhancement module to enhance image visibility and contrast.
Our framework achieves remarkable improvements and state-of-the-art results on two nighttime datasets.
arXiv Detail & Related papers (2021-08-09T06:24:35Z)
- Unsupervised Monocular Depth Estimation for Night-time Images using Adversarial Domain Feature Adaptation [17.067988025947024]
We look into the problem of estimating per-pixel depth maps from unconstrained RGB monocular night-time images.
The state-of-the-art day-time depth estimation methods fail miserably when tested with night-time images.
We propose to solve this problem by posing it as a domain adaptation problem where a network trained with day-time images is adapted to work for night-time images.
arXiv Detail & Related papers (2020-10-03T17:55:16Z)
- Map-Guided Curriculum Domain Adaptation and Uncertainty-Aware Evaluation for Semantic Nighttime Image Segmentation [107.33492779588641]
We develop a curriculum framework to adapt semantic segmentation models from day to night without using nighttime annotations.
We also design a new evaluation framework to address the substantial uncertainty of semantics in nighttime images.
arXiv Detail & Related papers (2020-05-28T16:54:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.