Unsupervised Monocular Depth Estimation in Highly Complex Environments
- URL: http://arxiv.org/abs/2107.13137v1
- Date: Wed, 28 Jul 2021 02:35:38 GMT
- Title: Unsupervised Monocular Depth Estimation in Highly Complex Environments
- Authors: Chaoqiang Zhao, Yang Tang and Qiyu Sun
- Abstract summary: Unsupervised monocular depth estimation methods mainly focus on the day-time scenario.
In some challenging environments, such as night, rainy nights, or snowy winters, the photometry of the same pixel across different frames is inconsistent.
We address this challenge with domain adaptation and propose a unified image-transfer-based adaptation framework.
- Score: 9.580317751486636
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Previous unsupervised monocular depth estimation methods mainly focus on the
day-time scenario, and their frameworks are driven by warped photometric consistency.
In some challenging environments, however, such as night, rainy nights, or snowy winters,
the photometry of the same pixel across different frames is inconsistent because of
complex lighting and reflections, so day-time unsupervised frameworks cannot be applied
directly to these complex scenarios. In this paper, we investigate unsupervised monocular
depth estimation in certain highly complex scenarios. We address this challenging problem
with domain adaptation and propose a unified image-transfer-based adaptation framework
built on monocular videos. The depth model trained on day-time scenarios is adapted to
different complex scenarios. Instead of adapting the whole depth network, we adapt only
the encoder network, which lowers computational complexity. The depth models adapted by
the proposed framework to different scenarios share the same decoder, which is practical
for deployment. Constraints on both the feature space and the output space drive the
framework to learn the key features for depth decoding, and a smoothness loss is
introduced into the adaptation framework for better depth estimation performance.
Extensive experiments show the effectiveness of the proposed unsupervised framework in
estimating dense depth maps from night-time, rainy night-time, and snowy winter images.
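The "warped photometric consistency" that drives the day-time frameworks can be made concrete. Below is a minimal, generic PyTorch sketch of the standard view-synthesis objective (backproject pixels with the predicted depth, reproject them into the source view with a relative pose, bilinearly sample the source frame, and penalize the photometric difference). It illustrates the common formulation rather than this paper's exact implementation, and all function and variable names are our own.

```python
import torch
import torch.nn.functional as F

def backproject(depth, K_inv):
    # Lift every pixel to a 3-D point in the target camera frame.
    # depth: (B, 1, H, W); K_inv: (B, 3, 3) inverse camera intrinsics.
    b, _, h, w = depth.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=depth.dtype, device=depth.device),
        torch.arange(w, dtype=depth.dtype, device=depth.device),
        indexing="ij",
    )
    pix = torch.stack([xs, ys, torch.ones_like(xs)], 0).reshape(1, 3, -1)
    rays = K_inv @ pix.expand(b, -1, -1)           # (B, 3, H*W) viewing rays
    return rays * depth.reshape(b, 1, -1)          # scale rays by depth

def photometric_warp_loss(tgt, src, depth, T, K, K_inv):
    # Synthesize the target frame by warping the source frame through the
    # predicted depth and relative pose T (B, 4, 4), then compare photometry.
    b, _, h, w = tgt.shape
    pts = T[:, :3, :3] @ backproject(depth, K_inv) + T[:, :3, 3:]
    pix = K @ pts                                  # project into the source view
    pix = pix[:, :2] / pix[:, 2:].clamp(min=1e-6)  # perspective divide
    gx = 2.0 * pix[:, 0] / (w - 1) - 1.0           # grid_sample expects [-1, 1]
    gy = 2.0 * pix[:, 1] / (h - 1) - 1.0
    grid = torch.stack([gx, gy], -1).reshape(b, h, w, 2)
    warped = F.grid_sample(src, grid, padding_mode="border", align_corners=True)
    return (warped - tgt).abs().mean()             # L1 photometric error
```

When lighting and reflections change between frames (night, rain, snow), this photometric error is no longer a reliable supervision signal, which is exactly the failure mode the paper's adaptation framework targets.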
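The adaptation scheme described in the abstract (a frozen, shared decoder; encoder-only updates; constraints in both feature space and output space; a smoothness loss) can likewise be sketched. The training step below is an assumption about how such a framework could be wired, not the paper's released code: `day_enc`, `night_enc`, `decoder`, the night-to-day `transfer` network, the L1 form of both constraints, and the loss weights are all hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

def edge_aware_smoothness(disp, img):
    # Standard edge-aware smoothness: penalize depth/disparity gradients,
    # down-weighted where the image itself has strong edges.
    disp = disp / (disp.mean(dim=(2, 3), keepdim=True) + 1e-7)
    dx = (disp[..., :, :-1] - disp[..., :, 1:]).abs()
    dy = (disp[..., :-1, :] - disp[..., 1:, :]).abs()
    ix = (img[..., :, :-1] - img[..., :, 1:]).abs().mean(1, keepdim=True)
    iy = (img[..., :-1, :] - img[..., 1:, :]).abs().mean(1, keepdim=True)
    return (dx * torch.exp(-ix)).mean() + (dy * torch.exp(-iy)).mean()

def adaptation_step(night_enc, day_enc, decoder, transfer, night_img,
                    w_feat=1.0, w_out=1.0, w_smooth=1e-3):
    # One encoder-only adaptation step (hypothetical wiring): day_enc,
    # decoder, and the night->day image-transfer network are frozen
    # teachers; only night_enc receives gradients.
    with torch.no_grad():
        day_like = transfer(night_img)       # transfer the image to day style
        feat_ref = day_enc(day_like)         # reference encoder features
        depth_ref = decoder(feat_ref)        # reference depth, shared decoder
    feat = night_enc(night_img)
    depth = decoder(feat)                    # same (frozen) shared decoder
    loss = (w_feat * F.l1_loss(feat, feat_ref)      # feature-space constraint
            + w_out * F.l1_loss(depth, depth_ref)   # output-space constraint
            + w_smooth * edge_aware_smoothness(depth, night_img))
    return loss
```

Because only `night_enc` is updated and the decoder is shared across scenarios, a single decoder can serve the night-time, rainy night-time, and snowy winter models, which is the practicality the abstract points to.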
Related papers
- Adaptive Stereo Depth Estimation with Multi-Spectral Images Across All Lighting Conditions [58.88917836512819] (2024-11-06T03:30:46Z)
We propose a novel framework incorporating stereo depth estimation to enforce accurate geometric constraints.
To mitigate the effects of poor lighting on stereo matching, we introduce Degradation Masking.
Our method achieves state-of-the-art (SOTA) performance on the Multi-Spectral Stereo (MS2) dataset.
- Dusk Till Dawn: Self-supervised Nighttime Stereo Depth Estimation using Visual Foundation Models [16.792458193160407] (2024-05-18T03:07:23Z)
Self-supervised depth estimation algorithms rely heavily on frame-warping relationships.
We introduce an algorithm designed to achieve accurate self-supervised stereo depth estimation focusing on nighttime conditions.
- Unveiling the Depths: A Multi-Modal Fusion Framework for Challenging Scenarios [103.72094710263656] (2024-02-19T04:39:16Z)
This paper presents a novel approach that identifies and integrates dominant cross-modality depth features with a learning-based framework.
We propose a novel confidence loss steering a confidence predictor network to yield a confidence map that specifies latent potential depth areas.
With the resulting confidence map, we propose a multi-modal fusion network that fuses the final depth in an end-to-end manner.
- Learnable Differencing Center for Nighttime Depth Perception [39.455428679154934] (2023-06-26T09:21:13Z)
We propose a simple yet effective framework called LDCNet.
Our key idea is to use Recurrent Inter-Convolution Differencing (RICD) and Illumination-Affinitive Intra-Convolution Differencing (IAICD) to enhance nighttime color images.
Extensive experiments on both nighttime depth completion and depth estimation tasks demonstrate the effectiveness of our LDCNet.
- Fully Self-Supervised Depth Estimation from Defocus Clue [79.63579768496159] (2023-03-19T19:59:48Z)
We propose a self-supervised framework that estimates depth purely from a sparse focal stack.
We show that our framework circumvents the need for depth and AIF image ground truth and achieves superior predictions.
- Uncertainty Guided Depth Fusion for Spike Camera [49.41822923588663] (2022-08-26T13:04:01Z)
We propose a novel Uncertainty-Guided Depth Fusion (UGDF) framework to fuse the predictions of monocular and stereo depth estimation networks for spike cameras.
Our framework is motivated by the fact that stereo spike depth estimation achieves better results at close range.
To demonstrate the advantage of spike depth estimation over traditional camera depth estimation, we contribute a spike-depth dataset named CitySpike20K.
- A high-precision self-supervised monocular visual odometry in foggy weather based on robust cycled generative adversarial networks and multi-task learning aided depth estimation [0.0] (2022-03-09T15:41:57Z)
This paper proposes a high-precision self-supervised monocular VO specifically designed for navigation in foggy weather.
A cycled generative adversarial network is designed to obtain a high-quality self-supervised loss by forcing the forward and backward half-cycles to output consistent estimations.
Gradient-based and perceptual losses are introduced to eliminate the interference of complex photometric changes on the self-supervised loss in foggy weather.
- Regularizing Nighttime Weirdness: Efficient Self-supervised Monocular Depth Estimation in the Dark [20.66405067066299] (2021-08-09T06:24:35Z)
We introduce Priors-Based Regularization to learn distribution knowledge from unpaired depth maps.
We also leverage a Mapping-Consistent Image Enhancement module to enhance image visibility and contrast.
Our framework achieves remarkable improvements and state-of-the-art results on two nighttime datasets.
- Self-Supervised Monocular Depth Estimation of Untextured Indoor Rotated Scenes [6.316693022958222] (2021-06-24T12:27:16Z)
Self-supervised deep learning methods have leveraged stereo images for training monocular depth estimation.
These methods do not match the performance of supervised methods on indoor environments with camera rotation.
We propose a novel Filled Disparity Loss term that corrects for the ambiguity of the image reconstruction error loss in textureless regions.
- Adaptive confidence thresholding for monocular depth estimation [83.06265443599521] (2020-09-27T13:26:16Z)
We propose a new approach that leverages pseudo ground-truth depth maps of stereo images generated by self-supervised stereo matching methods.
The confidence map of the pseudo ground-truth depth map is estimated to mitigate the performance degradation caused by inaccurate pseudo depth maps.
Experimental results demonstrate superior performance to state-of-the-art monocular depth estimation methods.
- DeFeat-Net: General Monocular Depth via Simultaneous Unsupervised Representation Learning [65.94499390875046] (2020-03-30T13:10:32Z)
DeFeat-Net is an approach to simultaneously learn a cross-domain dense feature representation.
Our technique outperforms the current state of the art with around a 10% reduction in all error measures.
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.