Distortion-Tolerant Monocular Depth Estimation On Omnidirectional Images
Using Dual-cubemap
- URL: http://arxiv.org/abs/2203.09733v1
- Date: Fri, 18 Mar 2022 04:20:36 GMT
- Title: Distortion-Tolerant Monocular Depth Estimation On Omnidirectional Images
Using Dual-cubemap
- Authors: Zhijie Shen, Chunyu Lin, Lang Nie, Kang Liao, and Yao Zhao
- Abstract summary: We propose a distortion-tolerant omnidirectional depth estimation algorithm using a dual-cubemap.
In the DCDE module, we present a rotation-based dual-cubemap model to estimate accurate NFoV depth.
A boundary revision module is then designed to smooth the discontinuous boundaries, contributing to precise and visually continuous omnidirectional depths.
- Score: 37.82642960470551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Estimating the depth of omnidirectional images is more challenging
than that of normal field-of-view (NFoV) images because the varying distortion
can significantly twist an object's shape. Existing methods suffer from this
distortion when estimating the depth of omnidirectional images, leading to
inferior performance. To reduce the negative impact of distortion, we propose
a distortion-tolerant omnidirectional depth estimation algorithm using a
dual-cubemap. It comprises two modules: a Dual-Cubemap Depth Estimation (DCDE)
module and a Boundary Revision (BR) module. In the DCDE module, we present a
rotation-based dual-cubemap model to estimate accurate NFoV depth, reducing
distortion at the cost of boundary discontinuities in the omnidirectional
depth. The boundary revision module is then designed to smooth these
discontinuous boundaries, contributing to precise and visually continuous
omnidirectional depths. Extensive experiments demonstrate the superiority of
our method over other state-of-the-art solutions.
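The geometric core of the dual-cubemap idea is that each cubemap face is an ordinary NFoV perspective view, so per-face depth can be estimated without equirectangular distortion. The sketch below samples one face from an equirectangular panorama; the `yaw_offset` parameter stands in for the rotation between the two cubemaps, and the function name and defaults are illustrative assumptions, not the authors' code.

```python
import numpy as np
import cv2

def cubemap_face_from_equirect(erp, face_size=256, yaw_offset=0.0):
    """Sample the front (+z) cubemap face from an equirectangular image.

    erp        : H x W x 3 equirectangular panorama (numpy array)
    face_size  : output face resolution in pixels
    yaw_offset : horizontal rotation in radians; a second cubemap rotated by
                 e.g. pi/4 provides the "dual" views whose face centers cover
                 the first cubemap's face boundaries.
    """
    H, W = erp.shape[:2]
    # Pixel centers of the face mapped to the tangent plane [-1, 1]^2
    u = (np.arange(face_size) + 0.5) / face_size * 2.0 - 1.0
    x, y = np.meshgrid(u, u)
    z = np.ones_like(x)                          # front face sits at z = 1
    norm = np.sqrt(x**2 + y**2 + z**2)
    dx, dy, dz = x / norm, y / norm, z / norm    # unit viewing directions
    # Viewing directions -> spherical angles -> equirectangular coordinates
    lon = np.arctan2(dx, dz) + yaw_offset
    lat = np.arcsin(dy)
    map_x = ((lon / (2.0 * np.pi)) % 1.0) * W
    map_y = (lat / np.pi + 0.5) * H
    return cv2.remap(erp, map_x.astype(np.float32), map_y.astype(np.float32),
                     interpolation=cv2.INTER_LINEAR, borderMode=cv2.BORDER_WRAP)
```

Running an NFoV depth estimator on every face of both cubemaps, then reconciling the rotated copy across face borders, is roughly the intuition that the DCDE and BR modules build on with learned components.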
Related papers
- Adaptive Stereo Depth Estimation with Multi-Spectral Images Across All Lighting Conditions [58.88917836512819]
We propose a novel framework incorporating stereo depth estimation to enforce accurate geometric constraints.
To mitigate the effects of poor lighting on stereo matching, we introduce Degradation Masking.
Our method achieves state-of-the-art (SOTA) performance on the Multi-Spectral Stereo (MS2) dataset.
arXiv Detail & Related papers (2024-11-06T03:30:46Z) - DiffusionDepth: Diffusion Denoising Approach for Monocular Depth
Estimation [23.22005119986485]
DiffusionDepth is a new approach that reformulates monocular depth estimation as a denoising diffusion process.
It learns an iterative denoising process to 'denoise' a random depth distribution into a depth map with the guidance of monocular visual conditions.
Experimental results on KITTI and NYU-Depth-V2 datasets suggest that a simple yet efficient diffusion approach could reach state-of-the-art performance in both indoor and outdoor scenarios with acceptable inference time.
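For intuition only, a single reverse-diffusion step that refines a noisy depth map under image conditioning could look like the following sketch; `eps_model`, the schedule handling, and the plain DDPM update are assumptions of this illustration, not DiffusionDepth's actual architecture.

```python
import torch

@torch.no_grad()
def reverse_depth_step(depth_t, t, image_feats, eps_model, betas):
    """One reverse diffusion step: noisy depth at step t -> sample at step t-1.

    depth_t     : (B, 1, H, W) current noisy depth estimate
    t           : integer timestep
    image_feats : conditioning features from the monocular image encoder
    eps_model   : placeholder network predicting the injected noise
    betas       : (T,) noise schedule
    """
    alphas = 1.0 - betas
    alpha_bar = torch.cumprod(alphas, dim=0)
    eps = eps_model(depth_t, t, image_feats)                 # predicted noise
    coef = betas[t] / torch.sqrt(1.0 - alpha_bar[t])
    mean = (depth_t - coef * eps) / torch.sqrt(alphas[t])    # posterior mean
    if t == 0:
        return mean                                          # final denoised depth
    return mean + torch.sqrt(betas[t]) * torch.randn_like(depth_t)
```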
arXiv Detail & Related papers (2023-03-09T03:48:24Z) - Frequency-Aware Self-Supervised Monocular Depth Estimation [41.97188738587212]
We present two versatile methods to enhance self-supervised monocular depth estimation models.
The high generalizability of our methods is achieved by solving the fundamental and ubiquitous problems in photometric loss function.
We are the first to propose blurring images to improve depth estimators with an interpretable analysis.
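As a hedged sketch of the blurring idea (kernel size, sigma, and the plain L1 term are assumptions, not the paper's exact loss), low-pass filtering both images before the photometric comparison reduces the influence of high-frequency texture on the loss:

```python
import torch
import torch.nn.functional as F

def gaussian_blur(img, kernel_size=5, sigma=1.5):
    """Depthwise Gaussian blur for a (B, C, H, W) tensor."""
    coords = torch.arange(kernel_size, dtype=img.dtype, device=img.device)
    coords = coords - (kernel_size - 1) / 2.0
    g = torch.exp(-coords ** 2 / (2.0 * sigma ** 2))
    g = g / g.sum()
    kernel = (g[:, None] * g[None, :]).view(1, 1, kernel_size, kernel_size)
    kernel = kernel.repeat(img.shape[1], 1, 1, 1)        # one filter per channel
    return F.conv2d(img, kernel, padding=kernel_size // 2, groups=img.shape[1])

def blurred_photometric_loss(pred, target):
    """L1 photometric loss computed on low-pass filtered images."""
    return (gaussian_blur(pred) - gaussian_blur(target)).abs().mean()
```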
arXiv Detail & Related papers (2022-10-11T14:30:26Z) - Non-learning Stereo-aided Depth Completion under Mis-projection via
Selective Stereo Matching [0.5067618621449753]
We propose a non-learning depth completion method for a sparse depth map captured using a light detection and ranging (LiDAR) sensor guided by a pair of stereo images.
The proposed method reduced the mean absolute error (MAE) of the depth estimates to 0.65 times that of the baseline and was approximately twice as accurate at long range.
arXiv Detail & Related papers (2022-10-04T07:46:56Z) - On Robust Cross-View Consistency in Self-Supervised Monocular Depth Estimation [56.97699793236174]
We study two kinds of robust cross-view consistency in this paper.
We exploit the temporal coherence in both depth feature space and 3D voxel space for self-supervised monocular depth estimation.
Experimental results on several outdoor benchmarks show that our method outperforms current state-of-the-art techniques.
arXiv Detail & Related papers (2022-09-19T03:46:13Z) - Deep Two-View Structure-from-Motion Revisited [83.93809929963969]
Two-view structure-from-motion (SfM) is the cornerstone of 3D reconstruction and visual SLAM.
We propose to revisit the problem of deep two-view SfM by leveraging the well-posedness of the classic pipeline.
Our method consists of 1) an optical flow estimation network that predicts dense correspondences between two frames; 2) a normalized pose estimation module that computes relative camera poses from the 2D optical flow correspondences, and 3) a scale-invariant depth estimation network that leverages epipolar geometry to reduce the search space, refine the dense correspondences, and estimate relative depth maps.
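The classical counterpart of the last two stages can be sketched with OpenCV: matched points sampled from the flow field give the essential matrix, the essential matrix gives the relative pose, and triangulation gives relative depth. This is the well-posed pipeline being revisited, not the paper's learned modules; the function below is an illustrative stand-in.

```python
import cv2
import numpy as np

def pose_and_depth_from_matches(pts1, pts2, K):
    """Recover relative pose from 2D correspondences, then triangulate depth.

    pts1, pts2 : (N, 2) float arrays of matched pixel coordinates, e.g.
                 sampled from a dense optical-flow field between two frames
    K          : (3, 3) camera intrinsic matrix
    """
    E, inliers = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC)
    _, R, t, _ = cv2.recoverPose(E, pts1, pts2, K)        # pose up to scale
    P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])     # first camera at the origin
    P2 = K @ np.hstack([R, t])                            # second camera
    X = cv2.triangulatePoints(P1, P2, pts1.T, pts2.T)     # homogeneous 3D points
    depth = X[2] / X[3]                                   # z in the first camera's frame
    return R, t, depth
```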
arXiv Detail & Related papers (2021-04-01T15:31:20Z) - Robust Consistent Video Depth Estimation [65.53308117778361]
We present an algorithm for estimating consistent dense depth maps and camera poses from a monocular video.
Our algorithm combines two complementary techniques: (1) flexible deformation-splines for low-frequency large-scale alignment and (2) geometry-aware depth filtering for high-frequency alignment of fine depth details.
In contrast to prior approaches, our method does not require camera poses as input and achieves robust reconstruction for challenging hand-held cell phone captures containing a significant amount of noise, shake, motion blur, and rolling shutter deformations.
arXiv Detail & Related papers (2020-12-10T18:59:48Z) - Dual Pixel Exploration: Simultaneous Depth Estimation and Image
Restoration [77.1056200937214]
We study the formation of the DP pair which links the blur and the depth information.
We propose an end-to-end DDDNet (DP-based Depth and Deblur Network) to jointly estimate the depth and restore the image.
arXiv Detail & Related papers (2020-12-01T06:53:57Z) - Distortion-aware Monocular Depth Estimation for Omnidirectional Images [26.027353545874522]
We propose a Distortion-Aware Monocular Omnidirectional (DAMO) dense depth estimation network to address this challenge on indoor panoramas.
First, we introduce a distortion-aware module to extract calibrated semantic features from omnidirectional images.
Second, we introduce a plug-and-play spherical-aware weight matrix for our objective function to handle the uneven distribution of areas projected from a sphere.
arXiv Detail & Related papers (2020-10-18T08:47:57Z)
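The spherical-aware weighting in the last entry applies to any equirectangular depth loss, including the omnidirectional setting targeted by the dual-cubemap method: each pixel should count in proportion to the solid angle it covers on the sphere, which falls off as the cosine of latitude. A minimal sketch under that assumption (the exact matrix used by DAMO may differ):

```python
import torch

def spherical_weight_matrix(height, width, dtype=torch.float32):
    """Per-pixel weights proportional to solid angle in an equirectangular image."""
    # Latitude of each row's center, from +pi/2 (top) to -pi/2 (bottom)
    lat = (0.5 - (torch.arange(height, dtype=dtype) + 0.5) / height) * torch.pi
    row_weight = torch.cos(lat)                        # equator ~1, poles ~0
    return row_weight[:, None].expand(height, width)   # constant along each row

def weighted_depth_loss(pred, target):
    """Solid-angle-weighted L1 depth loss for (B, 1, H, W) tensors."""
    w = spherical_weight_matrix(pred.shape[-2], pred.shape[-1]).to(pred)
    return (w * (pred - target).abs()).sum() / (w.sum() * pred.shape[0])
```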