Monocular Depth Estimation with Sharp Boundary
- URL: http://arxiv.org/abs/2110.05885v1
- Date: Tue, 12 Oct 2021 10:55:12 GMT
- Title: Monocular Depth Estimation with Sharp Boundary
- Authors: Xin Yang, Qingling Chang, Xinlin Liu, and Yan Cui
- Abstract summary: The boundary blur problem is mainly caused by two factors: first, low-level features containing boundary and structure information may be lost in deeper networks during the convolution process.
Second, the model ignores the errors introduced by the boundary area during backpropagation, because the boundary occupies only a small portion of the whole image.
We propose a boundary-aware depth loss function that emphasizes errors in the depth values at boundaries.
- Score: 4.873879696568641
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Monocular depth estimation is a base task in computer vision. It has
developed tremendously over the past decade alongside deep learning, but
boundary blur in the predicted depth map remains a serious problem. Research
finds that the boundary blur problem is mainly caused by two factors: first,
the low-level features containing boundary and structure information may be
lost in deeper networks during the convolution process; second, the model
ignores the errors introduced by the boundary area during backpropagation,
because the boundary occupies only a small portion of the whole image. In
order to mitigate the boundary blur problem, we focus on the above two impact
factors. Firstly, we design a scene understanding module to learn global
information from low- and high-level features, and then transform the global
information to different scales with our proposed scale transform module
according to the different phases in the decoder. Secondly, we propose a
boundary-aware depth loss function that emphasizes errors in the depth values
at boundaries. Extensive experiments show that our method predicts depth maps
with clearer boundaries, and its depth accuracy on the NYU-Depth v2 and
SUN RGB-D datasets is competitive.
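The abstract does not spell out the form of the boundary-aware depth loss, but a common way to realize the idea is to up-weight per-pixel depth errors near depth discontinuities detected in the ground truth. The sketch below is an illustrative assumption, not the paper's actual loss: it uses the gradient magnitude of the ground-truth depth as a boundary indicator and the hypothetical weighting factor `k` to control how strongly boundary pixels count.

```python
import numpy as np

def boundary_weights(gt_depth, k=4.0, eps=1e-6):
    """Per-pixel weights: 1 on flat regions, up to 1 + k at depth boundaries."""
    gy, gx = np.gradient(gt_depth)
    mag = np.sqrt(gx ** 2 + gy ** 2)       # gradient magnitude as boundary cue
    mag = mag / (mag.max() + eps)          # normalize to [0, 1]
    return 1.0 + k * mag

def boundary_aware_l1(pred, gt, k=4.0):
    """Weighted L1 depth loss that emphasizes errors at boundary pixels."""
    w = boundary_weights(gt, k)
    return float(np.sum(w * np.abs(pred - gt)) / np.sum(w))

# Toy example: an 8x8 depth map with a single step edge.
gt = np.zeros((8, 8))
gt[:, 4:] = 1.0
pred = gt + 0.1                            # uniform error everywhere
print(round(boundary_aware_l1(pred, gt), 3))  # prints 0.1 (uniform error is unchanged)
```

With a uniform error the weighted mean equals the plain L1 loss; the weighting only matters when errors concentrate at boundaries, which is exactly the case the paper argues standard losses under-penalize.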
Related papers
- Pyramid Feature Attention Network for Monocular Depth Prediction [8.615717738037823]
We propose a Pyramid Feature Attention Network (PFANet) to improve the high-level context features and low-level spatial features.
Our method outperforms state-of-the-art methods on the KITTI dataset.
arXiv Detail & Related papers (2024-03-03T08:33:23Z) - Non-parametric Depth Distribution Modelling based Depth Inference for
Multi-view Stereo [43.415242967722804]
Recent cost volume pyramid based deep neural networks have unlocked the potential of efficiently leveraging high-resolution images for depth inference from multi-view stereo.
In general, those approaches assume that the depth of each pixel follows a unimodal distribution.
We propose constructing the cost volume by non-parametric depth distribution modeling to handle pixels with unimodal and multi-modal distributions.
arXiv Detail & Related papers (2022-05-08T05:13:04Z) - RGB-Depth Fusion GAN for Indoor Depth Completion [29.938869342958125]
In this paper, we design a novel two-branch end-to-end fusion network, which takes a pair of RGB and incomplete depth images as input to predict a dense and completed depth map.
In one branch, we propose an RGB-depth fusion GAN to transfer the RGB image to the fine-grained textured depth map.
In the other branch, we adopt adaptive fusion modules named W-AdaIN to propagate the features across the two branches.
arXiv Detail & Related papers (2022-03-21T10:26:38Z) - Weakly-Supervised Monocular Depth Estimation with Resolution-Mismatched
Data [73.9872931307401]
We propose a novel weakly-supervised framework to train a monocular depth estimation network.
The proposed framework is composed of a shared-weight monocular depth estimation network and a depth reconstruction network for distillation.
Experimental results demonstrate that our method achieves superior performance compared to unsupervised and semi-supervised learning based schemes.
arXiv Detail & Related papers (2021-09-23T18:04:12Z) - BridgeNet: A Joint Learning Network of Depth Map Super-Resolution and
Monocular Depth Estimation [60.34562823470874]
We propose a joint learning network of depth map super-resolution (DSR) and monocular depth estimation (MDE) without introducing additional supervision labels.
One is the high-frequency attention bridge (HABdg) designed for the feature encoding process, which learns the high-frequency information of the MDE task to guide the DSR task.
The other is the content guidance bridge (CGBdg) designed for the depth map reconstruction process, which provides the content guidance learned from DSR task for MDE task.
arXiv Detail & Related papers (2021-07-27T01:28:23Z) - Deep Two-View Structure-from-Motion Revisited [83.93809929963969]
Two-view structure-from-motion (SfM) is the cornerstone of 3D reconstruction and visual SLAM.
We propose to revisit the problem of deep two-view SfM by leveraging the well-posedness of the classic pipeline.
Our method consists of 1) an optical flow estimation network that predicts dense correspondences between two frames; 2) a normalized pose estimation module that computes relative camera poses from the 2D optical flow correspondences, and 3) a scale-invariant depth estimation network that leverages epipolar geometry to reduce the search space, refine the dense correspondences, and estimate relative depth maps.
arXiv Detail & Related papers (2021-04-01T15:31:20Z) - Boundary-induced and scene-aggregated network for monocular depth
prediction [20.358133522462513]
We propose the Boundary-induced and Scene-aggregated network (BS-Net) to predict the dense depth of a single RGB image.
Several experimental results on the NYUD v2 dataset and the iBims-1 dataset illustrate the state-of-the-art performance of the proposed approach.
arXiv Detail & Related papers (2021-02-26T01:43:17Z) - Multi-Scale Progressive Fusion Learning for Depth Map Super-Resolution [11.072332820377612]
The resolution of depth map collected by depth camera is often lower than that of its associated RGB camera.
A major problem with depth map super-resolution is that there will be obvious jagged edges and excessive loss of details.
We propose a multi-scale progressive fusion network for depth map SR, which possesses a structure to integrate hierarchical features across different domains.
arXiv Detail & Related papers (2020-11-24T03:03:07Z) - Accurate RGB-D Salient Object Detection via Collaborative Learning [101.82654054191443]
RGB-D saliency detection shows impressive ability on some challenging scenarios.
We propose a novel collaborative learning framework where edge, depth and saliency are leveraged in a more efficient way.
arXiv Detail & Related papers (2020-07-23T04:33:36Z) - Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z) - DPANet: Depth Potentiality-Aware Gated Attention Network for RGB-D
Salient Object Detection [107.96418568008644]
We propose a novel network named DPANet to explicitly model the potentiality of the depth map and effectively integrate the cross-modal complementarity.
By introducing the depth potentiality perception, the network can perceive the potentiality of depth information in a learning-based manner.
arXiv Detail & Related papers (2020-03-19T07:27:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.