Boundary-induced and scene-aggregated network for monocular depth
prediction
- URL: http://arxiv.org/abs/2102.13258v1
- Date: Fri, 26 Feb 2021 01:43:17 GMT
- Title: Boundary-induced and scene-aggregated network for monocular depth
prediction
- Authors: Feng Xue and Junfeng Cao and Yu Zhou and Fei Sheng and Yankai Wang and
Anlong Ming
- Abstract summary: We propose the Boundary-induced and Scene-aggregated network (BS-Net) to predict the dense depth of a single RGB image.
Several experimental results on the NYUD v2 dataset and the iBims-1 dataset illustrate the state-of-the-art performance of the proposed approach.
- Score: 20.358133522462513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Monocular depth prediction is an important task in scene understanding. It
aims to predict the dense depth of a single RGB image. With the development of
deep learning, the performance of this task has made great improvements.
However, two issues remain unresolved: (1) the deep features encode the wrong
farthest region in a scene, which distorts the 3D structure of the predicted
depth; (2) the low-level features are insufficiently utilized, which makes it
even harder to estimate depth near edges with sudden depth changes. To tackle
these two issues, we propose the Boundary-induced and
Scene-aggregated network (BS-Net). In this network, the Depth Correlation
Encoder (DCE) is first designed to obtain the contextual correlations between
the regions in an image, and perceive the farthest region by considering the
correlations. Meanwhile, the Bottom-Up Boundary Fusion (BUBF) module is
designed to extract accurate boundaries that indicate depth change. Finally, the
Stripe Refinement module (SRM) is designed to refine the dense depth induced by
the boundary cue, which improves the boundary accuracy of the predicted depth.
Several experimental results on the NYUD v2 dataset and the iBims-1 dataset
illustrate the state-of-the-art performance of the proposed approach.
In addition, the SUN RGB-D dataset is employed to evaluate the generalization of our
method. Code is available at https://github.com/XuefengBUPT/BS-Net.
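
As a rough illustration of how the three modules described in the abstract could compose, below is a minimal PyTorch sketch of the BS-Net data flow. The module internals here (attention for DCE, 1xk/kx1 stripe convolutions for SRM) are assumptions for illustration only, not the authors' implementation; see the repository above for the real code.

```python
# Structural sketch of the BS-Net data flow; internals are illustrative
# placeholders, NOT the authors' code (see github.com/XuefengBUPT/BS-Net).
import torch
import torch.nn as nn
import torch.nn.functional as F

class DCE(nn.Module):
    """Depth Correlation Encoder: relates image regions to each other."""
    def __init__(self, ch):
        super().__init__()
        self.attn = nn.MultiheadAttention(ch, num_heads=4, batch_first=True)

    def forward(self, feat):                      # feat: (B, C, H, W)
        b, c, h, w = feat.shape
        tokens = feat.flatten(2).transpose(1, 2)  # (B, H*W, C) region tokens
        ctx, _ = self.attn(tokens, tokens, tokens)
        return ctx.transpose(1, 2).reshape(b, c, h, w)

class BUBF(nn.Module):
    """Bottom-Up Boundary Fusion: fuses low-level features into a boundary map."""
    def __init__(self, chs):
        super().__init__()
        self.proj = nn.ModuleList(nn.Conv2d(c, 32, 1) for c in chs)
        self.head = nn.Conv2d(32, 1, 3, padding=1)

    def forward(self, low_feats):
        size = low_feats[0].shape[-2:]
        fused = sum(F.interpolate(p(f), size=size, mode="bilinear",
                                  align_corners=False)
                    for p, f in zip(self.proj, low_feats))
        return torch.sigmoid(self.head(fused))    # boundary probability map

class SRM(nn.Module):
    """Stripe Refinement Module: refines dense depth guided by the boundary cue."""
    def __init__(self, ch):
        super().__init__()
        self.h = nn.Conv2d(ch + 1, ch, (1, 9), padding=(0, 4))  # horizontal stripe
        self.v = nn.Conv2d(ch, ch, (9, 1), padding=(4, 0))      # vertical stripe
        self.out = nn.Conv2d(ch, 1, 1)

    def forward(self, feat, boundary):
        x = torch.cat([feat, boundary], dim=1)
        return self.out(self.v(self.h(x)))        # refined dense depth

feat = torch.randn(1, 64, 30, 40)                 # deep encoder feature (assumed)
lows = [torch.randn(1, 16, 30, 40), torch.randn(1, 32, 30, 40)]
depth = SRM(64)(DCE(64)(feat), BUBF([16, 32])(lows))
print(depth.shape)                                # torch.Size([1, 1, 30, 40])
```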
Related papers
- Progressive Depth Decoupling and Modulating for Flexible Depth Completion [28.693100885012008]
Image-guided depth completion aims at generating a dense depth map from sparse LiDAR data and RGB image.
Recent methods have shown promising performance by reformulating it as a classification problem with two sub-tasks: depth discretization and probability prediction.
We propose a progressive depth decoupling and modulating network, which incrementally decouples the depth range into bins and adaptively generates multi-scale dense depth maps.
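
The classification reformulation mentioned above is commonly realized by discretizing the depth range into bins and recovering continuous depth as a probability-weighted average of bin centers (a soft-argmax). A minimal sketch of that generic idea, not the paper's progressive decoupling network; the depth range and bin count are assumed values:

```python
# Generic depth-as-classification: per-pixel distribution over K depth bins,
# decoded to continuous depth via the probability-weighted bin center.
import torch

def depth_from_bin_logits(logits, d_min=0.5, d_max=10.0):
    """logits: (B, K, H, W) per-pixel scores over K depth bins (range assumed)."""
    k = logits.shape[1]
    centers = torch.linspace(d_min, d_max, k, device=logits.device)  # bin centers
    probs = logits.softmax(dim=1)                 # per-pixel distribution over bins
    return (probs * centers.view(1, k, 1, 1)).sum(dim=1)  # (B, H, W) depth

depth = depth_from_bin_logits(torch.randn(2, 64, 30, 40))
print(depth.shape)                                # torch.Size([2, 30, 40])
```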
arXiv Detail & Related papers (2024-05-15T13:45:33Z)
- RGB-Depth Fusion GAN for Indoor Depth Completion [29.938869342958125]
In this paper, we design a novel two-branch end-to-end fusion network, which takes a pair of RGB and incomplete depth images as input to predict a dense and completed depth map.
In one branch, we propose an RGB-depth fusion GAN to transfer the RGB image to the fine-grained textured depth map.
In the other branch, we adopt adaptive fusion modules named W-AdaIN to propagate the features across the two branches.
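
W-AdaIN is the paper's own weighted variant, but the standard building block it extends, plain AdaIN, re-normalizes one branch's features with the channel-wise statistics of another. A hedged sketch of that standard operation only:

```python
# Plain AdaIN as a reference point for cross-branch feature propagation;
# this is NOT the paper's W-AdaIN module, only the standard operation.
import torch

def adain(content, style, eps=1e-5):
    """Re-normalize content features with style's channel-wise statistics."""
    c_mu = content.mean(dim=(2, 3), keepdim=True)
    c_std = content.std(dim=(2, 3), keepdim=True) + eps
    s_mu = style.mean(dim=(2, 3), keepdim=True)
    s_std = style.std(dim=(2, 3), keepdim=True) + eps
    return s_std * (content - c_mu) / c_std + s_mu

depth_feat = torch.randn(1, 64, 30, 40)           # hypothetical depth-branch feature
rgb_feat = torch.randn(1, 64, 30, 40)             # hypothetical RGB-branch feature
print(adain(depth_feat, rgb_feat).shape)          # RGB statistics -> depth branch
```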
arXiv Detail & Related papers (2022-03-21T10:26:38Z)
- Monocular Depth Estimation with Sharp Boundary [4.873879696568641]
The boundary blur problem is mainly caused by two factors: first, the low-level features containing boundary and structure information may be lost in deeper networks during the convolution process.
Second, the model ignores errors from the boundary area during backpropagation, because boundary pixels make up only a small fraction of the image.
We propose a boundary-aware depth loss function that attends to the depth values in boundary regions.
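
A common way to realize such a loss is to upweight pixels near ground-truth depth discontinuities so the few boundary pixels are not swamped by the smooth interior. The weighting and threshold below are illustrative assumptions, not the paper's exact formulation:

```python
# Hedged sketch of a boundary-aware depth loss: upweight the L1 error at
# pixels with large ground-truth depth gradients (depth discontinuities).
import torch
import torch.nn.functional as F

def boundary_aware_l1(pred, gt, w_boundary=5.0, thresh=0.1):
    """pred, gt: (B, 1, H, W) depth maps; weight and threshold are assumed."""
    gx = (gt[..., :, 1:] - gt[..., :, :-1]).abs()     # horizontal depth gradient
    gy = (gt[..., 1:, :] - gt[..., :-1, :]).abs()     # vertical depth gradient
    edge = (F.pad(gx, (0, 1)) + F.pad(gy, (0, 0, 0, 1))) > thresh
    weight = 1.0 + (w_boundary - 1.0) * edge.float()  # upweight boundary pixels
    return (weight * (pred - gt).abs()).mean()

pred, gt = torch.rand(2, 1, 30, 40) * 10, torch.rand(2, 1, 30, 40) * 10
print(boundary_aware_l1(pred, gt).item())
```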
arXiv Detail & Related papers (2021-10-12T10:55:12Z)
- Depth Completion using Plane-Residual Representation [84.63079529738924]
We introduce a novel way of interpreting depth information with the closest depth plane label $p$ and a residual value $r$, which we call the Plane-Residual (PR) representation.
By interpreting depth information in PR representation and using our corresponding depth completion network, we were able to acquire improved depth completion performance with faster computation.
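
In this representation a depth value decomposes into a discrete plane label plus a continuous residual, and decoding is exact by construction. A small sketch assuming uniformly spaced planes (the paper may place its planes differently):

```python
# Plane-Residual (PR) encode/decode: depth = planes[p] + r, where p is the
# closest plane label and r the signed residual. Plane placement is assumed.
import torch

planes = torch.linspace(0.5, 10.0, 8)             # 8 uniform depth planes (assumed)

def encode_pr(depth):
    p = (depth.unsqueeze(-1) - planes).abs().argmin(dim=-1)  # closest plane label
    return p, depth - planes[p]                              # label, signed residual

def decode_pr(p, r):
    return planes[p] + r                          # reconstruction is exact

depth = torch.tensor([[0.9, 3.2], [7.7, 9.6]])
p, r = encode_pr(depth)
assert torch.allclose(decode_pr(p, r), depth)
print(p)                                          # per-pixel plane labels
```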
arXiv Detail & Related papers (2021-04-15T10:17:53Z)
- Sparse Auxiliary Networks for Unified Monocular Depth Prediction and Completion [56.85837052421469]
Estimating scene geometry from data obtained with cost-effective sensors is key for robots and self-driving cars.
In this paper, we study the problem of predicting dense depth from a single RGB image with optional sparse measurements from low-cost active depth sensors.
We introduce Sparse Networks (SANs), a new module enabling monodepth networks to perform both the tasks of depth prediction and completion.
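
One generic way to unify the two tasks is to feed a single network RGB plus an optional sparse-depth channel and validity mask, zeroed out when no measurements exist. SAN's sparse auxiliary branch is the paper's own design; the sketch below only illustrates this shared input formulation:

```python
# Generic unified prediction/completion input: same weights serve both modes.
import torch
import torch.nn as nn

net = nn.Sequential(                              # toy stand-in for a monodepth net
    nn.Conv2d(3 + 2, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 1, 3, padding=1),
)

def predict(rgb, sparse_depth=None):
    if sparse_depth is None:                      # prediction mode: no measurements
        sparse_depth = torch.zeros_like(rgb[:, :1])
    valid = (sparse_depth > 0).float()            # validity mask of measured pixels
    return net(torch.cat([rgb, sparse_depth, valid], dim=1))

rgb = torch.rand(1, 3, 30, 40)
sparse = torch.rand(1, 1, 30, 40) * (torch.rand(1, 1, 30, 40) > 0.95)  # ~5% hits
print(predict(rgb).shape, predict(rgb, sparse).shape)
```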
arXiv Detail & Related papers (2021-03-30T21:22:26Z)
- PLADE-Net: Towards Pixel-Level Accuracy for Self-Supervised Single-View Depth Estimation with Neural Positional Encoding and Distilled Matting Loss [49.66736599668501]
We propose PLADE-Net, a self-supervised single-view depth estimation network with pixel-level accuracy.
Our method shows unprecedented accuracy levels, exceeding 95% in terms of the $\delta_1$ metric on the KITTI dataset.
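
The $\delta_1$ metric referenced here is the standard threshold accuracy: the fraction of pixels whose predicted-to-true depth ratio lies within a factor of 1.25 in either direction. A minimal implementation:

```python
# Standard delta_1 threshold accuracy used on KITTI/NYU depth benchmarks.
import torch

def delta1(pred, gt, thresh=1.25):
    """Fraction of pixels with max(pred/gt, gt/pred) < thresh; depths must be > 0."""
    ratio = torch.maximum(pred / gt, gt / pred)
    return (ratio < thresh).float().mean().item()

gt = torch.rand(1, 1, 30, 40) * 10 + 0.1
pred = (gt * (1 + 0.1 * torch.randn_like(gt))).clamp(min=0.05)  # ~10% noisy prediction
print(delta1(pred, gt))                           # close to 1.0 for mild noise
```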
arXiv Detail & Related papers (2021-03-12T15:54:46Z) - Direct Depth Learning Network for Stereo Matching [79.3665881702387]
A novel Direct Depth Learning Network (DDL-Net) is designed for stereo matching.
DDL-Net consists of two stages: the Coarse Depth Estimation stage and the Adaptive-Grained Depth Refinement stage.
We show that DDL-Net achieves an average improvement of 25% on the SceneFlow dataset and 12% on the DrivingStereo dataset.
arXiv Detail & Related papers (2020-12-10T10:33:57Z) - Deep Multi-view Depth Estimation with Predicted Uncertainty [11.012201499666503]
We employ a dense-optical-flow network to compute correspondences and then triangulate the point cloud to obtain an initial depth map.
To further increase the triangulation accuracy, we introduce a depth-refinement network (DRN) that optimizes the initial depth map based on the image's contextual cues.
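
In the rectified two-view special case, triangulating a flow correspondence reduces to depth = focal * baseline / disparity, with disparity being the horizontal flow component. The paper handles general multi-view geometry; this is only the simplest instance, and the camera parameters below are assumed values:

```python
# Simplest triangulation instance: depth from disparity in a rectified pair.
import torch

def depth_from_disparity(flow_x, focal=525.0, baseline=0.075, eps=1e-6):
    """flow_x: (H, W) horizontal flow between rectified views, in pixels."""
    return focal * baseline / flow_x.abs().clamp(min=eps)

flow_x = torch.full((30, 40), 4.0)                # uniform 4-pixel disparity
print(depth_from_disparity(flow_x)[0, 0])         # tensor(9.8438), in metres
```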
arXiv Detail & Related papers (2020-11-19T00:22:09Z) - Accurate RGB-D Salient Object Detection via Collaborative Learning [101.82654054191443]
RGB-D saliency detection shows impressive ability on some challenging scenarios.
We propose a novel collaborative learning framework where edge, depth and saliency are leveraged in a more efficient way.
arXiv Detail & Related papers (2020-07-23T04:33:36Z) - Guiding Monocular Depth Estimation Using Depth-Attention Volume [38.92495189498365]
We propose guiding depth estimation to favor planar structures, which are ubiquitous especially in indoor environments.
Experiments on two popular indoor datasets, NYU-Depth-v2 and ScanNet, show that our method achieves state-of-the-art depth estimation results.
arXiv Detail & Related papers (2020-04-06T15:45:52Z) - Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)
This list is automatically generated from the titles and abstracts of the papers on this site.