PENet: Towards Precise and Efficient Image Guided Depth Completion
- URL: http://arxiv.org/abs/2103.00783v2
- Date: Thu, 4 Mar 2021 02:27:07 GMT
- Title: PENet: Towards Precise and Efficient Image Guided Depth Completion
- Authors: Mu Hu, Shuling Wang, Bin Li, Shiyu Ning, Li Fan, and Xiaojin Gong
- Abstract summary: How to fuse the color and depth modalities plays an important role in achieving good performance.
This paper proposes a two-branch backbone that consists of a color-dominant branch and a depth-dominant branch.
The proposed full model ranks 1st in the KITTI depth completion online leaderboard at the time of submission.
- Score: 11.162415111320625
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Image guided depth completion is the task of generating a dense depth map
from a sparse depth map and a high quality image. In this task, how to fuse the
color and depth modalities plays an important role in achieving good
performance. This paper proposes a two-branch backbone that consists of a
color-dominant branch and a depth-dominant branch to exploit and fuse two
modalities thoroughly. More specifically, one branch inputs a color image and a
sparse depth map to predict a dense depth map. The other branch takes as inputs
the sparse depth map and the previously predicted depth map, and outputs a
dense depth map as well. The depth maps predicted from the two branches are
complementary to each other and are therefore adaptively fused. In addition,
we propose a simple geometric convolutional layer to encode 3D
geometric cues. The geometric encoded backbone conducts the fusion of different
modalities at multiple stages, leading to good depth completion results. We
further implement a dilated and accelerated CSPN++ to refine the fused depth
map efficiently. The proposed full model ranks 1st in the KITTI depth
completion online leaderboard at the time of submission. It also infers much
faster than most of the top-ranked methods. The code of this work will be
available at https://github.com/JUGGHM/PENet_ICRA2021.
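As a rough illustration of the adaptive fusion idea, the sketch below (a hypothetical simplification with names of our own choosing, not the paper's implementation) blends the two branch predictions using per-pixel confidence weights obtained by a softmax:

```python
import numpy as np

def fuse_depths(d_cd, d_dd, conf_cd, conf_dd):
    """Fuse two dense depth predictions with per-pixel confidences.

    d_cd, d_dd : (H, W) depth maps from the color-dominant and
                 depth-dominant branches (hypothetical names).
    conf_cd, conf_dd : (H, W) raw confidence logits predicted
                 alongside each depth map.
    """
    # Softmax over the two confidences yields per-pixel fusion weights.
    logits = np.stack([conf_cd, conf_dd])           # (2, H, W)
    logits -= logits.max(axis=0, keepdims=True)     # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=0, keepdims=True)
    return w[0] * d_cd + w[1] * d_dd

# Toy example: where one branch is far more confident, its value wins.
d1 = np.full((2, 2), 10.0)
d2 = np.full((2, 2), 20.0)
c1 = np.zeros((2, 2))
c2 = np.full((2, 2), 100.0)   # overwhelming confidence for branch 2
fused = fuse_depths(d1, d2, c1, c2)
```

With equal confidences the fusion reduces to a plain average; as the confidence gap grows, the more confident branch dominates.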
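The geometric convolutional layer encodes 3D cues by augmenting features with per-pixel 3D positions. A minimal sketch of the underlying back-projection, assuming a pinhole camera with known intrinsics (the function name and interface are ours, not the paper's):

```python
import numpy as np

def backproject(depth, fx, fy, cx, cy):
    """Back-project a depth map into per-pixel 3D coordinates (X, Y, Z).

    Returns a (3, H, W) position map that a geometry-aware layer could
    concatenate with image features before applying a convolution.
    The intrinsics (fx, fy, cx, cy) are assumed known.
    """
    h, w = depth.shape
    v, u = np.mgrid[0:h, 0:w].astype(float)   # pixel row / column grids
    x = (u - cx) / fx * depth                  # pinhole camera model
    y = (v - cy) / fy * depth
    return np.stack([x, y, depth])

# Flat depth plane at 2 m with the principal point at the image center.
pos = backproject(np.full((4, 4), 2.0), fx=1.0, fy=1.0, cx=2.0, cy=2.0)
```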
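CSPN-style refinement repeatedly updates each pixel's depth with an affinity-weighted combination of its neighbors. A simplified single propagation step, ignoring the dilation and acceleration tricks of CSPN++, might look like:

```python
import numpy as np

def cspn_step(depth, affinity):
    """One step of CSPN-style spatial propagation (simplified sketch).

    depth    : (H, W) current depth estimate.
    affinity : (8, H, W) learned weights for the 8-connected
               neighborhood; here we normalize them per pixel and
               assign the residual weight to the center pixel.
    """
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
               (0, 1), (1, -1), (1, 0), (1, 1)]
    a = np.abs(affinity)
    norm = a.sum(axis=0) + 1e-8
    a = a / np.maximum(norm, 1.0)      # neighbor weights sum to <= 1
    center_w = 1.0 - a.sum(axis=0)     # residual weight on the center
    out = center_w * depth
    for k, (dy, dx) in enumerate(offsets):
        shifted = np.roll(np.roll(depth, dy, axis=0), dx, axis=1)
        out += a[k] * shifted
    return out

# Sanity check: a constant depth map is a fixed point of the update.
refined = cspn_step(np.full((3, 3), 5.0), np.ones((8, 3, 3)))
```

Since the center and neighbor weights form a convex combination, repeated steps smooth the depth map while respecting the learned affinities.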
Related papers
- GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves the state-of-the-art performance, especially when compared in the case of using only a few propagation steps.
arXiv Detail & Related papers (2022-10-19T17:56:03Z) - SemAttNet: Towards Attention-based Semantic Aware Guided Depth Completion [12.724769241831396]
We propose a novel three-branch backbone comprising color-guided, semantic-guided, and depth-guided branches.
The dense depth map predicted by the color-guided branch, along with the semantic image and the sparse depth map, is passed as input to the semantic-guided branch.
The depth-guided branch takes sparse, color, and semantic depths to generate the dense depth map.
arXiv Detail & Related papers (2022-04-28T16:53:25Z) - P3Depth: Monocular Depth Estimation with a Piecewise Planarity Prior [133.76192155312182]
We propose a method that learns to selectively leverage information from coplanar pixels to improve the predicted depth.
An extensive evaluation of our method shows that we set the new state of the art in supervised monocular depth estimation.
arXiv Detail & Related papers (2022-04-05T10:03:52Z) - RGB-Depth Fusion GAN for Indoor Depth Completion [29.938869342958125]
In this paper, we design a novel two-branch end-to-end fusion network, which takes a pair of RGB and incomplete depth images as input to predict a dense and completed depth map.
In one branch, we propose an RGB-depth fusion GAN that translates the RGB image into a fine-grained textured depth map.
In the other branch, we adopt adaptive fusion modules named W-AdaIN to propagate the features across the two branches.
arXiv Detail & Related papers (2022-03-21T10:26:38Z) - Confidence Guided Depth Completion Network [3.8998241153792454]
The paper proposes an image-guided depth completion method to estimate accurate dense depth maps with fast computation time.
Compared with the top-ranked models on the KITTI depth completion online leaderboard, the proposed model shows much faster computation time and competitive performance.
arXiv Detail & Related papers (2022-02-07T14:57:28Z) - BridgeNet: A Joint Learning Network of Depth Map Super-Resolution and Monocular Depth Estimation [60.34562823470874]
We propose a joint learning network of depth map super-resolution (DSR) and monocular depth estimation (MDE) without introducing additional supervision labels.
One is the high-frequency attention bridge (HABdg) designed for the feature encoding process, which learns the high-frequency information of the MDE task to guide the DSR task.
The other is the content guidance bridge (CGBdg) designed for the depth map reconstruction process, which provides the content guidance learned from DSR task for MDE task.
arXiv Detail & Related papers (2021-07-27T01:28:23Z) - Sparse Auxiliary Networks for Unified Monocular Depth Prediction and Completion [56.85837052421469]
Estimating scene geometry from data obtained with cost-effective sensors is key for robots and self-driving cars.
In this paper, we study the problem of predicting dense depth from a single RGB image with optional sparse measurements from low-cost active depth sensors.
We introduce Sparse Networks (SANs), a new module enabling monodepth networks to perform both the tasks of depth prediction and completion.
arXiv Detail & Related papers (2021-03-30T21:22:26Z) - Learning Joint 2D-3D Representations for Depth Completion [90.62843376586216]
We design a simple yet effective neural network block that learns to extract joint 2D and 3D features.
Specifically, the block consists of two domain-specific sub-networks that apply 2D convolution on image pixels and continuous convolution on 3D points.
arXiv Detail & Related papers (2020-12-22T22:58:29Z) - FCFR-Net: Feature Fusion based Coarse-to-Fine Residual Learning for Monocular Depth Completion [15.01291779855834]
Recent approaches mainly formulate the depth completion as a one-stage end-to-end learning task.
We propose a novel end-to-end residual learning framework, which formulates the depth completion as a two-stage learning task.
arXiv Detail & Related papers (2020-12-15T13:09:56Z) - Efficient Depth Completion Using Learned Bases [94.0808155168311]
We propose a new global geometry constraint for depth completion.
By assuming that depth maps often lie on low-dimensional subspaces, a dense depth map can be approximated by a weighted sum of full-resolution principal depth bases.
arXiv Detail & Related papers (2020-12-02T11:57:37Z)
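The idea of approximating a dense depth map as a weighted sum of learned bases can be sketched as a least-squares fit of basis weights to the sparse measurements (a toy illustration under our own assumptions, not the paper's method):

```python
import numpy as np

def complete_with_bases(bases, sparse_idx, sparse_vals):
    """Approximate a dense depth map as a weighted sum of depth bases.

    bases      : (K, H*W) full-resolution principal depth bases,
                 e.g. obtained by PCA over training depth maps.
    sparse_idx : indices of pixels with known depth.
    sparse_vals: the known depth values at those pixels.
    Solves least squares for the K basis weights, then reconstructs.
    """
    A = bases[:, sparse_idx].T                      # (n_sparse, K)
    w, *_ = np.linalg.lstsq(A, sparse_vals, rcond=None)
    return w @ bases                                # dense (H*W,) map

# Toy check: a depth map that truly lies in the span of two bases
# is recovered from a handful of sparse samples.
rng = np.random.default_rng(0)
bases = rng.normal(size=(2, 100))
true = 3.0 * bases[0] - 1.5 * bases[1]
idx = np.arange(0, 100, 10)
dense = complete_with_bases(bases, idx, true[idx])
```

Constraining the solution to a low-dimensional basis is what makes the completion efficient: only K weights are estimated instead of one value per pixel.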
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.