SGTBN: Generating Dense Depth Maps from Single-Line LiDAR
- URL: http://arxiv.org/abs/2106.12994v1
- Date: Thu, 24 Jun 2021 13:08:35 GMT
- Title: SGTBN: Generating Dense Depth Maps from Single-Line LiDAR
- Authors: Hengjie Lu, Shugong Xu, Shan Cao
- Abstract summary: Current depth completion methods use extremely expensive 64-line LiDAR to obtain sparse depth maps.
Compared with the 64-line LiDAR, the single-line LiDAR is much less expensive and much more robust.
A single-line depth completion dataset is proposed based on the existing 64-line depth completion dataset.
- Score: 13.58227120045849
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Depth completion aims to generate a dense depth map from a sparse
depth map and an aligned RGB image. However, current depth completion methods
use extremely expensive 64-line LiDAR (about $100,000) to obtain the sparse
depth maps, which limits their application scenarios. Compared with 64-line
LiDAR, single-line LiDAR is much less expensive and much more robust. We
therefore tackle the problem of single-line depth completion: generating a
dense depth map from single-line LiDAR information and the aligned RGB image.
A single-line depth completion dataset is built from the existing 64-line
depth completion dataset (KITTI). For this task we propose the Semantic Guided
Two-Branch Network (SGTBN), which contains global and local branches to
extract and fuse global and local information. A semantic guided depth
upsampling module makes full use of the semantic information in the RGB image.
In addition to the usual MSE loss, we add a virtual normal loss to strengthen
the constraint on high-order 3D geometry. Our network outperforms the state of
the art on the single-line depth completion task. Moreover, compared with
monocular depth estimation, our method has significant advantages in precision
and model size.
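The training objective described above pairs a pixel-wise MSE term with a
virtual normal term that compares surface normals of virtual planes spanned by
randomly sampled point triplets from the back-projected point clouds. The
following is a minimal PyTorch sketch of that combination, not the authors'
implementation; the KITTI-like intrinsics, the triplet count, and the weight
lam are illustrative assumptions, and sparse ground truth would additionally
require a validity mask and filtering of degenerate triplets.

import torch
import torch.nn.functional as F

def backproject(depth, fx, fy, cx, cy):
    # Lift a depth map (B, H, W) to a point cloud (B, H*W, 3) with a pinhole model.
    b, h, w = depth.shape
    v, u = torch.meshgrid(torch.arange(h, device=depth.device),
                          torch.arange(w, device=depth.device), indexing="ij")
    x = (u.float() - cx) * depth / fx
    y = (v.float() - cy) * depth / fy
    return torch.stack((x, y, depth), dim=-1).reshape(b, -1, 3)

def virtual_normal_loss(pred, gt, fx, fy, cx, cy, num_triplets=1000):
    # L1 distance between normals of virtual planes spanned by random point triplets.
    pts_pred = backproject(pred, fx, fy, cx, cy)
    pts_gt = backproject(gt, fx, fy, cx, cy)
    idx = torch.randint(0, pts_pred.shape[1], (num_triplets, 3), device=pred.device)

    def normals(pts):
        p0, p1, p2 = pts[:, idx[:, 0]], pts[:, idx[:, 1]], pts[:, idx[:, 2]]
        return F.normalize(torch.cross(p1 - p0, p2 - p0, dim=-1), dim=-1)

    return (normals(pts_pred) - normals(pts_gt)).abs().mean()

def depth_loss(pred, gt, fx=721.5, fy=721.5, cx=609.6, cy=172.9, lam=1.0):
    # MSE term plus a weighted virtual normal term; intrinsics and lam are placeholders.
    return F.mse_loss(pred, gt) + lam * virtual_normal_loss(pred, gt, fx, fy, cx, cy)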
Related papers
- SteeredMarigold: Steering Diffusion Towards Depth Completion of Largely Incomplete Depth Maps [3.399289369740637]
SteeredMarigold is a training-free, zero-shot depth completion method.
It produces metric dense depth even for largely incomplete depth maps.
Our code will be publicly available.
arXiv Detail & Related papers (2024-09-16T11:52:13Z)
- RDFC-GAN: RGB-Depth Fusion CycleGAN for Indoor Depth Completion [28.634851863097953]
We propose a novel two-branch end-to-end fusion network named RDFC-GAN.
It takes a pair of RGB and incomplete depth images as input to predict a dense and completed depth map.
The first branch employs an encoder-decoder structure that adheres to the Manhattan world assumption.
The other branch applies an RGB-depth fusion CycleGAN, adept at translating RGB imagery into detailed, textured depth maps.
arXiv Detail & Related papers (2023-06-06T11:03:05Z)
- RGB-Depth Fusion GAN for Indoor Depth Completion [29.938869342958125]
In this paper, we design a novel two-branch end-to-end fusion network, which takes a pair of RGB and incomplete depth images as input to predict a dense and completed depth map.
In one branch, we propose an RGB-depth fusion GAN that translates the RGB image into a fine-grained, textured depth map.
In the other branch, we adopt adaptive fusion modules named W-AdaIN to propagate the features across the two branches.
arXiv Detail & Related papers (2022-03-21T10:26:38Z)
- BridgeNet: A Joint Learning Network of Depth Map Super-Resolution and Monocular Depth Estimation [60.34562823470874]
We propose a joint learning network of depth map super-resolution (DSR) and monocular depth estimation (MDE) without introducing additional supervision labels. The two tasks are connected by two bridges.
One is the high-frequency attention bridge (HABdg) designed for the feature encoding process, which learns the high-frequency information of the MDE task to guide the DSR task.
The other is the content guidance bridge (CGBdg) designed for the depth map reconstruction process, which provides the content guidance learned from DSR task for MDE task.
arXiv Detail & Related papers (2021-07-27T01:28:23Z)
- DVMN: Dense Validity Mask Network for Depth Completion [0.0]
We develop a guided convolutional neural network focusing on gathering dense and valid information from sparse depth maps.
We evaluate our Dense Validity Mask Network (DVMN) on the KITTI depth completion benchmark and achieve state-of-the-art results.
arXiv Detail & Related papers (2021-07-14T13:57:44Z)
- Towards Fast and Accurate Real-World Depth Super-Resolution: Benchmark Dataset and Baseline [48.69396457721544]
We build a large-scale dataset named "RGB-D-D" to promote the study of depth map super-resolution (SR).
We provide a fast depth map super-resolution (FDSR) baseline, in which the high-frequency component adaptively decomposed from the RGB image guides the depth map SR.
For real-world LR depth maps, our algorithm can produce more accurate HR depth maps with clearer boundaries and, to some extent, correct the depth value errors.
arXiv Detail & Related papers (2021-04-13T13:27:26Z)
- Sparse Auxiliary Networks for Unified Monocular Depth Prediction and Completion [56.85837052421469]
Estimating scene geometry from data obtained with cost-effective sensors is key for robots and self-driving cars.
In this paper, we study the problem of predicting dense depth from a single RGB image with optional sparse measurements from low-cost active depth sensors.
We introduce Sparse Auxiliary Networks (SANs), a new module enabling monodepth networks to perform both depth prediction and completion.
arXiv Detail & Related papers (2021-03-30T21:22:26Z)
- Efficient Depth Completion Using Learned Bases [94.0808155168311]
We propose a new global geometry constraint for depth completion.
By assuming that depth maps often lie on low-dimensional subspaces, a dense depth map can be approximated by a weighted sum of full-resolution principal depth bases (see the sketch after this list).
arXiv Detail & Related papers (2020-12-02T11:57:37Z)
- SelfDeco: Self-Supervised Monocular Depth Completion in Challenging Indoor Environments [50.761917113239996]
We present a novel algorithm for self-supervised monocular depth completion.
Our approach is based on training a neural network that requires only sparse depth measurements and corresponding monocular video sequences without dense depth labels.
Our self-supervised algorithm is designed for challenging indoor environments with textureless regions, glossy and transparent surfaces, non-Lambertian surfaces, moving people, long and diverse depth ranges, and scenes captured by complex ego-motions.
arXiv Detail & Related papers (2020-11-10T08:55:07Z)
- Accurate RGB-D Salient Object Detection via Collaborative Learning [101.82654054191443]
RGB-D saliency detection shows impressive ability in some challenging scenarios.
We propose a novel collaborative learning framework where edge, depth and saliency are leveraged in a more efficient way.
arXiv Detail & Related papers (2020-07-23T04:33:36Z)
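For the "Efficient Depth Completion Using Learned Bases" entry above, the core
claim is that a dense depth map can be reconstructed as a weighted sum of
full-resolution principal depth bases, D ≈ sum_k w_k * B_k. Below is an
illustrative sketch of that reconstruction step only, not the paper's
implementation; the basis count, resolution, and the placeholder weight
regressor are assumptions.

import torch
import torch.nn as nn

class DepthFromBases(nn.Module):
    def __init__(self, num_bases=64, height=256, width=1216):
        super().__init__()
        # Full-resolution principal depth bases; learned in the paper, random here.
        self.bases = nn.Parameter(0.01 * torch.randn(num_bases, height, width))
        # Placeholder regressor: predicts K coefficients from a pooled RGB feature.
        self.weight_head = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(3, num_bases))

    def forward(self, rgb):
        w = self.weight_head(rgb)                          # (B, K) coefficients
        return torch.einsum("bk,khw->bhw", w, self.bases)  # (B, H, W) dense depth

# Example: dense = DepthFromBases()(torch.rand(2, 3, 256, 1216))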
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.