Least Square Estimation Network for Depth Completion
- URL: http://arxiv.org/abs/2203.03317v1
- Date: Mon, 7 Mar 2022 11:52:57 GMT
- Title: Least Square Estimation Network for Depth Completion
- Authors: Xianze Fang, Zexi Chen, Yunkai Wang, Yue Wang, Rong Xiong
- Abstract summary: In this paper, we propose an effective image representation method for depth completion tasks.
The input of our system is a monocular camera frame and the synchronous sparse depth map.
Experiments show that our results surpass the state of the art on the NYU-Depth-V2 dataset in both accuracy and runtime.
- Score: 11.840223815711004
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Depth completion is a fundamental task in computer vision and robotics
research. Many previous works complete the dense depth map directly with neural
networks, but most of them are not interpretable and do not generalize well to
different situations. In this paper, we propose an effective image
representation method for depth completion tasks. The input of our system is a
monocular camera frame and the synchronous sparse depth map. The output of our
system is a dense per-pixel depth map of the frame. First, we use a neural
network to transform each pixel into a feature vector, which we call the base
functions. Then we pick out the known pixels' base functions and their depth
values, and fit the base functions to the depth values with a linear least
squares algorithm. This yields the estimated weights of the base functions.
Finally, we apply the weights to the whole image and predict the final dense
depth map. Because our method is interpretable, it generalizes well.
Experiments show that our results surpass the state of the art on the
NYU-Depth-V2 dataset in both accuracy and runtime. Moreover, experiments show
that our method generalizes well across different numbers of sparse points and
across datasets.
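The abstract describes a pipeline that reduces to a per-pixel feature map followed by a closed-form least squares fit on the sparse measurements. The snippet below is a minimal PyTorch-style sketch of that idea, not the authors' implementation: the network call `net(rgb, sparse_depth)`, the feature dimension `K`, the batch size of 1, and the small ridge term `lam` are assumptions made for illustration.

```python
import torch


def fit_least_squares(phi_known, d_known, lam=1e-6):
    """Solve w = argmin_w ||phi_known @ w - d_known||^2 (plus a small ridge term).

    phi_known: (N, K) base-function vectors of the N pixels with known depth.
    d_known:   (N, 1) sparse depth values at those pixels.
    Returns w: (K, 1) weights shared by every pixel in the frame.
    """
    k = phi_known.shape[1]
    # Normal equations; the ridge term lam * I is an assumption for numerical stability.
    lhs = phi_known.T @ phi_known + lam * torch.eye(k, device=phi_known.device)
    rhs = phi_known.T @ d_known
    return torch.linalg.solve(lhs, rhs)


def complete_depth(net, rgb, sparse_depth):
    """rgb: (1, 3, H, W); sparse_depth: (1, 1, H, W), zero where no measurement exists."""
    b, _, h, w = rgb.shape
    feats = net(rgb, sparse_depth)                      # (1, K, H, W) per-pixel base functions
    k = feats.shape[1]
    phi = feats.permute(0, 2, 3, 1).reshape(-1, k)      # (H*W, K)
    d = sparse_depth.reshape(-1, 1)                     # (H*W, 1)
    known = (d > 0).squeeze(1)                          # pixels with a sparse measurement
    weights = fit_least_squares(phi[known], d[known])   # closed-form fit on known pixels
    dense = (phi @ weights).reshape(b, 1, h, w)         # apply the weights to the whole image
    return dense
```

Because the fit is expressed through `torch.linalg.solve`, it stays differentiable, so in principle the base-function network could be trained end to end against ground-truth dense depth; whether the paper trains it exactly this way is not stated in the abstract.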
Related papers
- Temporal Lidar Depth Completion [0.08192907805418582]
We show how a state-of-the-art method, PENet, can be modified to benefit from recurrence.
Our algorithm achieves state-of-the-art results on the KITTI depth completion dataset.
arXiv Detail & Related papers (2024-06-17T08:25:31Z) - GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs [49.55919802779889]
We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when only a few propagation steps are used.
arXiv Detail & Related papers (2022-10-19T17:56:03Z) - P3Depth: Monocular Depth Estimation with a Piecewise Planarity Prior [133.76192155312182]
We propose a method that learns to selectively leverage information from coplanar pixels to improve the predicted depth.
An extensive evaluation of our method shows that we set the new state of the art in supervised monocular depth estimation.
arXiv Detail & Related papers (2022-04-05T10:03:52Z) - Depth Completion using Plane-Residual Representation [84.63079529738924]
We introduce a novel way of interpreting depth information, using the closest depth plane label $p$ and a residual value $r$, which we call the Plane-Residual (PR) representation.
By interpreting depth information in the PR representation and using our corresponding depth completion network, we achieve improved depth completion performance with faster computation.
arXiv Detail & Related papers (2021-04-15T10:17:53Z) - Efficient Depth Completion Using Learned Bases [94.0808155168311]
We propose a new global geometry constraint for depth completion.
By assuming that depth maps often lie on low-dimensional subspaces, a dense depth map can be approximated by a weighted sum of full-resolution principal depth bases (see the sketch after this list).
arXiv Detail & Related papers (2020-12-02T11:57:37Z) - Towards Dense People Detection with Deep Learning and Depth images [9.376814409561726]
This paper proposes a DNN-based system that detects multiple people from a single depth image.
Our neural network processes a depth image and outputs a likelihood map in image coordinates.
We show this strategy to be effective, producing networks that generalize to work with scenes different from those used during training.
arXiv Detail & Related papers (2020-07-14T16:43:02Z) - DPDnet: A Robust People Detector using Deep Learning with an Overhead
Depth Camera [9.376814409561726]
We propose a method that detects multiple people from a single overhead depth image with high reliability.
Our neural network, called DPDnet, consists of two fully-convolutional encoder-decoder blocks built on residual layers.
The experimental work shows that DPDnet outperforms state-of-the-art methods, with accuracies greater than 99% on three different publicly available datasets.
arXiv Detail & Related papers (2020-06-01T16:28:25Z) - Depth Completion Using a View-constrained Deep Prior [73.21559000917554]
Recent work has shown that the structure of convolutional neural networks (CNNs) induces a strong prior that favors natural images.
This prior, known as a deep image prior (DIP), is an effective regularizer in inverse problems such as image denoising and inpainting.
We extend the concept of the DIP to depth images. Given color images and noisy, incomplete target depth maps, we reconstruct a restored depth map by using the CNN structure itself as a prior.
arXiv Detail & Related papers (2020-01-21T21:56:01Z) - Single Image Depth Estimation Trained via Depth from Defocus Cues [105.67073923825842]
Estimating depth from a single RGB image is a fundamental task in computer vision.
In this work, we rely on depth-from-focus cues rather than on different views.
We present results that are on par with supervised methods on the KITTI and Make3D datasets and outperform unsupervised learning approaches.
arXiv Detail & Related papers (2020-01-14T20:22:54Z)