PLNet: Plane and Line Priors for Unsupervised Indoor Depth Estimation
- URL: http://arxiv.org/abs/2110.05839v1
- Date: Tue, 12 Oct 2021 09:02:24 GMT
- Title: PLNet: Plane and Line Priors for Unsupervised Indoor Depth Estimation
- Authors: Hualie Jiang, Laiyan Ding, Junjie Hu, Rui Huang
- Abstract summary: This paper proposes PLNet, which leverages plane and line priors to enhance depth estimation.
Experiments on NYU Depth V2 and ScanNet show that PLNet outperforms existing methods.
- Score: 15.751045404065465
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Unsupervised learning of depth from indoor monocular videos is challenging because
artificial environments contain many textureless regions. Fortunately,
indoor scenes are full of specific structures, such as planes and lines, which
should help guide unsupervised depth learning. This paper proposes PLNet, which
leverages plane and line priors to enhance depth estimation. We first
represent the scene geometry using local planar coefficients and impose the
smoothness constraint on the representation. Moreover, we enforce the planar
and linear consistency by randomly selecting some sets of points that are
probably coplanar or collinear to construct simple and effective consistency
losses. To verify the proposed method's effectiveness, we further propose to
evaluate the flatness and straightness of the predicted point cloud on the
reliable planar and linear regions. The regularity of these regions indicates
a high-quality indoor reconstruction. Experiments on NYU Depth V2 and ScanNet show
that PLNet outperforms existing methods. The code is available at
\url{https://github.com/HalleyJiang/PLNet}.
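To make the consistency idea above concrete, below is a minimal sketch of how flatness and straightness residuals over a candidate coplanar or collinear point set could be computed. It assumes PyTorch and an SVD-based least-squares fit; the function names and formulation are illustrative assumptions, not taken from the authors' repository.

```python
# Illustrative sketch only: an SVD-based take on the coplanarity/collinearity
# consistency idea from the abstract, not the authors' implementation.
# Assumes `points` are 3D coordinates back-projected from predicted depth.
import torch

def coplanarity_residual(points: torch.Tensor) -> torch.Tensor:
    """RMS distance of (N, 3) points to their best-fit plane.

    The smallest singular value of the centered point matrix equals the
    square root of the total squared out-of-plane deviation.
    """
    centered = points - points.mean(dim=0, keepdim=True)
    s = torch.linalg.svdvals(centered)  # singular values, descending order
    return s[-1] / points.shape[0] ** 0.5

def collinearity_residual(points: torch.Tensor) -> torch.Tensor:
    """RMS distance of (N, 3) points to their best-fit 3D line.

    All variance not captured by the principal direction is off-line error.
    """
    centered = points - points.mean(dim=0, keepdim=True)
    s = torch.linalg.svdvals(centered)
    return torch.sqrt(s[1] ** 2 + s[2] ** 2) / points.shape[0] ** 0.5

if __name__ == "__main__":
    # Points near the plane z = 2x + 3y + 1, with small noise.
    xy = torch.rand(64, 2)
    z = 2 * xy[:, 0] + 3 * xy[:, 1] + 1 + 0.01 * torch.randn(64)
    plane_pts = torch.cat([xy, z.unsqueeze(1)], dim=1)
    print(coplanarity_residual(plane_pts))   # small: points are nearly flat
    print(collinearity_residual(plane_pts))  # larger: points are not a line
```

Both residuals are differentiable in the point coordinates (torch.linalg.svdvals supports autograd), so in a training setup like the one the abstract describes they could be averaged over many randomly sampled candidate coplanar/collinear sets and added to the photometric objective; computed on regions known to be planar or linear, the same quantities would play the role of the flatness and straightness evaluation metrics mentioned above.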
Related papers
- P$^2$SDF for Neural Indoor Scene Reconstruction [29.355255923026597]
We propose a novel Pseudo Plane-regularized Signed Distance Field (P$^2$SDF) for indoor scene reconstruction.
Experiments show that our P$^2$SDF achieves competitive reconstruction performance in Manhattan scenes.
arXiv Detail & Related papers (2023-03-01T05:07:48Z)
- Flattening-Net: Deep Regular 2D Representation for 3D Point Cloud Analysis [66.49788145564004]
We present an unsupervised deep neural architecture called Flattening-Net to represent irregular 3D point clouds of arbitrary geometry and topology.
Our method performs favorably against the current state-of-the-art competitors.
arXiv Detail & Related papers (2022-12-17T15:05:25Z)
- Ground Plane Matters: Picking Up Ground Plane Prior in Monocular 3D Object Detection [92.75961303269548]
The ground plane prior is a very informative geometric clue in monocular 3D object detection (M3OD).
We propose a Ground Plane Enhanced Network (GPENet) that resolves both issues in one go.
Our GPENet outperforms other methods and achieves state-of-the-art performance, demonstrating the effectiveness and superiority of the proposed approach.
arXiv Detail & Related papers (2022-11-03T02:21:35Z)
- PlaneDepth: Self-supervised Depth Estimation via Orthogonal Planes [41.517947010531074]
Depth estimation based on multiple near fronto-parallel planes has demonstrated impressive results in self-supervised monocular depth estimation (MDE).
We propose PlaneDepth, a novel plane-based representation that includes vertical planes and ground planes.
Our method can extract the ground plane in an unsupervised manner, which is important for autonomous driving.
arXiv Detail & Related papers (2022-10-04T13:51:59Z)
- DevNet: Self-supervised Monocular Depth Learning via Density Volume Construction [51.96971077984869]
Self-supervised depth learning from monocular images normally relies on the 2D pixel-wise photometric relation between temporally adjacent image frames.
This work proposes Density Volume Construction Network (DevNet), a novel self-supervised monocular depth learning framework.
arXiv Detail & Related papers (2022-09-14T00:08:44Z)
- Neural 3D Scene Reconstruction with the Manhattan-world Assumption [58.90559966227361]
This paper addresses the challenge of reconstructing 3D indoor scenes from multi-view images.
Planar constraints can be conveniently integrated into the recent implicit neural representation-based reconstruction methods.
The proposed method outperforms previous methods by a large margin on 3D reconstruction quality.
arXiv Detail & Related papers (2022-05-05T17:59:55Z)
- P3Depth: Monocular Depth Estimation with a Piecewise Planarity Prior [133.76192155312182]
We propose a method that learns to selectively leverage information from coplanar pixels to improve the predicted depth.
An extensive evaluation of our method shows that we set the new state of the art in supervised monocular depth estimation.
arXiv Detail & Related papers (2022-04-05T10:03:52Z)
- StructDepth: Leveraging the structural regularities for self-supervised indoor depth estimation [7.028319464940422]
Self-supervised monocular depth estimation has achieved impressive performance on outdoor datasets.
But its performance degrades notably in indoor environments because of the lack of textures.
We leverage the structural regularities exhibited in indoor scenes to train a better depth network.
arXiv Detail & Related papers (2021-08-19T09:26:13Z)
- P$^{2}$Net: Patch-match and Plane-regularization for Unsupervised Indoor Depth Estimation [37.95666188829359]
This paper tackles the unsupervised depth estimation task in indoor environments.
The paper argues that the poor performance stems from non-discriminative point-based matching.
Experiments on NYUv2 and ScanNet show that our P$^{2}$Net outperforms existing approaches by a large margin.
arXiv Detail & Related papers (2020-07-15T14:10:43Z)
- Occlusion-Aware Depth Estimation with Adaptive Normal Constraints [85.44842683936471]
We present a new learning-based method for multi-frame depth estimation from a color video.
Our method outperforms the state-of-the-art in terms of depth estimation accuracy.
arXiv Detail & Related papers (2020-04-02T07:10:45Z)