GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs
- URL: http://arxiv.org/abs/2210.10758v1
- Date: Wed, 19 Oct 2022 17:56:03 GMT
- Title: GraphCSPN: Geometry-Aware Depth Completion via Dynamic GCNs
- Authors: Xin Liu, Xiaofei Shao, Bo Wang, Yali Li, Shengjin Wang
- Abstract summary: We propose a Graph Convolution based Spatial Propagation Network (GraphCSPN) as a general approach for depth completion.
In this work, we leverage convolutional neural networks as well as graph neural networks in a complementary way for geometric representation learning.
Our method achieves state-of-the-art performance, especially when only a few propagation steps are used.
- Score: 49.55919802779889
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image guided depth completion aims to recover per-pixel dense depth maps from
sparse depth measurements with the help of aligned color images, which has a
wide range of applications from robotics to autonomous driving. However, the 3D
nature of sparse-to-dense depth completion has not been fully explored by
previous methods. In this work, we propose a Graph Convolution based Spatial
Propagation Network (GraphCSPN) as a general approach for depth completion.
First, unlike previous methods, we leverage convolutional neural networks as well
as graph neural networks in a complementary way for geometric representation
learning. In addition, the proposed networks explicitly incorporate learnable
geometric constraints to regularize the propagation process, which is performed in
three-dimensional space rather than in a two-dimensional plane. Furthermore, we
construct the graph utilizing sequences of feature patches, and update it
dynamically with an edge attention module during propagation, so as to better
capture both local neighboring features and global relationships over long
distances. Extensive experiments on both the indoor NYU-Depth-v2 and outdoor KITTI
datasets demonstrate that our method achieves state-of-the-art performance,
especially when only a few propagation steps are used.
Code and models are available at the project page.
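To make the propagation described above concrete, the following is a minimal sketch, not the authors' released implementation, of one graph-based spatial propagation step: patch features are linked to their k nearest neighbors in back-projected 3D space, edge-attention weights are computed from feature similarity, and a coarse depth estimate is refined as an attention-weighted blend over neighbors. The function names, tensor shapes, and the dot-product attention are illustrative assumptions.

```python
# Minimal sketch (assumed shapes, not the paper's code) of graph-based
# spatial propagation for depth completion.
import torch
import torch.nn.functional as F

def knn_indices(xyz, k):
    """k nearest neighbors per node from 3D patch centers xyz of shape (N, 3)."""
    dist = torch.cdist(xyz, xyz)                            # (N, N) pairwise distances
    # Drop column 0 of the result, which is each point's zero distance to itself.
    return dist.topk(k + 1, largest=False).indices[:, 1:]   # (N, k)

def propagate_step(feat, xyz, depth, k=9):
    """One propagation step over per-patch features (N, C) and depths (N,)."""
    idx = knn_indices(xyz, k)                               # (N, k)
    nbr_feat = feat[idx]                                    # (N, k, C)
    nbr_depth = depth[idx]                                  # (N, k)

    # Edge attention from feature similarity; a real edge-attention module
    # would use learned projections instead of a plain dot product.
    scores = (feat.unsqueeze(1) * nbr_feat).sum(-1)         # (N, k)
    attn = F.softmax(scores, dim=-1)

    # Refine depth as an attention-weighted blend of neighbor depths,
    # keeping part of the current estimate (residual-style update).
    blended = (attn * nbr_depth).sum(-1)                    # (N,)
    return 0.5 * depth + 0.5 * blended

# Toy usage with random patches; only a few steps, as in the paper's setting.
N, C = 256, 64
feat, xyz = torch.randn(N, C), torch.randn(N, 3)
depth = torch.rand(N) * 10.0
for _ in range(3):
    depth = propagate_step(feat, xyz, depth)
```

In the paper the graph is updated dynamically by an edge attention module during propagation and the geometric constraints are learned; here a fixed kNN graph and an unlearned similarity score stand in for both.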
Related papers
- Depth Completion using Geometry-Aware Embedding [22.333381291860498]
This paper proposes an efficient method to learn a geometry-aware embedding.
It encodes local and global geometric structure information from 3D points, e.g., scene layout and objects' sizes and shapes, to guide dense depth estimation.
arXiv Detail & Related papers (2022-03-21T12:06:27Z)
- 3DVNet: Multi-View Depth Prediction and Volumetric Refinement [68.68537312256144]
3DVNet is a novel multi-view stereo (MVS) depth-prediction method.
Our key idea is the use of a 3D scene-modeling network that iteratively updates a set of coarse depth predictions.
We show that our method exceeds state-of-the-art accuracy in both depth prediction and 3D reconstruction metrics.
arXiv Detail & Related papers (2021-12-01T00:52:42Z)
- A Novel 3D-UNet Deep Learning Framework Based on High-Dimensional Bilateral Grid for Edge Consistent Single Image Depth Estimation [0.45880283710344055]
A Bilateral Grid based 3D convolutional neural network, dubbed 3DBG-UNet, parameterizes a high-dimensional feature space by encoding compact 3D bilateral grids with UNets.
Another novel model, 3DBGES-UNet, integrates 3DBG-UNet to infer an accurate depth map given a single color view.
arXiv Detail & Related papers (2021-05-21T04:53:14Z)
- Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z)
- Learning Joint 2D-3D Representations for Depth Completion [90.62843376586216]
We design a simple yet effective neural network block that learns to extract joint 2D and 3D features.
Specifically, the block consists of two domain-specific sub-networks that apply 2D convolution on image pixels and continuous convolution on 3D points; a hedged sketch of this idea appears after this list.
arXiv Detail & Related papers (2020-12-22T22:58:29Z)
- Learning a Geometric Representation for Data-Efficient Depth Estimation via Gradient Field and Contrastive Loss [29.798579906253696]
We propose a gradient-based self-supervised learning algorithm with a momentum contrastive loss to help ConvNets extract geometric information from unlabeled images.
Our method outperforms previous state-of-the-art self-supervised learning algorithms and roughly triples the efficiency of labeled data.
arXiv Detail & Related papers (2020-11-06T06:47:19Z)
- Primal-Dual Mesh Convolutional Neural Networks [62.165239866312334]
We propose a primal-dual framework, drawn from the graph-neural-network literature, for triangle meshes.
Our method takes features of both the edges and faces of a 3D mesh as input and dynamically aggregates them.
We provide theoretical insights into our approach using tools from the mesh-simplification literature.
arXiv Detail & Related papers (2020-10-23T14:49:02Z)
- Adaptive Context-Aware Multi-Modal Network for Depth Completion [107.15344488719322]
We propose to adopt graph propagation to capture the observed spatial contexts.
We then apply an attention mechanism to the propagation, which encourages the network to model the contextual information adaptively.
Finally, we introduce a symmetric gated fusion strategy to exploit the extracted multi-modal features effectively.
Our model, named Adaptive Context-Aware Multi-Modal Network (ACMNet), achieves state-of-the-art performance on two benchmarks.
arXiv Detail & Related papers (2020-08-25T06:00:06Z)
- Learning to Segment 3D Point Clouds in 2D Image Space [20.119802932358333]
We show how to efficiently project 3D point clouds into a 2D image space, so that traditional 2D convolutional neural networks (CNNs) such as U-Net can be applied for segmentation.
arXiv Detail & Related papers (2020-03-12T03:18:59Z)
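As referenced in the "Learning Joint 2D-3D Representations for Depth Completion" entry above, the block below is a hedged sketch, under assumed names and shapes rather than that paper's actual code, of how a 2D convolution branch over image-aligned features can be fused with a point branch that aggregates k-nearest 3D neighbors; a shared MLP over neighbor features and relative offsets stands in for continuous convolution.

```python
# Hypothetical joint 2D-3D feature block; all names and shapes are assumptions.
import torch
import torch.nn as nn

class Joint2D3DBlock(nn.Module):
    def __init__(self, channels, k=16):
        super().__init__()
        self.k = k
        self.conv2d = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Point branch: shared MLP over (neighbor feature, relative 3D offset).
        self.point_mlp = nn.Sequential(
            nn.Linear(channels + 3, channels), nn.ReLU(),
            nn.Linear(channels, channels),
        )
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_2d, xyz):
        # feat_2d: (B, C, H, W) image-aligned features
        # xyz:     (B, H*W, 3) one back-projected 3D point per pixel
        B, C, H, W = feat_2d.shape
        feat_flat = feat_2d.flatten(2).transpose(1, 2)            # (B, HW, C)

        # k nearest neighbors in 3D for every pixel's point
        # (the point itself is included, which is fine for a sketch).
        idx = torch.cdist(xyz, xyz).topk(self.k, largest=False).indices
        batch = torch.arange(B, device=xyz.device).view(B, 1, 1)
        nbr_feat = feat_flat[batch, idx]                          # (B, HW, k, C)
        offsets = xyz[batch, idx] - xyz.unsqueeze(2)              # (B, HW, k, 3)

        # Max-pool the MLP outputs over neighbors, then reshape to a feature map.
        feat_3d = self.point_mlp(torch.cat([nbr_feat, offsets], -1)).amax(2)
        feat_3d = feat_3d.transpose(1, 2).reshape(B, C, H, W)

        # Fuse the 2D and 3D branches with a 1x1 convolution.
        return self.fuse(torch.cat([self.conv2d(feat_2d), feat_3d], dim=1))

# Toy usage on a small feature map.
block = Joint2D3DBlock(channels=32)
out = block(torch.randn(2, 32, 16, 16), torch.randn(2, 16 * 16, 3))  # (2, 32, 16, 16)
```

Max-pooling over neighbors keeps the point branch permutation-invariant, which is the usual motivation for this kind of design.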