GRNet: Gridding Residual Network for Dense Point Cloud Completion
- URL: http://arxiv.org/abs/2006.03761v4
- Date: Mon, 20 Jul 2020 11:22:05 GMT
- Title: GRNet: Gridding Residual Network for Dense Point Cloud Completion
- Authors: Haozhe Xie, Hongxun Yao, Shangchen Zhou, Jiageng Mao, Shengping Zhang,
Wenxiu Sun
- Abstract summary: Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
- Score: 54.43648460932248
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Estimating the complete 3D point cloud from an incomplete one is a key
problem in many vision and robotics applications. Mainstream methods (e.g., PCN
and TopNet) use Multi-layer Perceptrons (MLPs) to directly process point
clouds, which may cause the loss of details because the structure and context
of point clouds are not fully considered. To solve this problem, we introduce
3D grids as intermediate representations to regularize unordered point clouds.
We therefore propose a novel Gridding Residual Network (GRNet) for point cloud
completion. In particular, we devise two novel differentiable layers, named
Gridding and Gridding Reverse, to convert between point clouds and 3D grids
without losing structural information. We also present the differentiable Cubic
Feature Sampling layer to extract features of neighboring points, which
preserves context information. In addition, we design a new loss function,
namely Gridding Loss, to calculate the L1 distance between the 3D grids of the
predicted and ground truth point clouds, which is helpful to recover details.
Experimental results indicate that the proposed GRNet performs favorably
against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI
benchmarks.
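To make the Gridding and Gridding Loss ideas concrete, the sketch below re-implements them in plain PyTorch under simplifying assumptions: point coordinates are assumed to be normalized to [-1, 1], a trilinear weighting spreads each point onto the eight vertices of its grid cell, and the names `gridding` and `gridding_loss` are illustrative. The official GRNet code ships custom CUDA kernels whose weighting function may differ, so this is a minimal sketch rather than the authors' implementation.

```python
import torch
import torch.nn.functional as F

def gridding(points, resolution=64):
    """Scatter a point cloud into a dense 3D grid (a gridding-style op).

    points: (B, N, 3), coordinates assumed to lie in [-1, 1].
    Returns a (B, resolution, resolution, resolution) grid.
    """
    B, N, _ = points.shape
    # Map coordinates from [-1, 1] to grid space [0, resolution - 1].
    coords = (points + 1.0) * 0.5 * (resolution - 1)
    lower = coords.floor().clamp(0, resolution - 2)   # lower cell corner
    frac = coords - lower                             # offset inside the cell

    grid = points.new_zeros(B, resolution ** 3)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # Trilinear weight that this cell corner receives from every point.
                w = (frac[..., 0] if dx else 1 - frac[..., 0]) \
                  * (frac[..., 1] if dy else 1 - frac[..., 1]) \
                  * (frac[..., 2] if dz else 1 - frac[..., 2])
                ix = lower[..., 0].long() + dx
                iy = lower[..., 1].long() + dy
                iz = lower[..., 2].long() + dz
                flat = (ix * resolution + iy) * resolution + iz   # (B, N)
                grid = grid.scatter_add(1, flat, w)
    return grid.view(B, resolution, resolution, resolution)

def gridding_loss(pred_points, gt_points, resolution=64):
    """L1 distance between the gridded predicted and ground-truth clouds."""
    return F.l1_loss(gridding(pred_points, resolution),
                     gridding(gt_points, resolution))
```

Because the corner weights depend continuously on the point coordinates, gradients flow back to the predicted points, which is what lets a grid-space L1 loss of this kind supervise a point-cloud decoder.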
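The Cubic Feature Sampling layer admits an equally small sketch: for every point, gather the feature vectors stored at the eight corners of the grid cell that contains it and concatenate them, so each point carries the context extracted by the 3D CNN. This is again a simplified sketch under the same [-1, 1] normalization assumption; `cubic_feature_sampling` and the tensor layout below are illustrative, not the official CUDA layer.

```python
import torch

def cubic_feature_sampling(points, feature_grid):
    """Gather the 3D-CNN features at the eight corners of each point's cell.

    points:       (B, N, 3), coordinates assumed to lie in [-1, 1].
    feature_grid: (B, C, R, R, R) feature volume.
    Returns (B, N, 8 * C) per-point context features.
    """
    B, C, R, _, _ = feature_grid.shape
    coords = (points + 1.0) * 0.5 * (R - 1)
    lower = coords.floor().clamp(0, R - 2).long()      # lower cell corner

    flat_feats = feature_grid.view(B, C, R * R * R)
    gathered = []
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                ix = lower[..., 0] + dx
                iy = lower[..., 1] + dy
                iz = lower[..., 2] + dz
                flat = (ix * R + iy) * R + iz                # (B, N)
                idx = flat.unsqueeze(1).expand(-1, C, -1)    # (B, C, N)
                gathered.append(torch.gather(flat_feats, 2, idx))
    # Concatenate the eight corner features: (B, 8 * C, N) -> (B, N, 8 * C).
    return torch.cat(gathered, dim=1).transpose(1, 2)
```

In the paper, features sampled this way are fed to MLPs that refine the coarse cloud produced by Gridding Reverse into the final dense completion.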
Related papers
- Point Cloud Semantic Segmentation using Multi Scale Sparse Convolution
Neural Network [0.0]
We propose a feature extraction module based on multi-scale ultra-sparse convolution and a feature selection module based on channel attention.
By introducing multi-scale sparse convolution, the network can capture richer feature information with convolution kernels of different sizes.
arXiv Detail & Related papers (2022-05-03T15:01:20Z)
- PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner, eliminating the need for kNN search.
The proposed framework, PointAttN, is simple, neat, and effective, and precisely captures the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use the Chamfer Distance (CD) loss for training; a minimal CD sketch is given after this list.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- Deep Point Cloud Reconstruction [74.694733918351]
Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have separately addressed densifying, denoising, and completing inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) a transformer-based refinement that converts the discrete voxels into 3D points.
arXiv Detail & Related papers (2021-11-23T07:53:28Z)
- FatNet: A Feature-attentive Network for 3D Point Cloud Processing [1.502579291513768]
We introduce a novel feature-attentive neural network layer, a FAT layer, that combines both global point-based features and local edge-based features in order to generate better embeddings.
Our architecture achieves state-of-the-art results on the task of point cloud classification, as demonstrated on the ModelNet40 dataset.
arXiv Detail & Related papers (2021-04-07T23:13:56Z)
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, called a point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
- Point Cloud Completion by Skip-attention Network with Hierarchical Folding [61.59710288271434]
We propose Skip-Attention Network (SA-Net) for 3D point cloud completion.
First, we propose a skip-attention mechanism to effectively exploit the local structure details of incomplete point clouds.
Second, in order to fully utilize the selected geometric information encoded by skip-attention mechanism at different resolutions, we propose a novel structure-preserving decoder.
arXiv Detail & Related papers (2020-05-08T06:23:51Z)
- PF-Net: Point Fractal Network for 3D Point Cloud Completion [6.504317278066694]
Point Fractal Network (PF-Net) is a novel learning-based approach for precise and high-fidelity point cloud completion.
PF-Net preserves the spatial arrangement of the incomplete point cloud and can infer the detailed geometric structure of the missing region(s) in the prediction.
Our experiments demonstrate the effectiveness of our method for several challenging point cloud completion tasks.
arXiv Detail & Related papers (2020-03-01T05:40:21Z)
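For reference, since the PDR entry above notes that most completion methods train with the Chamfer Distance, here is a minimal brute-force sketch of the standard symmetric, squared-distance CD; the function name and batched shapes are illustrative, and practical pipelines use dedicated CUDA kernels for speed.

```python
import torch

def chamfer_distance(pred, gt):
    """Symmetric Chamfer Distance between two batched point clouds.

    pred: (B, N, 3), gt: (B, M, 3). Returns a scalar loss.
    Brute-force O(N * M) version for clarity, not speed.
    """
    d = torch.cdist(pred, gt) ** 2     # pairwise squared distances, (B, N, M)
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()
```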
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.