Deep Point Cloud Reconstruction
- URL: http://arxiv.org/abs/2111.11704v1
- Date: Tue, 23 Nov 2021 07:53:28 GMT
- Title: Deep Point Cloud Reconstruction
- Authors: Jaesung Choe, Byeongin Joung, Francois Rameau, Jaesik Park, In So
Kweon
- Abstract summary: Point clouds obtained from 3D scanning are often sparse, noisy, and irregular.
To cope with these issues, recent studies have separately addressed the densification, denoising, and completion of inaccurate point clouds.
We propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) a refinement via transformers that converts the discrete voxels into 3D points.
- Score: 74.694733918351
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Point clouds obtained from 3D scanning are often sparse, noisy, and irregular. To cope with these issues, recent studies have separately addressed the densification, denoising, and completion of inaccurate point clouds. In this paper, we advocate that jointly solving these tasks leads to significant improvements in point cloud reconstruction. To this end, we propose a deep point cloud reconstruction network consisting of two stages: 1) a 3D sparse stacked-hourglass network for initial densification and denoising, and 2) a refinement via transformers that converts the discrete voxels into 3D points. In particular, we further improve the performance of the transformer with a newly proposed module called amplified positional encoding. This module is designed to amplify the magnitude of the positional encoding vectors differently, based on the points' distances, for adaptive refinement. Extensive experiments demonstrate that our network achieves state-of-the-art performance among recent studies on the ScanNet, ICL-NUIM, and ShapeNetPart datasets. Moreover, we underline the ability of our network to generalize to real-world and unseen scenes.
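The abstract gives enough to sketch the amplified positional encoding idea in code. Below is a minimal Python sketch, assuming a standard sinusoidal encoding of 3D coordinates and a simple distance-proportional gain; the function names, the `1 + dist` rule, and the choice of reference point are illustrative assumptions, not the paper's exact module.

```python
import torch

def sinusoidal_pe(coords, d_model=66, max_freq=10.0):
    """Standard sinusoidal encoding of 3D coordinates: (N, 3) -> (N, d_model)."""
    n_freqs = d_model // 6                               # sin + cos per axis
    freqs = max_freq ** (torch.arange(n_freqs) / max(n_freqs - 1, 1))
    angles = coords.unsqueeze(-1) * freqs                # (N, 3, F)
    pe = torch.cat([angles.sin(), angles.cos()], dim=-1) # (N, 3, 2F)
    return pe.reshape(coords.shape[0], -1)               # (N, 6F)

def amplified_pe(coords, ref_coords, d_model=66):
    """Hypothetical amplified positional encoding: scale the PE magnitude by
    each point's distance to a reference position (e.g. its voxel center),
    so far-away points get a stronger positional signal during refinement."""
    pe = sinusoidal_pe(coords, d_model)
    dist = (coords - ref_coords).norm(dim=-1, keepdim=True)  # (N, 1)
    gain = 1.0 + dist   # monotone amplification; the paper's exact rule may differ
    return gain * pe
```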
Related papers
- DeCoTR: Enhancing Depth Completion with 2D and 3D Attentions [41.55908366474901]
We introduce a novel approach that harnesses both 2D and 3D attentions to enable highly accurate depth completion.
We evaluate our method, DeCoTR, on established depth completion benchmarks.
arXiv Detail & Related papers (2024-03-18T19:22:55Z)
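As a rough illustration of combining 2D and 3D attentions for depth completion, here is a hedged PyTorch sketch; the class name, layer ordering, and sizes are assumptions for illustration, not DeCoTR's actual architecture.

```python
import torch
import torch.nn as nn

class Attn2D3D(nn.Module):
    """Hypothetical block mixing 3D point features with 2D image features."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn_3d = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_2d = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, feats_3d, feats_2d):
        # feats_3d: (B, N, dim) features of points lifted from sparse depth
        # feats_2d: (B, H*W, dim) flattened image features
        x, _ = self.attn_3d(feats_3d, feats_3d, feats_3d)  # 3D self-attention
        x, _ = self.attn_2d(x, feats_2d, feats_2d)         # cross-attend to 2D
        return x
```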
- Arbitrary point cloud upsampling via Dual Back-Projection Network [12.344557879284219]
We propose a Dual Back-Projection network (DBPnet) for point cloud upsampling.
The back-projection is formulated in an up-down-up manner.
Experimental results show that the proposed method achieves the lowest point-set matching losses.
arXiv Detail & Related papers (2023-07-18T06:11:09Z)
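The up-down-up back-projection can be illustrated with a minimal sketch: upsample, project back down, and push the round-trip residual into the dense output. The naive duplicate/average operators below stand in for DBPnet's learned units, so only the control flow matches the summary.

```python
import torch

def naive_up(points, r=4):
    """Placeholder expansion unit: duplicate each point r times."""
    return points.repeat_interleave(r, dim=0)   # (r*N, 3)

def naive_down(dense, r=4):
    """Placeholder inverse: average each consecutive group of r points."""
    return dense.view(-1, r, 3).mean(dim=1)     # (N, 3)

def dual_back_projection(points, r=4):
    """Up-down-up sketch (assumed reading of DBPnet's formulation)."""
    up = naive_up(points, r)            # coarse dense guess
    down = naive_down(up, r)            # back-projection to the input scale
    residual = points - down            # what the round trip lost
    return up + naive_up(residual, r)   # residual-corrected dense cloud
```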
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
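The Chamfer Distance (CD) loss the summary refers to is standard and easy to state exactly; a minimal PyTorch implementation in its common squared-distance form:

```python
import torch

def chamfer_distance(p, q):
    """Chamfer Distance between point sets p (B, N, 3) and q (B, M, 3):
    the mean squared nearest-neighbour distance in both directions."""
    d = torch.cdist(p, q) ** 2                  # (B, N, M) pairwise sq. distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()
```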
- PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers [81.71904691925428]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We also design a new model, called PoinTr, that adopts a transformer encoder-decoder architecture for point cloud completion.
Our method outperforms state-of-the-art methods by a large margin on both the new benchmarks and the existing ones.
arXiv Detail & Related papers (2021-08-19T17:58:56Z)
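Treating completion as set-to-set translation maps naturally onto a transformer encoder-decoder. The sketch below uses PyTorch's stock nn.Transformer with learned queries for the missing part; the point embedding, layer counts, and query count are illustrative assumptions, not PoinTr's geometry-aware design.

```python
import torch
import torch.nn as nn

class SetToSetCompletion(nn.Module):
    """Sketch of completion as set-to-set translation: encode tokens of the
    partial cloud, decode tokens of the missing part, map tokens to xyz."""
    def __init__(self, dim=128, n_queries=64):
        super().__init__()
        self.embed = nn.Linear(3, dim)
        self.queries = nn.Parameter(torch.randn(n_queries, dim))
        self.transformer = nn.Transformer(d_model=dim, nhead=4,
                                          num_encoder_layers=2,
                                          num_decoder_layers=2,
                                          batch_first=True)
        self.to_xyz = nn.Linear(dim, 3)

    def forward(self, partial):                  # partial: (B, N, 3)
        src = self.embed(partial)                # input set -> tokens
        tgt = self.queries.expand(partial.size(0), -1, -1)
        out = self.transformer(src, tgt)         # set-to-set translation
        return self.to_xyz(out)                  # predicted missing points
```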
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named a point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
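The point geometry image (PGI) layout, storing xyz coordinates as the three channels of a regular 2D image, can be sketched directly. ParaNet learns the point-to-pixel parameterization end-to-end; the sort-by-z placeholder below is an assumption, so only the data layout matches the paper.

```python
import torch

def to_point_geometry_image(points, h=32, w=32):
    """Sketch of a PGI: write xyz as the channels of a regular (h, w) grid."""
    assert points.shape[0] == h * w       # one point per pixel
    order = torch.argsort(points[:, 2])   # placeholder ordering; learned in ParaNet
    return points[order].view(h, w, 3)    # (H, W, 3) "color" image

def from_point_geometry_image(pgi):
    """The representation is reversible: pixels map straight back to points."""
    return pgi.view(-1, 3)
```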
- DV-ConvNet: Fully Convolutional Deep Learning on Point Clouds with Dynamic Voxelization and 3D Group Convolution [0.7340017786387767]
3D point cloud interpretation is a challenging task due to the randomness and sparsity of the component points.
In this work, we draw attention back to standard 3D convolutions for efficient 3D point cloud interpretation.
Our network runs and converges quickly, while yielding on-par or better performance compared with state-of-the-art methods on several benchmark datasets.
arXiv Detail & Related papers (2020-09-07T07:45:05Z)
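A minimal sketch of voxelization, the step that turns an irregular point set into a grid on which standard 3D convolutions can run. The mean-pooling scheme and voxel size are assumptions; DV-ConvNet's dynamic variant is more involved.

```python
import torch

def voxelize(points, voxel_size=0.05):
    """Bucket points (N, 3) into voxels and average the points per voxel.
    Returns integer voxel coordinates and the mean point of each voxel."""
    idx = torch.floor(points / voxel_size).long()                # (N, 3)
    uniq, inverse = torch.unique(idx, dim=0, return_inverse=True)
    centers = torch.zeros(uniq.shape[0], 3)
    counts = torch.zeros(uniq.shape[0], 1)
    centers.index_add_(0, inverse, points)                       # sum per voxel
    counts.index_add_(0, inverse, torch.ones(points.shape[0], 1))
    return uniq, centers / counts                                # ids, means
```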
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
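The geometric core of scene-flow-based interpolation is just moving points a fraction of the way along their flow vectors. The paper learns the flow and adds spatial supervision; the sketch below only shows that geometric step, with the flow given as an input.

```python
import torch

def interpolate_cloud(p0, flow, t=0.5):
    """Synthesize an intermediate cloud between times 0 and 1 by moving each
    point of p0 (N, 3) a fraction t along its per-point scene flow (N, 3)."""
    return p0 + t * flow
```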
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
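GRNet's gridding layer spreads each point onto the eight vertices of its enclosing grid cell with trilinear weights, which lets gradients flow from the grid back to the points. A minimal sketch, assuming points lie in the unit cube [0, 1)^3:

```python
import torch

def gridding(points, res=32):
    """Trilinear gridding of points (N, 3) onto a dense (res, res, res) grid."""
    grid = torch.zeros(res, res, res)
    scaled = points * (res - 1)
    base = scaled.floor().long()      # lower corner of each cell (N, 3)
    frac = scaled - base              # offsets within the cell, in [0, 1)
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                # trilinear weight for this corner
                w = (frac[:, 0] if dx else 1 - frac[:, 0]) \
                  * (frac[:, 1] if dy else 1 - frac[:, 1]) \
                  * (frac[:, 2] if dz else 1 - frac[:, 2])
                ix, iy, iz = base[:, 0] + dx, base[:, 1] + dy, base[:, 2] + dz
                grid.index_put_((ix, iy, iz), w, accumulate=True)
    return grid
```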