A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud
Completion
- URL: http://arxiv.org/abs/2112.03530v1
- Date: Tue, 7 Dec 2021 06:59:06 GMT
- Title: A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud
Completion
- Authors: Zhaoyang Lyu, Zhifeng Kong, Xudong Xu, Liang Pan, Dahua Lin
- Abstract summary: Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use Chamfer Distance (CD) loss for training.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
- Score: 69.32451612060214
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: 3D point cloud is an important 3D representation for capturing real world 3D
objects. However, real-scanned 3D point clouds are often incomplete, and it is
important to recover complete point clouds for downstream applications. Most
existing point cloud completion methods use Chamfer Distance (CD) loss for
training. The CD loss estimates correspondences between two point clouds by
searching nearest neighbors, which does not capture the overall point density
distribution on the generated shape, and therefore likely leads to non-uniform
point cloud generation. To tackle this problem, we propose a novel Point
Diffusion-Refinement (PDR) paradigm for point cloud completion. PDR consists of
a Conditional Generation Network (CGNet) and a ReFinement Network (RFNet). The
CGNet uses a conditional generative model called the denoising diffusion
probabilistic model (DDPM) to generate a coarse completion conditioned on the
partial observation. DDPM establishes a one-to-one pointwise mapping between
the generated point cloud and the uniform ground truth, and then optimizes the
mean squared error loss to realize uniform generation. The RFNet refines the
coarse output of the CGNet and further improves the quality of the completed point
cloud. Furthermore, we develop a novel dual-path architecture for both
networks. The architecture can (1) effectively and efficiently extract
multi-level features from partially observed point clouds to guide completion,
and (2) accurately manipulate spatial locations of 3D points to obtain smooth
surfaces and sharp details. Extensive experimental results on various benchmark
datasets show that our PDR paradigm outperforms previous state-of-the-art
methods for point cloud completion. Remarkably, with the help of the RFNet, we
can accelerate the iterative generation process of the DDPM by up to 50 times
without much performance drop.
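As a rough illustration of the loss contrast described in the abstract (not code from the paper), the sketch below implements a symmetric Chamfer Distance, which matches points by nearest-neighbor search, alongside a pointwise MSE computed under a fixed one-to-one correspondence such as the one the DDPM's noising process induces. The point counts, noise levels, and the toy "clustered" prediction are arbitrary choices for the demo, not settings from the paper.

```python
# Minimal NumPy sketch (not the authors' code) of the two losses the abstract
# contrasts: Chamfer Distance, which matches points by nearest-neighbor search,
# and a pointwise MSE that assumes a fixed one-to-one correspondence between
# generated and ground-truth points (as a DDPM's noising process provides).
import numpy as np


def chamfer_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Chamfer Distance between (N, 3) and (M, 3) point clouds."""
    # Pairwise squared distances, shape (N, M).
    d2 = np.sum((pred[:, None, :] - gt[None, :, :]) ** 2, axis=-1)
    # Nearest neighbor in each direction; many-to-one matches are allowed.
    return float(d2.min(axis=1).mean() + d2.min(axis=0).mean())


def pointwise_mse(pred: np.ndarray, gt: np.ndarray) -> float:
    """MSE under a fixed one-to-one correspondence (point i matches point i)."""
    return float(np.mean(np.sum((pred - gt) ** 2, axis=-1)))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.uniform(-1.0, 1.0, size=(1024, 3))  # stand-in for a uniform ground truth

    # Well-spread prediction: small jitter around every ground-truth point.
    uniform_pred = gt + rng.normal(0.0, 0.02, gt.shape)

    # Density-skewed prediction: a sparse subset still covers the shape,
    # but the remaining points all pile up near one location.
    clustered_pred = gt.copy()
    clustered_pred[128:] = gt[0] + rng.normal(0.0, 0.02, (896, 3))

    for name, pred in (("uniform", uniform_pred), ("clustered", clustered_pred)):
        print(f"{name:>9}:  CD = {chamfer_distance(pred, gt):.4f}"
              f"   pointwise MSE = {pointwise_mse(pred, gt):.4f}")
```

In this toy run the density-skewed cloud raises the Chamfer term far less than it raises the correspondence-based MSE, since sparse coverage alone keeps nearest-neighbor distances small; that is the intuition behind training with a pointwise loss rather than CD to encourage uniform generation.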
Related papers
- Variational Relational Point Completion Network for Robust 3D
Classification [59.80993960827833]
Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
This paper proposes a variational framework, Variational Relational point Completion Network (VRCNet), with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2023-04-18T17:03:20Z)
- StarNet: Style-Aware 3D Point Cloud Generation [82.30389817015877]
StarNet is able to reconstruct and generate high-fidelity 3D point clouds using a mapping network.
Our framework achieves comparable state-of-the-art performance on various metrics in the point cloud reconstruction and generation tasks.
arXiv Detail & Related papers (2023-03-28T08:21:44Z)
- Variational Relational Point Completion Network [41.98957577398084]
Existing point cloud completion methods generate global shape skeletons and lack fine local details.
This paper proposes Variational point Completion network (VRCNet) with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2021-04-20T17:53:40Z)
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
- Pseudo-LiDAR Point Cloud Interpolation Based on 3D Motion Representation and Spatial Supervision [68.35777836993212]
We propose a Pseudo-LiDAR point cloud network to generate temporally and spatially high-quality point cloud sequences.
By exploiting the scene flow between point clouds, the proposed network is able to learn a more accurate representation of the 3D spatial motion relationship.
arXiv Detail & Related papers (2020-06-20T03:11:04Z)
- GRNet: Gridding Residual Network for Dense Point Cloud Completion [54.43648460932248]
Estimating the complete 3D point cloud from an incomplete one is a key problem in many vision and robotics applications.
We propose a novel Gridding Residual Network (GRNet) for point cloud completion.
Experimental results indicate that the proposed GRNet performs favorably against state-of-the-art methods on the ShapeNet, Completion3D, and KITTI benchmarks.
arXiv Detail & Related papers (2020-06-06T02:46:39Z)
- PF-Net: Point Fractal Network for 3D Point Cloud Completion [6.504317278066694]
Point Fractal Network (PF-Net) is a novel learning-based approach for precise and high-fidelity point cloud completion.
PF-Net preserves the spatial arrangements of the incomplete point cloud and can figure out the detailed geometrical structure of the missing region(s) in the prediction.
Our experiments demonstrate the effectiveness of our method for several challenging point cloud completion tasks.
arXiv Detail & Related papers (2020-03-01T05:40:21Z)