Variational Relational Point Completion Network for Robust 3D
Classification
- URL: http://arxiv.org/abs/2304.09131v1
- Date: Tue, 18 Apr 2023 17:03:20 GMT
- Title: Variational Relational Point Completion Network for Robust 3D
Classification
- Authors: Liang Pan, Xinyi Chen, Zhongang Cai, Junzhe Zhang, Haiyu Zhao, Shuai
Yi, Ziwei Liu
- Abstract summary: Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
This paper proposes a variational framework, Variational Relational point Completion Network (VRCNet), with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
- Score: 59.80993960827833
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Real-scanned point clouds are often incomplete due to viewpoint, occlusion,
and noise, which hampers 3D geometric modeling and perception. Existing point
cloud completion methods tend to generate global shape skeletons and hence lack
fine local details. Furthermore, they mostly learn a deterministic
partial-to-complete mapping, but overlook structural relations in man-made
objects. To tackle these challenges, this paper proposes a variational
framework, Variational Relational point Completion Network (VRCNet) with two
appealing properties: 1) Probabilistic Modeling. In particular, we propose a
dual-path architecture to enable principled probabilistic modeling across
partial and complete clouds. One path consumes complete point clouds for
reconstruction by learning a point VAE. The other path generates complete
shapes for partial point clouds, whose embedded distribution is guided by the
distribution obtained from the reconstruction path during training. 2)
Relational Enhancement. Specifically, we carefully design a point self-attention
kernel and a point selective kernel module to exploit relational point features,
which refines local shape details conditioned on the coarse completion. In
addition, we contribute multi-view partial point cloud datasets (MVP and MVP-40
dataset) containing over 200,000 high-quality scans, which render partial 3D
shapes from 26 uniformly distributed camera poses for each 3D CAD model.
Extensive experiments demonstrate that VRCNet outperforms state-of-the-art
methods on all standard point cloud completion benchmarks. Notably, VRCNet
shows great generalizability and robustness on real-world point cloud scans.
Moreover, VRCNet enables robust 3D classification for partial point clouds, which
substantially increases classification accuracy.
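The dual-path design described above lends itself to a compact illustration: a reconstruction path learns a point VAE on complete clouds, and during training a KL term pulls the completion path's latent distribution toward the reconstruction posterior. The PyTorch sketch below is only a minimal sketch under assumed components (toy shared-MLP encoder, fully connected decoder, Chamfer reconstruction loss; names such as PointEncoder and kl_between_gaussians are hypothetical) and is not VRCNet's actual architecture or objective.

# Minimal sketch (not the authors' code): dual-path latent guidance for completion.
import torch
import torch.nn as nn


class PointEncoder(nn.Module):
    """Shared-MLP encoder: (B, N, 3) points -> diagonal Gaussian latent parameters."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(3, 128), nn.ReLU(), nn.Linear(128, 256))
        self.to_mu = nn.Linear(256, latent_dim)
        self.to_logvar = nn.Linear(256, latent_dim)

    def forward(self, pts):
        feat = self.mlp(pts).max(dim=1).values          # permutation-invariant pooling
        return self.to_mu(feat), self.to_logvar(feat)


class PointDecoder(nn.Module):
    """Latent code -> coarse complete cloud with n_out points."""
    def __init__(self, latent_dim=128, n_out=1024):
        super().__init__()
        self.n_out = n_out
        self.mlp = nn.Sequential(nn.Linear(latent_dim, 512), nn.ReLU(),
                                 nn.Linear(512, n_out * 3))

    def forward(self, z):
        return self.mlp(z).view(-1, self.n_out, 3)


def reparameterize(mu, logvar):
    return mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)


def kl_between_gaussians(mu_q, logvar_q, mu_p, logvar_p):
    """KL(q || p) between diagonal Gaussians, averaged over the batch."""
    var_q, var_p = logvar_q.exp(), logvar_p.exp()
    kl = 0.5 * (logvar_p - logvar_q + (var_q + (mu_q - mu_p) ** 2) / var_p - 1.0)
    return kl.sum(dim=-1).mean()


def chamfer(a, b):
    """Symmetric Chamfer Distance between point sets a: (B, N, 3) and b: (B, M, 3)."""
    d = torch.cdist(a, b)                               # (B, N, M) pairwise distances
    return d.min(dim=2).values.mean() + d.min(dim=1).values.mean()


# One illustrative training step: the reconstruction path encodes the complete
# cloud (point VAE), the completion path encodes the partial cloud, and a KL
# term pulls the completion path's latent distribution toward the reconstruction
# posterior (detached here, a design choice made only for this sketch).
enc_rec, enc_cmp, dec = PointEncoder(), PointEncoder(), PointDecoder()
complete = torch.rand(4, 2048, 3)                       # toy complete clouds
partial = complete[:, :512, :]                          # toy partial observations

mu_r, lv_r = enc_rec(complete)
mu_c, lv_c = enc_cmp(partial)
rec = dec(reparameterize(mu_r, lv_r))                   # reconstruction branch
cmp_out = dec(reparameterize(mu_c, lv_c))               # completion branch

loss = chamfer(rec, complete) + chamfer(cmp_out, complete) \
     + kl_between_gaussians(mu_c, lv_c, mu_r.detach(), lv_r.detach())
loss.backward()

In the actual paper, the completion path additionally produces a coarse completion that the relational modules (point self-attention and selective kernels) refine into fine local details; that refinement stage is omitted from this sketch.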
Related papers
- Self-supervised 3D Point Cloud Completion via Multi-view Adversarial Learning [61.14132533712537]
We propose MAL-SPC, a framework that effectively leverages both object-level and category-specific geometric similarities to complete missing structures.
Our MAL-SPC does not require any 3D complete supervision and only necessitates a single partial point cloud for each object.
arXiv Detail & Related papers (2024-07-13T06:53:39Z)
- SVDFormer: Complementing Point Cloud via Self-view Augmentation and Self-structure Dual-generator [30.483163963846206]
We propose a novel network, SVDFormer, to tackle two specific challenges in point cloud completion.
We first design a Self-view Fusion Network that leverages multi-view depth image information to observe the incomplete self-shape.
We then introduce a refinement module, called Self-structure Dual-generator, in which we incorporate learned shape priors and geometric self-similarities for producing new points.
arXiv Detail & Related papers (2023-07-17T13:55:31Z)
- Point cloud completion on structured feature map with feedback network [28.710494879042002]
We propose FSNet, a feature structuring module that can adaptively aggregate point-wise features into a 2D structured feature map.
A 2D convolutional neural network is adopted to decode feature maps from FSNet into a coarse and complete point cloud.
A point cloud upsampling network is used to generate a dense point cloud from the partial input and the coarse intermediate output.
arXiv Detail & Related papers (2022-02-17T10:59:40Z)
- A Conditional Point Diffusion-Refinement Paradigm for 3D Point Cloud Completion [69.32451612060214]
Real-scanned 3D point clouds are often incomplete, and it is important to recover complete point clouds for downstream applications.
Most existing point cloud completion methods use the Chamfer Distance (CD) loss for training; a standard symmetric form of CD is given after this list.
We propose a novel Point Diffusion-Refinement (PDR) paradigm for point cloud completion.
arXiv Detail & Related papers (2021-12-07T06:59:06Z)
- CarveNet: Carving Point-Block for Complex 3D Shape Completion [27.65423395944538]
3D point cloud completion relies heavily on an accurate understanding of complex 3D shapes.
We propose a new network architecture, i.e., CarveNet, to complete complex 3D point clouds.
On the evaluated datasets, CarveNet outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-07-28T16:07:20Z)
- Variational Relational Point Completion Network [41.98957577398084]
Existing point cloud completion methods generate global shape skeletons and lack fine local details.
This paper proposes the Variational Relational point Completion network (VRCNet) with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2021-04-20T17:53:40Z)
- ParaNet: Deep Regular Representation for 3D Point Clouds [62.81379889095186]
ParaNet is a novel end-to-end deep learning framework for representing 3D point clouds.
It converts an irregular 3D point cloud into a regular 2D color image, named a point geometry image (PGI).
In contrast to conventional regular representation modalities based on multi-view projection and voxelization, the proposed representation is differentiable and reversible.
arXiv Detail & Related papers (2020-12-05T13:19:55Z)
- Cascaded Refinement Network for Point Cloud Completion [74.80746431691938]
We propose a cascaded refinement network together with a coarse-to-fine strategy to synthesize the detailed object shapes.
By considering the local details of the partial input together with the global shape information, we can preserve the existing details in the incomplete point set.
We also design a patch discriminator that guarantees every local area has the same pattern as the ground truth, in order to learn the complicated point distribution.
arXiv Detail & Related papers (2020-04-07T13:03:29Z)
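For reference on the CD loss mentioned in the PDR entry above, a commonly used symmetric, averaged form between point sets P and Q is the following; individual papers may use slight variants (e.g., non-squared distances):

$$
\mathrm{CD}(P, Q) = \frac{1}{|P|} \sum_{p \in P} \min_{q \in Q} \lVert p - q \rVert_2^2 \;+\; \frac{1}{|Q|} \sum_{q \in Q} \min_{p \in P} \lVert q - p \rVert_2^2
$$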