Point Set Voting for Partial Point Cloud Analysis
- URL: http://arxiv.org/abs/2007.04537v2
- Date: Sat, 2 Jan 2021 17:37:19 GMT
- Title: Point Set Voting for Partial Point Cloud Analysis
- Authors: Junming Zhang, Weijia Chen, Yuping Wang, Ram Vasudevan, Matthew
Johnson-Roberson
- Abstract summary: Techniques for point cloud classification and segmentation have in recent years achieved incredible performance, driven in part by leveraging large synthetic datasets.
This paper proposes a general model for partial point cloud analysis wherein the latent feature encoding a complete point cloud is inferred by applying a local point set voting strategy.
- Score: 26.31029112502835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The continual improvement of 3D sensors has driven the development of
algorithms to perform point cloud analysis. In fact, techniques for point cloud
classification and segmentation have in recent years achieved incredible
performance driven in part by leveraging large synthetic datasets.
Unfortunately, these same state-of-the-art approaches perform poorly when
applied to incomplete point clouds. This limitation of existing algorithms is
particularly concerning since point clouds generated by 3D sensors in the real
world are usually incomplete due to perspective view or occlusion by other
objects. This paper proposes a general model for partial point cloud analysis
wherein the latent feature encoding a complete point cloud is inferred by
applying a local point set voting strategy. In particular, each local point set
constructs a vote that corresponds to a distribution in the latent space, and
the optimal latent feature is the one with the highest probability. This
approach ensures that any subsequent point cloud analysis is robust to partial
observation while simultaneously guaranteeing that the proposed model is able
to output multiple possible results. This paper illustrates that this proposed
method achieves state-of-the-art performance on shape classification, part
segmentation and point cloud completion.
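The abstract describes each local point set casting a vote that corresponds to a distribution in latent space, with the optimal latent feature being the one of highest probability. As a minimal sketch of this idea (not the authors' implementation), one can model each vote as an isotropic Gaussian over the latent space; the mode of the product of Gaussian votes is then the precision-weighted mean of the vote means, so confident votes dominate. All function and variable names below are illustrative assumptions.

```python
import numpy as np

def fuse_votes(means, variances):
    """Fuse per-point-set latent votes (Gaussians) into one latent feature.

    means:     (n_votes, latent_dim) array of vote means
    variances: (n_votes, latent_dim) array of per-dimension vote variances
    Returns the mode of the product of the Gaussian votes, i.e. the
    precision-weighted mean of the individual vote means.
    """
    precisions = 1.0 / variances  # inverse variance = precision (confidence)
    fused = (precisions * means).sum(axis=0) / precisions.sum(axis=0)
    return fused

# Three hypothetical votes in a 2-D latent space; the first vote is
# much more confident (smaller variance) than the other two.
means = np.array([[1.0, 0.0],
                  [0.8, 0.2],
                  [1.2, -0.2]])
variances = np.array([[0.1, 0.1],
                      [0.5, 0.5],
                      [0.5, 0.5]])
z = fuse_votes(means, variances)  # → [1.0, 0.0], dominated by vote 1
```

Under this Gaussian assumption, fusing votes multiplicatively is what makes the inferred latent feature robust to how many local point sets are observed: dropping votes only widens the posterior rather than biasing it, which matches the paper's claim of robustness to partial observation.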
Related papers
- Zero-shot Point Cloud Completion Via 2D Priors [52.72867922938023]
3D point cloud completion is designed to recover complete shapes from partially observed point clouds.
We propose a zero-shot framework aimed at completing partially observed point clouds across any unseen categories.
arXiv Detail & Related papers (2024-04-10T08:02:17Z) - PIVOT-Net: Heterogeneous Point-Voxel-Tree-based Framework for Point
Cloud Compression [8.778300313732027]
We propose a heterogeneous point cloud compression (PCC) framework.
We unify typical point cloud representations -- point-based, voxel-based, and tree-based representations -- and their associated backbones.
We augment the framework with a proposed context-aware upsampling for decoding and an enhanced voxel transformer for feature aggregation.
arXiv Detail & Related papers (2024-02-11T16:57:08Z) - Point Cloud Pre-training with Diffusion Models [62.12279263217138]
We propose a novel pre-training method called Point cloud Diffusion pre-training (PointDif)
PointDif achieves substantial improvement across various real-world datasets for diverse downstream tasks such as classification, segmentation and detection.
arXiv Detail & Related papers (2023-11-25T08:10:05Z) - Variational Relational Point Completion Network for Robust 3D
Classification [59.80993960827833]
Existing point cloud completion methods tend to generate global shape skeletons and hence lack fine local details.
This paper proposes a variational framework, Variational Relational point Completion Network (VRCNet), with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2023-04-18T17:03:20Z) - HybridFusion: LiDAR and Vision Cross-Source Point Cloud Fusion [15.94976936555104]
We propose a cross-source point cloud fusion algorithm called HybridFusion.
It can register cross-source dense point clouds from different viewing angles in large outdoor scenes.
The proposed approach is evaluated comprehensively through qualitative and quantitative experiments.
arXiv Detail & Related papers (2023-04-10T10:54:54Z) - PointAttN: You Only Need Attention for Point Cloud Completion [89.88766317412052]
Point cloud completion refers to completing 3D shapes from partial 3D point clouds.
We propose a novel neural network that processes point clouds in a per-point manner to eliminate kNNs.
The proposed framework, namely PointAttN, is simple, neat, and effective, and can precisely capture the structural information of 3D shapes.
arXiv Detail & Related papers (2022-03-16T09:20:01Z) - Variational Relational Point Completion Network [41.98957577398084]
Existing point cloud completion methods generate global shape skeletons and lack fine local details.
This paper proposes the Variational Relational point Completion network (VRCNet) with two appealing properties.
VRCNet shows great generalizability and robustness on real-world point cloud scans.
arXiv Detail & Related papers (2021-04-20T17:53:40Z) - 3D Object Classification on Partial Point Clouds: A Practical
Perspective [91.81377258830703]
A point cloud is a popular shape representation adopted in 3D object classification.
This paper introduces a practical setting for classifying partial point clouds of object instances under arbitrary poses.
It proposes a novel algorithm that operates in an alignment-classification manner.
arXiv Detail & Related papers (2020-12-18T04:00:56Z) - Point Cloud Completion by Skip-attention Network with Hierarchical
Folding [61.59710288271434]
We propose Skip-Attention Network (SA-Net) for 3D point cloud completion.
First, we propose a skip-attention mechanism to effectively exploit the local structure details of incomplete point clouds.
Second, in order to fully utilize the selected geometric information encoded by skip-attention mechanism at different resolutions, we propose a novel structure-preserving decoder.
arXiv Detail & Related papers (2020-05-08T06:23:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.