HPNet: Deep Primitive Segmentation Using Hybrid Representations
- URL: http://arxiv.org/abs/2105.10620v1
- Date: Sat, 22 May 2021 02:12:46 GMT
- Title: HPNet: Deep Primitive Segmentation Using Hybrid Representations
- Authors: Siming Yan, Zhenpei Yang, Chongyang Ma, Haibin Huang, Etienne Vouga,
Qixing Huang
- Abstract summary: HPNet is a novel deep-learning approach for segmenting a 3D shape represented as a point cloud into primitive patches.
Rather than relying on a single feature representation, HPNet leverages hybrid representations that combine one learned semantic descriptor, two spectral descriptors derived from predicted geometric parameters, and an adjacency matrix that encodes sharp edges.
- Score: 51.56523135057311
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper introduces HPNet, a novel deep-learning approach for segmenting a
3D shape represented as a point cloud into primitive patches. The key to deep
primitive segmentation is learning a feature representation that can separate
points of different primitives. Rather than relying on a single feature
representation, HPNet leverages hybrid representations that combine one learned
semantic descriptor, two spectral descriptors derived from predicted geometric
parameters, and an adjacency matrix that encodes sharp edges. Moreover,
instead of merely concatenating the descriptors, HPNet optimally combines
hybrid representations by learning combination weights. This weighting module
builds on the entropy of input features. The output primitive segmentation is
obtained from a mean-shift clustering module. Experimental results on benchmark
datasets ANSI and ABCParts show that HPNet leads to significant performance
gains over baseline approaches.
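
To make the pipeline above concrete, here is a minimal, self-contained sketch in Python (NumPy + scikit-learn) of the general idea: per-point descriptor blocks are fused with entropy-based weights, and the fused features are clustered with mean-shift. The specific weighting rule (lower entropy gives a larger weight), the function names, and the toy data are assumptions made for illustration; they are not HPNet's learned weighting module, its descriptors, or its training procedure.

```python
# Sketch only: entropy-weighted fusion of descriptor blocks + mean-shift clustering.
# The weighting heuristic below is an assumption for illustration, not HPNet's module.
import numpy as np
from sklearn.cluster import MeanShift

def entropy_weight(desc, n_bins=16, eps=1e-8):
    """Heuristic weight for one descriptor block: lower entropy -> larger weight."""
    # Min-max normalize each dimension, then histogram it to estimate its entropy.
    d = (desc - desc.min(0)) / (desc.max(0) - desc.min(0) + eps)
    ent = 0.0
    for j in range(d.shape[1]):
        p, _ = np.histogram(d[:, j], bins=n_bins, range=(0.0, 1.0))
        p = p / (p.sum() + eps)
        ent += -(p * np.log(p + eps)).sum()
    return 1.0 / (ent / d.shape[1] + eps)

def segment(descriptors, bandwidth=1.0):
    """descriptors: list of (N, d_i) arrays; returns per-point cluster labels."""
    weights = np.array([entropy_weight(d) for d in descriptors])
    weights = weights / weights.sum()
    fused = np.concatenate([w * d for w, d in zip(weights, descriptors)], axis=1)
    return MeanShift(bandwidth=bandwidth).fit_predict(fused)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy input: a discriminative "semantic" block (two well-separated primitives)
    # and an uninformative noise block that should receive a smaller weight.
    semantic = np.vstack([rng.normal(0.0, 0.1, (100, 4)), rng.normal(3.0, 0.1, (100, 4))])
    noise = rng.normal(0.0, 1.0, (200, 3))
    print("clusters found:", len(set(segment([semantic, noise]))))
```

The sketch only shows the structure of the computation: a descriptor block that does not help separate primitives is down-weighted before clustering, which is the role the abstract attributes to the entropy-based weighting module.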
Related papers
- BPNet: Bézier Primitive Segmentation on 3D Point Clouds [17.133617027574353]
BPNet is a novel end-to-end deep learning framework to learn Bézier primitive segmentation on 3D point clouds.
A joint optimization framework is proposed to learn Bézier primitive segmentation and geometric fitting simultaneously.
Experiments show superior performance over previous work in terms of segmentation, with a substantially faster inference speed.
arXiv Detail & Related papers (2023-07-08T16:46:01Z)
- Fitting and recognition of geometric primitives in segmented 3D point clouds using a localized voting procedure [1.8352113484137629]
We introduce a novel technique for processing point clouds that, through a voting procedure, is able to provide an initial estimate of the primitive parameters for each type.
By using these estimates we localize the search for the optimal solution in a dimensionally-reduced space, making it efficient to extend the Hough transform (HT) to more primitives than those generally found in the literature (a toy sketch of the voting idea follows the list below).
arXiv Detail & Related papers (2022-05-30T20:47:43Z)
- Two-Stream Graph Convolutional Network for Intra-oral Scanner Image Segmentation [133.02190910009384]
We propose a two-stream graph convolutional network (i.e., TSGCN) to handle inter-view confusion between different raw attributes.
Our TSGCN significantly outperforms state-of-the-art methods in 3D tooth (surface) segmentation.
arXiv Detail & Related papers (2022-04-19T10:41:09Z)
- Dynamic Convolution for 3D Point Cloud Instance Segmentation [146.7971476424351]
We propose an approach to instance segmentation from 3D point clouds based on dynamic convolution.
We gather homogeneous points that have identical semantic categories and close votes for the geometric centroids (a minimal sketch of this grouping step follows the list below).
The proposed approach is proposal-free, and instead exploits a convolution process that adapts to the spatial and semantic characteristics of each instance.
arXiv Detail & Related papers (2021-07-18T09:05:16Z)
- Learn to Learn Metric Space for Few-Shot Segmentation of 3D Shapes [17.217954254022573]
We introduce a meta-learning-based method for few-shot 3D shape segmentation where only a few labeled samples are provided for the unseen classes.
We demonstrate the superior performance of our proposed method on the ShapeNet part dataset under the few-shot scenario, compared with well-established baseline and state-of-the-art semi-supervised methods.
arXiv Detail & Related papers (2021-07-07T01:47:00Z)
- DyCo3D: Robust Instance Segmentation of 3D Point Clouds through Dynamic Convolution [136.7261709896713]
We propose a data-driven approach that generates the appropriate convolution kernels to apply in response to the nature of the instances.
The proposed method achieves promising results on both ScanNetV2 and S3DIS.
It also improves inference speed by more than 25% over the current state-of-the-art.
arXiv Detail & Related papers (2020-11-26T14:56:57Z)
- Class-wise Dynamic Graph Convolution for Semantic Segmentation [63.08061813253613]
We propose a class-wise dynamic graph convolution (CDGC) module to adaptively propagate information.
We also introduce the Class-wise Dynamic Graph Convolution Network (CDGCNet), which consists of two main parts: the CDGC module and a basic segmentation network.
We conduct extensive experiments on three popular semantic segmentation benchmarks including Cityscapes, PASCAL VOC 2012 and COCO Stuff.
arXiv Detail & Related papers (2020-07-19T15:26:50Z)
- PointGMM: a Neural GMM Network for Point Clouds [83.9404865744028]
Point clouds are a popular representation for 3D shapes, but they encode a particular sampling without accounting for shape priors or non-local information.
We present PointGMM, a neural network that learns to generate hierarchical Gaussian mixture models (hGMMs) which are characteristic of the shape class.
We show that as a generative model, PointGMM learns a meaningful latent space which enables generating consistent interpolations between existing shapes.
arXiv Detail & Related papers (2020-03-30T10:34:59Z)
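
As referenced in the localized-voting entry above, the following toy sketch illustrates Hough-style voting for primitive parameters: points vote for plane parameters (unit normal n and offset d with n·p = d) on a coarse grid, and the accumulator peak provides an initial estimate that a subsequent refinement could polish over a much smaller parameter range. The restriction to planes, the grid resolution, and all names are illustrative assumptions; the cited paper handles several primitive types with its own localized procedure.

```python
# Toy Hough-style voting for plane parameters; illustrative only, not the paper's method.
import numpy as np

def hough_plane(points, n_dirs=20, d_step=0.05):
    # Coarse grid of candidate unit normals (upper hemisphere is enough for planes).
    thetas = np.linspace(0.0, np.pi / 2, n_dirs)
    phis = np.linspace(0.0, 2 * np.pi, n_dirs, endpoint=False)
    normals = np.array([[np.sin(t) * np.cos(p), np.sin(t) * np.sin(p), np.cos(t)]
                        for t in thetas for p in phis])
    offsets = points @ normals.T                   # (N, n_candidates) signed offsets n·p
    bins = np.round(offsets / d_step).astype(int)  # quantize the offset axis
    best = None
    for j in range(normals.shape[0]):
        vals, counts = np.unique(bins[:, j], return_counts=True)
        k = counts.argmax()
        if best is None or counts[k] > best[0]:
            best = (counts[k], normals[j], vals[k] * d_step)
    votes, n, d = best
    return n, d, votes

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Noisy samples of the plane z = 0.5 plus uniform outliers.
    plane = np.column_stack([rng.uniform(-1, 1, (400, 2)), 0.5 + rng.normal(0, 0.01, 400)])
    outliers = rng.uniform(-1, 1, (100, 3))
    n, d, votes = hough_plane(np.vstack([plane, outliers]))
    print("normal ~", np.round(n, 2), "offset ~", round(d, 2), "votes:", votes)
```

The accumulator peak pins down the plane's orientation and offset coarsely; the point of the cited approach is that such estimates let the subsequent optimal fit be searched in a dimensionally-reduced neighborhood instead of the full parameter space.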
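
As referenced in the dynamic-convolution entry above, the following sketch shows only the grouping step those summaries describe: points that share a semantic label and whose centroid votes land close together are gathered into one candidate instance, which would then drive per-instance kernel generation. DBSCAN, the thresholds, and the toy data are stand-ins chosen for illustration, not the clustering used in those papers.

```python
# Sketch of grouping points by semantic class and centroid vote; illustrative only.
import numpy as np
from sklearn.cluster import DBSCAN

def group_by_votes(xyz, offsets, sem_labels, eps=0.2):
    """xyz, offsets: (N, 3); sem_labels: (N,) ints. Returns per-point instance ids (-1 = unassigned)."""
    votes = xyz + offsets                 # each point votes for its instance centroid
    instance = -np.ones(len(xyz), dtype=int)
    next_id = 0
    for cls in np.unique(sem_labels):
        mask = sem_labels == cls
        # Cluster the centroid votes of one semantic class; close votes -> same instance.
        labels = DBSCAN(eps=eps, min_samples=5).fit_predict(votes[mask])
        labels[labels >= 0] += next_id
        instance[mask] = labels
        next_id = max(next_id, instance.max() + 1)
    return instance

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    a = rng.normal([0.0, 0.0, 0.0], 0.05, (50, 3))
    b = rng.normal([2.0, 0.0, 0.0], 0.05, (50, 3))
    xyz = np.vstack([a, b])
    # Ideal centroid offsets and a single semantic class, just to exercise the grouping.
    offsets = np.vstack([a.mean(0) - a, b.mean(0) - b])
    ids = group_by_votes(xyz, offsets, np.ones(100, dtype=int))
    print("candidate instances:", len(set(ids[ids >= 0])))
```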