BPNet: Bézier Primitive Segmentation on 3D Point Clouds
- URL: http://arxiv.org/abs/2307.04013v2
- Date: Sun, 15 Oct 2023 08:08:01 GMT
- Title: BPNet: Bézier Primitive Segmentation on 3D Point Clouds
- Authors: Rao Fu, Cheng Wen, Qian Li, Xiao Xiao, Pierre Alliez
- Abstract summary: BPNet is a novel end-to-end deep learning framework to learn Bézier primitive segmentation on 3D point clouds.
A joint optimization framework is proposed to learn Bézier primitive segmentation and geometric fitting simultaneously.
Experiments show superior performance over previous work in terms of segmentation, with a substantially faster inference speed.
- Score: 17.133617027574353
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper proposes BPNet, a novel end-to-end deep learning framework to
learn Bézier primitive segmentation on 3D point clouds. Existing works treat
different primitive types separately, thus limiting them to finite shape
categories. To address this issue, we seek a generalized primitive segmentation
on point clouds. Taking inspiration from Bézier decomposition on NURBS models,
we transfer it to guide point cloud segmentation, casting off primitive types.
A joint optimization framework is proposed to learn Bézier primitive
segmentation and geometric fitting simultaneously on a cascaded architecture.
Specifically, we introduce a soft voting regularizer to improve primitive
segmentation and propose an auto-weight embedding module to cluster point
features, making the network more robust and generic. We also introduce a
reconstruction module where we successfully process multiple CAD models with
different primitives simultaneously. We conducted extensive experiments on the
synthetic ABC dataset and real-scan datasets to validate and compare our
approach with different baseline methods. Experiments show superior performance
over previous work in terms of segmentation, with a substantially faster
inference speed.
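For readers less familiar with the geometry involved, the sketch below illustrates the kind of "geometric fitting" the abstract refers to: evaluating a tensor-product Bézier patch from its control points, and recovering control points for a segmented cluster of points by linear least squares. It is a minimal NumPy illustration under the assumption that per-point (u, v) parameters are already known; BPNet's actual cascaded module learns segmentation and fitting jointly and may differ in every detail.

```python
import numpy as np
from math import comb


def bernstein_basis(degree, t):
    """Bernstein basis values B_i^degree(t) for all i, at parameters t of shape [k]."""
    i = np.arange(degree + 1)
    coeffs = np.array([comb(degree, j) for j in i])
    return coeffs * (t[:, None] ** i) * ((1.0 - t[:, None]) ** (degree - i))  # [k, degree+1]


def evaluate_bezier_patch(ctrl, u, v):
    """Evaluate a tensor-product Bézier patch.

    ctrl: control points of shape [m+1, n+1, 3]; u, v: parameters in [0, 1] of shape [k].
    Returns points on the patch, shape [k, 3].
    """
    m, n = ctrl.shape[0] - 1, ctrl.shape[1] - 1
    Bu = bernstein_basis(m, u)  # [k, m+1]
    Bv = bernstein_basis(n, v)  # [k, n+1]
    return np.einsum('ki,kj,ijc->kc', Bu, Bv, ctrl)


def fit_bezier_patch(points, u, v, degree=(3, 3)):
    """Least-squares fit of control points to a segmented point set,
    assuming the per-point parameters (u, v) are given."""
    m, n = degree
    A = np.einsum('ki,kj->kij', bernstein_basis(m, u), bernstein_basis(n, v))
    A = A.reshape(len(points), -1)                     # [k, (m+1)(n+1)]
    ctrl, *_ = np.linalg.lstsq(A, points, rcond=None)  # [(m+1)(n+1), 3]
    return ctrl.reshape(m + 1, n + 1, 3)


# Toy usage: recover a known degree-(3, 3) patch from sampled points.
rng = np.random.default_rng(0)
true_ctrl = rng.normal(size=(4, 4, 3))
u, v = rng.random(500), rng.random(500)
pts = evaluate_bezier_patch(true_ctrl, u, v)
est_ctrl = fit_bezier_patch(pts, u, v, degree=(3, 3))
print(np.abs(est_ctrl - true_ctrl).max())  # ~0 up to numerical error
```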
Related papers
- Split-and-Fit: Learning B-Reps via Structure-Aware Voronoi Partitioning [50.684254969269546]
We introduce a novel method for acquiring boundary representations (B-Reps) of 3D CAD models.
We apply a spatial partitioning to derive a single primitive within each partition.
We show that our network, coined NVD-Net for neural Voronoi diagrams, can effectively learn Voronoi partitions for CAD models from training data.
arXiv Detail & Related papers (2024-06-07T21:07:49Z)
- Rethinking Few-shot 3D Point Cloud Semantic Segmentation [62.80639841429669]
This paper revisits few-shot 3D point cloud semantic segmentation (FS-PCS).
We focus on two significant issues in the state-of-the-art: foreground leakage and sparse point distribution.
To address these issues, we introduce a standardized FS-PCS setting, upon which a new benchmark is built.
arXiv Detail & Related papers (2024-03-01T15:14:47Z)
- Generalized Few-Shot Point Cloud Segmentation Via Geometric Words [54.32239996417363]
Few-shot point cloud segmentation algorithms learn to adapt to new classes at the sacrifice of segmentation accuracy for the base classes.
We present the first attempt at a more practical paradigm of generalized few-shot point cloud segmentation.
We propose the geometric words to represent geometric components shared between the base and novel classes, and incorporate them into a novel geometric-aware semantic representation.
arXiv Detail & Related papers (2023-09-20T11:24:33Z)
- SurFit: Learning to Fit Surfaces Improves Few Shot Learning on Point Clouds [48.61222927399794]
SurFit is a simple approach for label efficient learning of 3D shape segmentation networks.
It is based on a self-supervised task of decomposing the surface of a 3D shape into geometric primitives.
arXiv Detail & Related papers (2021-12-27T23:55:36Z)
- LatticeNet: Fast Spatio-Temporal Point Cloud Segmentation Using Permutohedral Lattices [27.048998326468688]
Deep convolutional neural networks (CNNs) have shown outstanding performance in the task of semantically segmenting images.
Here, we propose LatticeNet, a novel approach for 3D semantic segmentation, which takes raw point clouds as input.
We present results of 3D segmentation on multiple datasets where our method achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-08-09T10:17:27Z)
- Learn to Learn Metric Space for Few-Shot Segmentation of 3D Shapes [17.217954254022573]
We introduce a meta-learning-based method for few-shot 3D shape segmentation where only a few labeled samples are provided for the unseen classes.
We demonstrate the superior performance of our proposed method on the ShapeNet part dataset under the few-shot scenario, compared with well-established baseline and state-of-the-art semi-supervised methods.
arXiv Detail & Related papers (2021-07-07T01:47:00Z)
- Learning Semantic Segmentation of Large-Scale Point Clouds with Random Sampling [52.464516118826765]
We introduce RandLA-Net, an efficient and lightweight neural architecture to infer per-point semantics for large-scale point clouds.
The key to our approach is to use random point sampling instead of more complex point selection approaches.
Our RandLA-Net can process 1 million points in a single pass up to 200x faster than existing approaches.
arXiv Detail & Related papers (2021-07-06T05:08:34Z)
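The speed advantage claimed above comes largely from replacing heavier point selection schemes with plain random subsampling. As a rough, generic comparison (not RandLA-Net's actual implementation, and with names and shapes assumed for the example), the sketch below contrasts uniform random sampling, which is O(N), with farthest point sampling, which costs O(N·k) and dominates runtime on large clouds.

```python
import numpy as np


def random_sample(points, k, rng=None):
    """Uniform random subsampling: O(N), the cheap strategy favored above."""
    rng = rng or np.random.default_rng()
    idx = rng.choice(len(points), size=k, replace=False)
    return points[idx]


def farthest_point_sample(points, k):
    """Farthest point sampling: better spatial coverage, but O(N*k)."""
    n = len(points)
    selected = np.zeros(k, dtype=int)       # indices of chosen points (starts from point 0)
    dist = np.full(n, np.inf)               # distance to the nearest chosen point so far
    for i in range(1, k):
        d = np.linalg.norm(points - points[selected[i - 1]], axis=1)
        dist = np.minimum(dist, d)
        selected[i] = int(np.argmax(dist))  # pick the point farthest from the chosen set
    return points[selected]


# Toy usage: subsample 1,000 of 100,000 random points with each strategy.
cloud = np.random.default_rng(1).random((100_000, 3))
sub_random = random_sample(cloud, 1_000)
sub_fps = farthest_point_sample(cloud, 1_000)
```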
- HPNet: Deep Primitive Segmentation Using Hybrid Representations [51.56523135057311]
HPNet is a novel deep-learning approach for segmenting a 3D shape represented as a point cloud into primitive patches.
Unlike approaches that use a single feature representation, HPNet uses hybrid representations that combine one learned semantic descriptor, two spectral descriptors derived from predicted parameters, as well as an adjacency matrix that encodes sharp edges.
arXiv Detail & Related papers (2021-05-22T02:12:46Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.