Learning Fine-Grained Segmentation of 3D Shapes without Part Labels
- URL: http://arxiv.org/abs/2103.13030v1
- Date: Wed, 24 Mar 2021 07:27:07 GMT
- Title: Learning Fine-Grained Segmentation of 3D Shapes without Part Labels
- Authors: Xiaogang Wang, Xun Sun, Xinyu Cao, Kai Xu, Bin Zhou
- Abstract summary: Learning-based 3D shape segmentation is usually formulated as a semantic labeling problem, assuming that all parts of training shapes are annotated with a given set of tags.
We propose deep clustering to learn part priors from a shape dataset with fine-grained segmentation but no part labels.
Our method is evaluated with a challenging benchmark of fine-grained segmentation, showing state-of-the-art performance.
- Score: 29.837938445219528
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning-based 3D shape segmentation is usually formulated as a semantic
labeling problem, assuming that all parts of training shapes are annotated with
a given set of tags. This assumption, however, is impractical for learning
fine-grained segmentation. Although most off-the-shelf CAD models are, by
construction, composed of fine-grained parts, they usually lack semantic tags,
and labeling those fine-grained parts is extremely tedious. We approach the
problem with deep clustering, where the key idea is to learn part priors from a
shape dataset with fine-grained segmentation but no part labels. Given point
sampled 3D shapes, we model the clustering priors of points with a similarity
matrix and achieve part segmentation through minimizing a novel low rank loss.
To handle highly densely sampled point sets, we adopt a divide-and-conquer
strategy. We partition the large point set into a number of blocks. Each block
is segmented using a deep-clustering-based part prior network trained in a
category-agnostic manner. We then train a graph convolution network to merge
the segments of all blocks to form the final segmentation result. Our method is
evaluated with a challenging benchmark of fine-grained segmentation, showing
state-of-the-art performance.
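The abstract does not reproduce the exact form of the low-rank loss, but the core idea — a pairwise similarity matrix whose rank reflects the number of parts — can be sketched in a few lines of NumPy. The function names and the truncated nuclear-norm surrogate below are illustrative assumptions, not the authors' actual formulation:

```python
import numpy as np

def similarity_matrix(features):
    # Cosine similarity between per-point feature vectors (N x D -> N x N).
    norms = np.linalg.norm(features, axis=1, keepdims=True)
    unit = features / np.clip(norms, 1e-8, None)
    return unit @ unit.T

def truncated_nuclear_loss(sim, k):
    # Sum of singular values beyond the top k: zero when the similarity
    # matrix has rank <= k, i.e. the points fall into at most k parts.
    s = np.linalg.svd(sim, compute_uv=False)
    return s[k:].sum()
```

In training, such a penalty would be minimized over learned per-point features so that the similarity matrix collapses toward a block structure, one block per part; the paper's divide-and-conquer step would then apply this within each block of a large point set before a graph convolution network merges block-level segments.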
Related papers
- Bayesian Self-Training for Semi-Supervised 3D Segmentation [59.544558398992386]
3D segmentation is a core problem in computer vision, but densely labeling 3D point clouds for fully-supervised training remains too labor-intensive and expensive.
Semi-supervised training provides a more practical alternative, where only a small set of labeled data is given, accompanied by a larger unlabeled set.
arXiv Detail & Related papers (2024-09-12T14:54:31Z) - Instance Consistency Regularization for Semi-Supervised 3D Instance Segmentation [50.51125319374404]
We propose a novel self-training network InsTeacher3D to explore and exploit pure instance knowledge from unlabeled data.
Experimental results on multiple large-scale datasets show that the InsTeacher3D significantly outperforms prior state-of-the-art semi-supervised approaches.
arXiv Detail & Related papers (2024-06-24T16:35:58Z) - Generalized Few-Shot Point Cloud Segmentation Via Geometric Words [54.32239996417363]
Few-shot point cloud segmentation algorithms learn to adapt to new classes at the cost of segmentation accuracy on the base classes.
We present the first attempt at a more practical paradigm of generalized few-shot point cloud segmentation.
We propose geometric words to represent geometric components shared between the base and novel classes, and incorporate them into a novel geometry-aware semantic representation.
arXiv Detail & Related papers (2023-09-20T11:24:33Z) - Semi-supervised 3D shape segmentation with multilevel consistency and
part substitution [21.075426681857024]
We propose an effective semi-supervised method for learning 3D segmentations from a few labeled 3D shapes and a large amount of unlabeled 3D data.
For the unlabeled data, we present a novel multilevel consistency loss to enforce consistency of network predictions between perturbed copies of a 3D shape.
For the labeled data, we develop a simple yet effective part substitution scheme to augment the labeled 3D shapes with more structural variations to enhance training.
arXiv Detail & Related papers (2022-04-19T11:48:24Z) - iSeg3D: An Interactive 3D Shape Segmentation Tool [48.784624011210475]
We propose an effective annotation tool, named iSeg3D, for 3D shapes.
We observe that most objects can be considered compositions of a finite set of primitive shapes.
We train the iSeg3D model on our primitive-composed shape data to learn geometric prior knowledge in a self-supervised manner.
arXiv Detail & Related papers (2021-12-24T08:15:52Z) - 3D Compositional Zero-shot Learning with DeCompositional Consensus [102.7571947144639]
We argue that part knowledge should be composable beyond the observed object classes.
We present 3D Compositional Zero-shot Learning as a problem of part generalization from seen to unseen object classes.
arXiv Detail & Related papers (2021-11-29T16:34:53Z) - Robust 3D Scene Segmentation through Hierarchical and Learnable
Part-Fusion [9.275156524109438]
3D semantic segmentation is a fundamental building block for several scene understanding applications such as autonomous driving, robotics and AR/VR.
Previous methods have utilized hierarchical, iterative schemes to fuse semantic and instance information, but their context fusion is not learnable.
This paper presents Segment-Fusion, a novel attention-based method for hierarchical fusion of semantic and instance information.
arXiv Detail & Related papers (2021-11-16T13:14:47Z) - SCSS-Net: Superpoint Constrained Semi-supervised Segmentation Network
for 3D Indoor Scenes [6.3364439467281315]
We propose a superpoint-constrained semi-supervised segmentation network for 3D point clouds, named SCSS-Net.
Specifically, we use pseudo labels predicted from unlabeled point clouds for self-training, and combine superpoints produced by geometry-based and color-based region-growing algorithms to correct or remove low-confidence pseudo labels.
arXiv Detail & Related papers (2021-07-08T04:43:21Z) - Few-shot 3D Point Cloud Semantic Segmentation [138.80825169240302]
We propose a novel attention-aware multi-prototype transductive few-shot point cloud semantic segmentation method.
Our proposed method shows significant and consistent improvements compared to baselines in different few-shot point cloud semantic segmentation settings.
arXiv Detail & Related papers (2020-06-22T08:05:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.