FPCC: Fast Point Cloud Clustering for Instance Segmentation
- URL: http://arxiv.org/abs/2012.14618v3
- Date: Thu, 4 Mar 2021 10:01:32 GMT
- Title: FPCC: Fast Point Cloud Clustering for Instance Segmentation
- Authors: Yajun Xu, Shogo Arai, Diyi Liu, Fangzhou Lin, Kazuhiro Kosuge
- Abstract summary: There has been little research on 3D point cloud instance segmentation of bin-picking scenes.
We propose a network (FPCC-Net) that infers feature centers of each instance and then clusters the remaining points.
It is shown that FPCC-Net improves average precision (AP) by about 40% and can process about 60,000 points in about 0.8 s.
- Score: 4.007351600492542
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Instance segmentation is an important pre-processing task in numerous
real-world applications, such as robotics, autonomous vehicles, and
human-computer interaction. However, there has been little research on 3D point
cloud instance segmentation of bin-picking scenes in which multiple objects of
the same class are stacked together. Compared with the rapid development of
deep learning for two-dimensional (2D) image tasks, deep learning-based 3D
point cloud segmentation still has a lot of room for development. In such a
situation, distinguishing a large number of occluded objects of the same class
is a highly challenging problem. In a usual bin-picking scene, an object model
is known and the number of object types is one. Thus, the semantic information
can be ignored; instead, the focus is put on the segmentation of instances.
Based on this task requirement, we propose a network (FPCC-Net) that infers
feature centers of each instance and then clusters the remaining points to the
closest feature center in feature embedding space. FPCC-Net includes two
subnets, one for inferring the feature centers for clustering and the other for
describing features of each point. The proposed method is compared with
existing 3D point cloud and 2D segmentation methods in some bin-picking scenes.
It is shown that FPCC-Net improves average precision (AP) by about 40% compared
with SGPN and can process about 60,000 points in about 0.8 s.
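The clustering step described in the abstract — inferring feature centers and assigning the remaining points to the closest center in feature embedding space — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, the greedy score-based center selection, and the fixed instance count are assumptions for the sake of a runnable example, and the per-point embeddings and center scores are taken as given by some backbone network.

```python
import numpy as np

def cluster_to_feature_centers(embeddings, center_scores, num_instances):
    """Assign each point to its nearest feature center in embedding space.

    embeddings    : (N, D) per-point feature vectors (assumed given by a backbone)
    center_scores : (N,) predicted likelihood of each point being an instance's
                    feature center
    num_instances : number of instances to extract (an illustrative simplification
                    of the paper's center-selection step)
    """
    # Greedily pick the highest-scoring points as feature centers.
    center_idx = np.argsort(center_scores)[::-1][:num_instances]
    centers = embeddings[center_idx]  # (K, D)

    # Cluster the remaining points to the closest center in feature space.
    dists = np.linalg.norm(embeddings[:, None, :] - centers[None, :, :], axis=2)
    labels = np.argmin(dists, axis=1)  # instance id per point
    return labels, center_idx
```

With well-separated embeddings and one high-scoring point per instance, the nearest-center assignment recovers the instances directly; the paper's actual center inference and distance measure may differ.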
Related papers
- Open3DIS: Open-Vocabulary 3D Instance Segmentation with 2D Mask Guidance [49.14140194332482]
We introduce Open3DIS, a novel solution designed to tackle the problem of Open-Vocabulary Instance Segmentation within 3D scenes.
Objects within 3D environments exhibit diverse shapes, scales, and colors, making precise instance-level identification a challenging task.
arXiv Detail & Related papers (2023-12-17T10:07:03Z)
- Clustering based Point Cloud Representation Learning for 3D Analysis [80.88995099442374]
We propose a clustering based supervised learning scheme for point cloud analysis.
Unlike the current de facto scene-wise training paradigm, our algorithm conducts within-class clustering in the point embedding space.
Our algorithm shows notable improvements on famous point cloud segmentation datasets.
arXiv Detail & Related papers (2023-07-27T03:42:12Z)
- Prototype Adaption and Projection for Few- and Zero-shot 3D Point Cloud Semantic Segmentation [30.18333233940194]
We address the challenging task of few-shot and zero-shot 3D point cloud semantic segmentation.
Our proposed method surpasses state-of-the-art algorithms by a considerable 7.90% and 14.82% under the 2-way 1-shot setting on S3DIS and ScanNet benchmarks, respectively.
arXiv Detail & Related papers (2023-05-23T17:58:05Z)
- FreePoint: Unsupervised Point Cloud Instance Segmentation [72.64540130803687]
We propose FreePoint for the underexplored task of unsupervised class-agnostic instance segmentation on point clouds.
We represent point features by combining coordinates, colors, and self-supervised deep features.
Based on the point features, we segment point clouds into coarse instance masks as pseudo labels, which are used to train a point cloud instance segmentation model.
arXiv Detail & Related papers (2023-05-11T16:56:26Z)
- Divide and Conquer: 3D Point Cloud Instance Segmentation With Point-Wise Binarization [16.662238192665615]
We propose a novel divide-and-conquer strategy named PBNet for segmenting point clouds.
Our binary clustering divides offset instance points into two categories: high and low density points.
PBNet ranks first on the ScanNetV2 official benchmark challenge, achieving the highest mAP.
arXiv Detail & Related papers (2022-07-22T17:19:00Z)
- Instance Segmentation in 3D Scenes using Semantic Superpoint Tree Networks [64.27814530457042]
We propose an end-to-end solution of Semantic Superpoint Tree Network (SSTNet) for proposing object instances from scene points.
Key in SSTNet is an intermediate, semantic superpoint tree (SST), which is constructed based on the learned semantic features of superpoints.
SSTNet ranks top on the ScanNet (V2) leaderboard, with mAP 2% higher than the second-best method.
arXiv Detail & Related papers (2021-08-17T07:25:14Z)
- LRGNet: Learnable Region Growing for Class-Agnostic Point Cloud Segmentation [19.915593390338337]
This research proposes a learnable region growing method for class-agnostic point cloud segmentation.
The proposed method is able to segment any class of objects using a single deep neural network without any assumptions about their shapes and sizes.
arXiv Detail & Related papers (2021-03-16T15:58:01Z)
- Few-shot 3D Point Cloud Semantic Segmentation [138.80825169240302]
We propose a novel attention-aware multi-prototype transductive few-shot point cloud semantic segmentation method.
Our proposed method shows significant and consistent improvements compared to baselines in different few-shot point cloud semantic segmentation settings.
arXiv Detail & Related papers (2020-06-22T08:05:25Z)
- PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation [111.7241018610573]
We present PointGroup, a new end-to-end bottom-up architecture for instance segmentation.
We design a two-branch network to extract point features and predict semantic labels and offsets, for shifting each point towards its respective instance centroid.
A clustering component is followed to utilize both the original and offset-shifted point coordinate sets, taking advantage of their complementary strength.
We conduct extensive experiments on two challenging datasets, ScanNet v2 and S3DIS, on which our method achieves the highest performance, 63.6% and 64.0%, compared to 54.9% and 54.4% achieved by the former best method.
arXiv Detail & Related papers (2020-04-03T16:26:37Z)
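The PointGroup summary above describes predicting per-point offsets that shift each point toward its instance centroid, followed by a clustering component. A minimal sketch of that idea, with the paper's two-branch network and dual-set clustering replaced by given offsets and a simple radius-based grouping (the function name and radius are illustrative assumptions):

```python
import numpy as np

def offset_shift_grouping(coords, offsets, radius=0.3):
    """Group points after shifting them by predicted per-point offsets.

    coords  : (N, 3) original point coordinates
    offsets : (N, 3) predicted vectors pointing toward each instance centroid
              (assumed given; PointGroup learns these with a dedicated branch)
    radius  : grouping radius in the shifted space (illustrative value)
    """
    shifted = coords + offsets
    labels = -np.ones(len(coords), dtype=int)  # -1 means not yet grouped
    current = 0
    for i in range(len(coords)):
        if labels[i] != -1:
            continue
        # Flood-fill grouping of unlabeled points within `radius` of the seed.
        labels[i] = current
        queue = [i]
        while queue:
            j = queue.pop()
            near = np.where((labels == -1) &
                            (np.linalg.norm(shifted - shifted[j], axis=1) < radius))[0]
            labels[near] = current
            queue.extend(near.tolist())
        current += 1
    return labels
```

Accurate offsets collapse each instance's points onto its centroid, so even this naive grouping separates the instances; PointGroup additionally clusters the original coordinates and merges both results.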
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.