Divide and Conquer: 3D Point Cloud Instance Segmentation With Point-Wise Binarization
- URL: http://arxiv.org/abs/2207.11209v4
- Date: Thu, 23 Nov 2023 12:40:56 GMT
- Title: Divide and Conquer: 3D Point Cloud Instance Segmentation With Point-Wise Binarization
- Authors: Weiguang Zhao, Yuyao Yan, Chaolong Yang, Jianan Ye, Xi Yang, Kaizhu Huang
- Abstract summary: We propose a novel divide-and-conquer strategy named PBNet for segmenting point clouds.
Our binary clustering divides offset instance points into two categories: high and low density points.
PBNet ranks first on the ScanNetV2 official benchmark challenge, achieving the highest mAP.
- Score: 16.662238192665615
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Instance segmentation on point clouds is crucially important for 3D scene
understanding. Most SOTAs adopt distance clustering, which is typically
effective but does not perform well in segmenting adjacent objects with the
same semantic label (especially when they share neighboring points). Due to the
uneven distribution of offset points, these existing methods can hardly cluster
all instance points. To this end, we design a novel divide-and-conquer strategy
named PBNet that binarizes each point and clusters them separately to segment
instances. Our binary clustering divides offset instance points into two
categories: high and low density points (HPs vs. LPs). Adjacent objects can be
clearly separated by removing LPs, and then be completed and refined by
assigning LPs via a neighbor voting method. To suppress potential
over-segmentation, we propose to construct local scenes with the weight mask
for each instance. As a plug-in, the proposed binary clustering can replace
traditional distance clustering and lead to consistent performance gains on
many mainstream baselines. A series of experiments on ScanNetV2 and S3DIS
datasets indicate the superiority of our model. In particular, PBNet ranks
first on the ScanNetV2 official benchmark challenge, achieving the highest mAP.
Code will be available publicly at https://github.com/weiguangzhao/PBNet.
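To make the binary-clustering idea above concrete, here is a minimal sketch assuming the input is the set of offset-shifted points: local density splits points into HPs and LPs, HPs are grouped first (which keeps adjacent instances apart), and LPs are then attached by a majority vote over their nearest HPs. The radius, density threshold, and vote count below are illustrative placeholders, not PBNet's actual hyperparameters.

```python
import numpy as np
from scipy.spatial import cKDTree

def binary_cluster(shifted_pts, radius=0.05, density_thresh=10, k_vote=5):
    """Toy density-based binarization + neighbor voting (illustrative values)."""
    tree = cKDTree(shifted_pts)
    neighbours = tree.query_ball_point(shifted_pts, radius)
    density = np.array([len(n) for n in neighbours])
    is_hp = density >= density_thresh              # high- vs low-density split

    # 1) Cluster HPs only: connected components of the radius graph.
    #    Dropping LPs here is what keeps adjacent instances separated.
    labels = np.full(len(shifted_pts), -1, dtype=int)
    current = 0
    for seed in np.where(is_hp)[0]:
        if labels[seed] != -1:
            continue
        stack = [seed]
        labels[seed] = current
        while stack:
            p = stack.pop()
            for q in neighbours[p]:
                if is_hp[q] and labels[q] == -1:
                    labels[q] = current
                    stack.append(q)
        current += 1

    # 2) Complete the instances: each LP takes the majority label of its
    #    k nearest HPs (a simple stand-in for neighbor voting).
    hp_idx = np.where(is_hp)[0]
    lp_idx = np.where(~is_hp)[0]
    if len(hp_idx) and len(lp_idx):
        hp_tree = cKDTree(shifted_pts[hp_idx])
        _, nn = hp_tree.query(shifted_pts[lp_idx], k=min(k_vote, len(hp_idx)))
        nn = nn.reshape(len(lp_idx), -1)
        for i, row in zip(lp_idx, nn):
            labels[i] = np.bincount(labels[hp_idx[row]]).argmax()
    return labels
```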
Related papers
- FreePoint: Unsupervised Point Cloud Instance Segmentation [72.64540130803687]
We propose FreePoint for the underexplored task of unsupervised, class-agnostic instance segmentation on point clouds.
We represent point features by combining coordinates, colors, and self-supervised deep features.
Based on the point features, we segment point clouds into coarse instance masks as pseudo labels, which are used to train a point cloud instance segmentation model.
arXiv Detail & Related papers (2023-05-11T16:56:26Z)
- ISBNet: a 3D Point Cloud Instance Segmentation Network with Instance-aware Sampling and Box-aware Dynamic Convolution [14.88505076974645]
ISBNet is a novel method that represents instances as kernels and decodes instance masks via dynamic convolution.
We set new state-of-the-art results on ScanNetV2 (55.9), S3DIS (60.8), and STPLS3D (49.2) in terms of AP, while retaining fast inference time (237ms per scene on ScanNetV2).
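The kernel-plus-dynamic-convolution decoding can be pictured with a tiny sketch; here the "convolution" is reduced to a per-point dot product with each instance kernel, and the shapes and threshold are assumptions rather than ISBNet's actual head.

```python
import numpy as np

def decode_masks(point_feats, instance_kernels, thresh=0.5):
    """Apply each instance kernel to the shared point-wise features to
    produce one binary mask per instance (shapes are illustrative)."""
    logits = point_feats @ instance_kernels.T       # (N, K) mask logits
    probs = 1.0 / (1.0 + np.exp(-logits))           # sigmoid
    return probs > thresh                           # (N, K) per-point membership

# Toy usage: 1000 points with 16-dim features, 5 candidate instances.
rng = np.random.default_rng(0)
masks = decode_masks(rng.normal(size=(1000, 16)), rng.normal(size=(5, 16)))
print(masks.shape)  # (1000, 5)
```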
arXiv Detail & Related papers (2023-03-01T06:06:28Z)
- MaskGroup: Hierarchical Point Grouping and Masking for 3D Instance Segmentation [36.28586460186891]
This paper studies the 3D instance segmentation problem, which has a variety of real-world applications such as robotics and augmented reality.
We propose a novel framework to group and refine the 3D instances.
Our approach achieves a 66.4% mAP with the 0.5 IoU threshold on the ScanNetV2 test set, which is 1.9% higher than the state-of-the-art method.
arXiv Detail & Related papers (2022-03-28T11:22:58Z)
- Stratified Transformer for 3D Point Cloud Segmentation [89.9698499437732]
Stratified Transformer is able to capture long-range contexts and demonstrates strong generalization ability and high performance.
To combat the challenges posed by irregular point arrangements, we propose first-layer point embedding to aggregate local information.
Experiments demonstrate the effectiveness and superiority of our method on S3DIS, ScanNetv2 and ShapeNetPart datasets.
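As a rough picture of what a first-layer point embedding does, the sketch below aggregates each point's k nearest neighbours by max-pooling their features together with relative offsets; this is a generic local-aggregation stand-in with assumed names and sizes, not the paper's exact embedding layer.

```python
import numpy as np
from scipy.spatial import cKDTree

def local_point_embedding(xyz, feats, k=16):
    """Generic local aggregation: gather k nearest neighbours, append their
    relative offsets, and max-pool over the neighbourhood (illustrative only)."""
    _, nn = cKDTree(xyz).query(xyz, k=k)                  # (N, k) neighbour indices
    rel = xyz[nn] - xyz[:, None, :]                       # (N, k, 3) relative offsets
    grouped = np.concatenate([feats[nn], rel], axis=-1)   # (N, k, C + 3)
    return grouped.max(axis=1)                            # (N, C + 3) pooled embedding
```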
arXiv Detail & Related papers (2022-03-28T05:35:16Z)
- GSIP: Green Semantic Segmentation of Large-Scale Indoor Point Clouds [64.86292006892093]
GSIP (Green Segmentation of Indoor Point clouds) is an efficient solution to semantic segmentation of large-scale indoor scene point clouds.
GSIP has two novel components: 1) a room-style data pre-processing method that selects a proper subset of points for further processing, and 2) a new feature extractor which is extended from PointHop.
Experiments show that GSIP outperforms PointNet in segmentation performance for the S3DIS dataset.
arXiv Detail & Related papers (2021-09-24T09:26:53Z)
- Instance Segmentation in 3D Scenes using Semantic Superpoint Tree Networks [64.27814530457042]
We propose an end-to-end solution of Semantic Superpoint Tree Network (SSTNet) for proposing object instances from scene points.
Key in SSTNet is an intermediate, semantic superpoint tree (SST), which is constructed based on the learned semantic features of superpoints.
SSTNet ranks top on the ScanNet (V2) leaderboard, with an mAP 2% higher than that of the second-best method.
arXiv Detail & Related papers (2021-08-17T07:25:14Z)
- Hierarchical Aggregation for 3D Instance Segmentation [41.20244892803604]
We propose a clustering-based framework named HAIS, which makes full use of the spatial relations of points and point sets.
It ranks 1st on the ScanNet v2 benchmark, achieving 69.9% AP50 and surpassing previous state-of-the-art (SOTA) methods by a large margin.
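The set-aggregation step of a hierarchical, clustering-based pipeline can be pictured as absorbing small fragments into nearby larger clusters; the sketch below does this by centroid distance, with made-up size and distance thresholds rather than HAIS's learned criteria.

```python
import numpy as np

def absorb_fragments(xyz, labels, min_size=50, max_dist=0.5):
    """Merge clusters smaller than `min_size` into the nearest large cluster
    (by centroid distance). Thresholds are illustrative, not HAIS's values."""
    ids, counts = np.unique(labels, return_counts=True)
    big = ids[counts >= min_size]
    centroids = {i: xyz[labels == i].mean(axis=0) for i in ids}
    for s in ids[counts < min_size]:
        if len(big) == 0:
            break
        dists = np.array([np.linalg.norm(centroids[s] - centroids[b]) for b in big])
        if dists.min() <= max_dist:
            labels[labels == s] = big[dists.argmin()]
    return labels
```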
arXiv Detail & Related papers (2021-08-05T03:34:34Z)
- FPCC: Fast Point Cloud Clustering for Instance Segmentation [4.007351600492542]
There has been little research on 3D point cloud instance segmentation of bin-picking scenes.
We propose a network (FPCC-Net) that infers the feature center of each instance and then clusters the remaining points.
Experiments show that FPCC-Net improves average precision (AP) by about 40% and can process about 60,000 points in about 0.8 s.
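The centre-then-cluster idea can be sketched as assigning every remaining point to its nearest inferred feature centre in embedding space; the distance metric and cut-off below are assumptions, not FPCC-Net's actual formulation.

```python
import numpy as np

def assign_to_centers(point_embeds, center_embeds, max_dist=1.0):
    """Assign each point to the nearest inferred feature centre; points farther
    than `max_dist` from every centre stay unassigned (-1). Illustrative only."""
    # (N, K) pairwise distances between point and centre embeddings.
    d = np.linalg.norm(point_embeds[:, None, :] - center_embeds[None, :, :], axis=-1)
    labels = d.argmin(axis=1)
    labels[d.min(axis=1) > max_dist] = -1
    return labels
```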
arXiv Detail & Related papers (2020-12-29T05:58:35Z)
- Few-shot 3D Point Cloud Semantic Segmentation [138.80825169240302]
We propose a novel attention-aware multi-prototype transductive few-shot point cloud semantic segmentation method.
Our proposed method shows significant and consistent improvements compared to baselines in different few-shot point cloud semantic segmentation settings.
arXiv Detail & Related papers (2020-06-22T08:05:25Z)
- PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation [111.7241018610573]
We present PointGroup, a new end-to-end bottom-up architecture for instance segmentation.
We design a two-branch network to extract point features and predict semantic labels and offsets, for shifting each point towards its respective instance centroid.
A clustering component then utilizes both the original and offset-shifted point coordinate sets, taking advantage of their complementary strength.
We conduct extensive experiments on two challenging datasets, ScanNet v2 and S3DIS, on which our method achieves the highest performance, 63.6% and 64.0%, compared to 54.9% and 54.4% achieved by the former best solutions.
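A minimal sketch of the dual-set idea: cluster both coordinate sets within each predicted semantic class and keep every resulting proposal. DBSCAN stands in for the paper's breadth-first grouping, and the eps/min_pts values are placeholders rather than PointGroup's settings.

```python
import numpy as np
from sklearn.cluster import DBSCAN

def dual_set_proposals(xyz, offsets, semantic, eps=0.03, min_pts=50):
    """Cluster the original and the offset-shifted coordinates separately,
    within each predicted semantic class, and return both proposal sets."""
    proposals = []
    for coords in (xyz, xyz + offsets):                  # dual coordinate sets
        for cls in np.unique(semantic):
            mask = semantic == cls
            labels = DBSCAN(eps=eps, min_samples=min_pts).fit_predict(coords[mask])
            for inst in np.unique(labels[labels >= 0]):
                proposals.append(np.where(mask)[0][labels == inst])
    return proposals  # list of point-index arrays, one per candidate instance
```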
arXiv Detail & Related papers (2020-04-03T16:26:37Z)
This list is automatically generated from the titles and abstracts of the papers on this site.