PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation
- URL: http://arxiv.org/abs/2004.01658v1
- Date: Fri, 3 Apr 2020 16:26:37 GMT
- Title: PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation
- Authors: Li Jiang, Hengshuang Zhao, Shaoshuai Shi, Shu Liu, Chi-Wing Fu, Jiaya Jia
- Abstract summary: We present PointGroup, a new end-to-end bottom-up architecture for instance segmentation.
We design a two-branch network to extract point features and predict semantic labels and offsets for shifting each point towards its respective instance centroid.
A clustering component then utilizes both the original and offset-shifted point coordinate sets, taking advantage of their complementary strengths.
We conduct extensive experiments on two challenging datasets, ScanNet v2 and S3DIS, on which our method achieves the highest performance, 63.6% and 64.0%, compared to 54.9% and 54.4% achieved by the former best solutions, in terms of mAP with an IoU threshold of 0.5.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Instance segmentation is an important task for scene understanding. Compared to its fully developed 2D counterpart, 3D instance segmentation for point clouds has much room for improvement. In this paper, we present PointGroup, a new end-to-end bottom-up architecture, specifically focused on better grouping the points by exploring the void space between objects. We design a two-branch network to extract point features and predict semantic labels and offsets for shifting each point towards its respective instance centroid. A clustering component then utilizes both the original and offset-shifted point coordinate sets, taking advantage of their complementary strengths. Further, we formulate ScoreNet to evaluate the candidate instances, followed by Non-Maximum Suppression (NMS) to remove duplicates. We conduct extensive experiments on two challenging datasets, ScanNet v2 and S3DIS, on which our method achieves the highest performance, 63.6% and 64.0% respectively, compared to 54.9% and 54.4% achieved by the former best solutions, in terms of mAP with an IoU threshold of 0.5.
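The pipeline described above can be illustrated with a short sketch. The code below is a minimal, simplified NumPy/SciPy rendering of the dual-set grouping idea, not the authors' implementation: the same label-constrained, radius-based grouping is run on both the original and the offset-shifted coordinates, and the two candidate sets are pooled for later scoring and NMS. The radius `r`, the minimum cluster size, and all function and variable names are assumptions of this sketch.

```python
# Dual-set point grouping sketch (assumed names and thresholds, not the paper's code).
import numpy as np
from scipy.spatial import cKDTree


def group_points(coords, semantic_labels, r=0.03, min_points=50):
    """Greedy BFS grouping: connect points within radius r that share a semantic label."""
    n = coords.shape[0]
    tree = cKDTree(coords)
    visited = np.zeros(n, dtype=bool)
    clusters = []
    for seed in range(n):
        if visited[seed]:
            continue
        queue, members = [seed], []
        visited[seed] = True
        while queue:
            i = queue.pop()
            members.append(i)
            for j in tree.query_ball_point(coords[i], r):
                if not visited[j] and semantic_labels[j] == semantic_labels[seed]:
                    visited[j] = True
                    queue.append(j)
        if len(members) >= min_points:
            clusters.append(np.array(members))
    return clusters


def dual_set_grouping(coords, offsets, semantic_labels):
    """Cluster on both the original and the offset-shifted coordinate sets."""
    candidates = group_points(coords, semantic_labels)
    candidates += group_points(coords + offsets, semantic_labels)
    # In the paper, these candidates are then evaluated by ScoreNet and
    # deduplicated with NMS; that stage is omitted here.
    return candidates
```

Pooling candidate clusters from both coordinate sets is how this sketch reflects the complementary strengths mentioned in the abstract.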
Related papers
- Clustering based Point Cloud Representation Learning for 3D Analysis [80.88995099442374]
We propose a clustering based supervised learning scheme for point cloud analysis.
Unlike the current de facto scene-wise training paradigm, our algorithm conducts within-class clustering in the point embedding space.
Our algorithm shows notable improvements on widely used point cloud segmentation datasets.
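As a rough illustration of the within-class clustering mentioned above (not the paper's implementation), the sketch below clusters the embeddings of each semantic class separately and returns per-class sub-cluster centroids; the use of scikit-learn's KMeans and the number of clusters per class are assumptions.

```python
# Within-class clustering sketch on a point embedding space (assumed setup).
import numpy as np
from sklearn.cluster import KMeans


def within_class_prototypes(embeddings, class_labels, clusters_per_class=4):
    """Cluster the embeddings of each semantic class separately and return
    the resulting sub-class centroids per class."""
    prototypes = {}
    for c in np.unique(class_labels):
        feats = embeddings[class_labels == c]
        k = min(clusters_per_class, len(feats))           # guard small classes
        km = KMeans(n_clusters=k, n_init=10).fit(feats)
        prototypes[int(c)] = km.cluster_centers_
    return prototypes
```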
arXiv Detail & Related papers (2023-07-27T03:42:12Z)
- Divide and Conquer: 3D Point Cloud Instance Segmentation With Point-Wise Binarization [16.662238192665615]
We propose a novel divide-and-conquer strategy named PBNet for segmenting point clouds.
Our binary clustering divides offset instance points into two categories: high-density and low-density points.
PBNet ranks first on the ScanNetV2 official benchmark challenge, achieving the highest mAP.
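A minimal sketch of the high-/low-density split this summary describes (not the authors' code) is given below: local density is the number of neighbours within a radius on the offset-shifted coordinates, and points are binarized at a threshold. The radius and threshold are assumptions, and the later merging of low-density points into instances is only indicated by a comment.

```python
# Binary (high-/low-density) clustering sketch (assumed radius and threshold).
import numpy as np
from scipy.spatial import cKDTree


def binarize_by_density(shifted_coords, radius=0.05, density_threshold=20):
    """Split offset-shifted points into high- and low-density index sets."""
    tree = cKDTree(shifted_coords)
    # Local density = number of neighbours within `radius` (the point itself included).
    density = np.array([len(tree.query_ball_point(p, radius)) for p in shifted_coords])
    high = np.where(density >= density_threshold)[0]
    low = np.where(density < density_threshold)[0]
    # High-density points would seed instance clusters; low-density points
    # would be assigned to nearby clusters afterwards.
    return high, low
```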
arXiv Detail & Related papers (2022-07-22T17:19:00Z)
- PointInst3D: Segmenting 3D Instances by Points [136.7261709896713]
We propose a fully-convolutional 3D point cloud instance segmentation method that works in a per-point prediction fashion.
We find the key to its success is assigning a suitable target to each sampled point.
Our approach achieves promising results on both ScanNet and S3DIS benchmarks.
arXiv Detail & Related papers (2022-04-25T02:41:46Z)
- MaskGroup: Hierarchical Point Grouping and Masking for 3D Instance Segmentation [36.28586460186891]
This paper studies the 3D instance segmentation problem, which has a variety of real-world applications such as robotics and augmented reality.
We propose a novel framework to group and refine the 3D instances.
Our approach achieves a 66.4% mAP with the 0.5 IoU threshold on the ScanNetV2 test set, which is 1.9% higher than the state-of-the-art method.
arXiv Detail & Related papers (2022-03-28T11:22:58Z)
- SASA: Semantics-Augmented Set Abstraction for Point-based 3D Object Detection [78.90102636266276]
We propose a novel set abstraction method named Semantics-Augmented Set Abstraction (SASA).
Based on the estimated point-wise foreground scores, we then propose a semantics-guided point sampling algorithm to help retain more important foreground points during down-sampling.
In practice, SASA proves effective in identifying valuable points related to foreground objects and improving feature learning for point-based 3D detection.
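The sketch below illustrates semantics-guided down-sampling in the spirit of this summary, not the paper's exact algorithm: a farthest-point-sampling loop whose selection criterion is scaled by the predicted foreground score, so foreground points are preferentially kept. The multiplicative weighting and all names are assumptions.

```python
# Semantics-guided farthest point sampling sketch (assumed score weighting).
import numpy as np


def semantics_guided_fps(coords, foreground_scores, num_samples):
    """Farthest point sampling where the usual nearest-selected distance is
    scaled by the point's predicted foreground score."""
    n = coords.shape[0]
    selected = [int(np.argmax(foreground_scores))]   # start from the most confident foreground point
    min_dist = np.full(n, np.inf)
    for _ in range(num_samples - 1):
        last = coords[selected[-1]]
        min_dist = np.minimum(min_dist, np.linalg.norm(coords - last, axis=1))
        weighted = min_dist * foreground_scores       # bias selection towards foreground
        weighted[selected] = -np.inf                  # never re-select a point
        selected.append(int(np.argmax(weighted)))
    return np.array(selected)
```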
arXiv Detail & Related papers (2022-01-06T08:54:47Z)
- Two Heads are Better than One: Geometric-Latent Attention for Point Cloud Classification and Segmentation [10.2254921311882]
We present an innovative two-headed attention layer that combines geometric and latent features to segment a 3D scene into meaningful subsets.
Each head combines local and global information of a neighborhood of points, using either the geometric or the latent features, and uses this information to learn better local relationships.
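A minimal NumPy sketch of the two-headed idea described above (not the paper's layer): one head weights a point's neighbours by geometric proximity, the other by latent-feature similarity, and the two aggregated outputs are concatenated. The softmax weighting and the concatenation are assumptions of this sketch.

```python
# Two-headed geometric/latent attention sketch for one neighbourhood (assumed design).
import numpy as np


def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()


def two_headed_attention(center_xyz, neighbor_xyz, center_feat, neighbor_feats):
    """Aggregate a neighbourhood with a geometric head and a latent head."""
    # Geometric head: closer neighbours receive higher weights.
    geo_weights = softmax(-np.linalg.norm(neighbor_xyz - center_xyz, axis=1))
    geo_out = geo_weights @ neighbor_feats
    # Latent head: neighbours with similar features receive higher weights.
    lat_weights = softmax(neighbor_feats @ center_feat)
    lat_out = lat_weights @ neighbor_feats
    return np.concatenate([geo_out, lat_out])  # combined descriptor for the centre point
```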
arXiv Detail & Related papers (2021-10-30T11:20:56Z)
- GSIP: Green Semantic Segmentation of Large-Scale Indoor Point Clouds [64.86292006892093]
GSIP (Green Segmentation of Indoor Point clouds) is an efficient solution to semantic segmentation of large-scale indoor scene point clouds.
GSIP has two novel components: 1) a room-style data pre-processing method that selects a proper subset of points for further processing, and 2) a new feature extractor which is extended from PointHop.
Experiments show that GSIP outperforms PointNet in segmentation performance for the S3DIS dataset.
arXiv Detail & Related papers (2021-09-24T09:26:53Z)
- Instance Segmentation in 3D Scenes using Semantic Superpoint Tree Networks [64.27814530457042]
We propose an end-to-end solution of Semantic Superpoint Tree Network (SSTNet) for proposing object instances from scene points.
Key to SSTNet is an intermediate semantic superpoint tree (SST), which is constructed based on the learned semantic features of superpoints.
SSTNet ranks top on the ScanNet (V2) leaderboard, with mAP 2% higher than the second-best method.
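As a rough illustration of constructing a tree over superpoints from their semantic features (not SSTNet's implementation), the sketch below agglomeratively merges superpoints by feature similarity and cuts the tree into instance proposals; the use of SciPy's average-linkage clustering and the cut distance are assumptions.

```python
# Superpoint-tree sketch: hierarchical merging of superpoint semantic features (assumed setup).
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster


def superpoint_tree(superpoint_features, cut_distance=0.5):
    """Merge superpoints bottom-up by semantic-feature similarity; return the
    tree (linkage matrix) and a flat cut of it as instance proposals."""
    tree = linkage(superpoint_features, method="average", metric="euclidean")
    proposals = fcluster(tree, t=cut_distance, criterion="distance")
    return tree, proposals
```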
arXiv Detail & Related papers (2021-08-17T07:25:14Z)
- Hierarchical Aggregation for 3D Instance Segmentation [41.20244892803604]
We propose a clustering-based framework named HAIS, which makes full use of the spatial relations of points and point sets.
It ranks 1st on the ScanNet v2 benchmark, achieving the highest AP50 of 69.9% and surpassing previous state-of-the-art (SOTA) methods by a large margin.
arXiv Detail & Related papers (2021-08-05T03:34:34Z)
- FPCC: Fast Point Cloud Clustering for Instance Segmentation [4.007351600492542]
There has been little research on 3D point cloud instance segmentation of bin-picking scenes.
We propose a network (FPCC-Net) that infers feature centers of each instance and then clusters the remaining points.
FPCC-Net is shown to improve average precision (AP) by about 40% and to process about 60,000 points in about 0.8 s.
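A minimal sketch of the two-step grouping this summary describes (not the FPCC-Net code): select well-separated, high-scoring points as feature centers, then assign every point to its nearest center in embedding space. The score threshold and the minimum center separation are assumptions.

```python
# Feature-center clustering sketch (assumed threshold and separation values).
import numpy as np


def cluster_by_feature_centers(embeddings, center_scores, score_thresh=0.5, min_sep=1.0):
    """Pick well-separated high-score points as instance centers, then label
    each point with the index of its nearest center in embedding space."""
    centers = []
    for i in np.argsort(-center_scores):                 # highest-scoring points first
        if center_scores[i] < score_thresh:
            break
        if all(np.linalg.norm(embeddings[i] - embeddings[c]) > min_sep for c in centers):
            centers.append(i)
    if not centers:
        return np.zeros(len(embeddings), dtype=int)
    dists = np.linalg.norm(embeddings[:, None, :] - embeddings[centers][None, :, :], axis=2)
    return np.argmin(dists, axis=1)                      # nearest-center label per point
```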
arXiv Detail & Related papers (2020-12-29T05:58:35Z)