GSIP: Green Semantic Segmentation of Large-Scale Indoor Point Clouds
- URL: http://arxiv.org/abs/2109.11835v1
- Date: Fri, 24 Sep 2021 09:26:53 GMT
- Title: GSIP: Green Semantic Segmentation of Large-Scale Indoor Point Clouds
- Authors: Min Zhang, Pranav Kadam, Shan Liu, C.-C. Jay Kuo
- Abstract summary: GSIP (Green Segmentation of Indoor Point clouds) is an efficient solution to semantic segmentation of large-scale indoor scene point clouds.
GSIP has two novel components: 1) a room-style data pre-processing method that selects a proper subset of points for further processing, and 2) a new feature extractor which is extended from PointHop.
Experiments show that GSIP outperforms PointNet in segmentation performance for the S3DIS dataset.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An efficient solution to semantic segmentation of large-scale indoor scene
point clouds is proposed in this work. It is named GSIP (Green Segmentation of
Indoor Point clouds) and its performance is evaluated on a representative
large-scale benchmark -- the Stanford 3D Indoor Segmentation (S3DIS) dataset.
GSIP has two novel components: 1) a room-style data pre-processing method that
selects a proper subset of points for further processing, and 2) a new feature
extractor which is extended from PointHop. For the former, sampled points of
each room form an input unit. For the latter, the weaknesses of PointHop's
feature extraction when extending it to large-scale point clouds are identified
and fixed with a simpler processing pipeline. As compared with PointNet, which
is a pioneering deep-learning-based solution, GSIP is green since it has
significantly lower computational complexity and a much smaller model size.
Furthermore, experiments show that GSIP outperforms PointNet in segmentation
performance for the S3DIS dataset.
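The room-style pre-processing idea (sampled points of each room forming one input unit) can be sketched roughly as follows. This is not the authors' code: the uniform random sampling strategy, the 4096-point budget, and the function name are assumptions made for illustration.

```python
import numpy as np

def sample_room(points, n_samples=4096, seed=0):
    """Down-sample one room's point cloud to a fixed-size input unit.

    points: (N, C) array of per-point features (e.g. xyz + rgb).
    Uses uniform random sampling; the paper's exact selection rule
    may differ.
    """
    rng = np.random.default_rng(seed)
    n = points.shape[0]
    if n >= n_samples:
        idx = rng.choice(n, size=n_samples, replace=False)
    else:  # pad small rooms by sampling with replacement
        idx = rng.choice(n, size=n_samples, replace=True)
    return points[idx]

# Example: a synthetic "room" of 10k points with xyz + rgb features
room = np.random.rand(10_000, 6)
unit = sample_room(room)
print(unit.shape)  # (4096, 6)
```

Sampling per room (rather than per fixed-size block) keeps each input unit semantically coherent, which is the stated motivation for this pre-processing step.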
Related papers
- PointResNet: Residual Network for 3D Point Cloud Segmentation and Classification
Point cloud segmentation and classification are some of the primary tasks in 3D computer vision.
In this paper, we propose PointResNet, a residual block-based approach.
Our model directly processes the 3D points, using a deep neural network for the segmentation and classification tasks.
arXiv Detail & Related papers (2022-11-20T17:39:48Z)
- CloudAttention: Efficient Multi-Scale Attention Scheme For 3D Point Cloud Learning
In this work, we adopt transformers and incorporate them into a hierarchical framework for shape classification and for part and scene segmentation.
We also compute efficient and dynamic global cross attentions by leveraging sampling and grouping at each iteration.
The proposed hierarchical model achieves state-of-the-art shape classification in mean accuracy and yields results on par with the previous segmentation methods.
arXiv Detail & Related papers (2022-07-31T21:39:15Z)
- Stratified Transformer for 3D Point Cloud Segmentation
Stratified Transformer is able to capture long-range contexts and demonstrates strong generalization ability and high performance.
To combat the challenges posed by irregular point arrangements, we propose first-layer point embedding to aggregate local information.
Experiments demonstrate the effectiveness and superiority of our method on S3DIS, ScanNetv2 and ShapeNetPart datasets.
arXiv Detail & Related papers (2022-03-28T05:35:16Z)
- MVP-Net: Multiple View Pointwise Semantic Segmentation of Large-Scale Point Clouds
In this paper, we propose an end-to-end neural architecture, Multiple View Pointwise Net (MVP-Net), to efficiently infer large-scale outdoor point clouds without KNN or complex pre-/post-processing.
Numerical experiments show that the proposed MVP-Net is 11 times faster than the most efficient pointwise semantic segmentation method, RandLA-Net.
arXiv Detail & Related papers (2022-01-30T09:43:00Z)
- Instance Segmentation in 3D Scenes using Semantic Superpoint Tree Networks
We propose an end-to-end solution of Semantic Superpoint Tree Network (SSTNet) for proposing object instances from scene points.
Key in SSTNet is an intermediate, semantic superpoint tree (SST), which is constructed based on the learned semantic features of superpoints.
SSTNet ranks top on the ScanNet (V2) leaderboard, with mAP 2% higher than that of the second-best method.
arXiv Detail & Related papers (2021-08-17T07:25:14Z)
- SCSS-Net: Superpoint Constrained Semi-supervised Segmentation Network for 3D Indoor Scenes
We propose a superpoint constrained semi-supervised segmentation network for 3D point clouds, named SCSS-Net.
Specifically, we use the pseudo labels predicted from unlabeled point clouds for self-training, and the superpoints produced by geometry-based and color-based Region Growing algorithms are combined to modify and delete pseudo labels with low confidence.
arXiv Detail & Related papers (2021-07-08T04:43:21Z)
- Learning Semantic Segmentation of Large-Scale Point Clouds with Random Sampling
We introduce RandLA-Net, an efficient and lightweight neural architecture to infer per-point semantics for large-scale point clouds.
The key to our approach is to use random point sampling instead of more complex point selection approaches.
Our RandLA-Net can process 1 million points in a single pass up to 200x faster than existing approaches.
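The speed comes from cheap point selection. A minimal NumPy illustration (generic code, not the authors'; the subset size is arbitrary) shows why random sampling scales where farthest-point sampling does not:

```python
import numpy as np

def random_sample(points, k, rng=None):
    """Select k points uniformly at random: O(k) index draws and
    no pairwise distances. Farthest-point sampling, by contrast,
    scans all N points at each of its k selection steps."""
    rng = rng or np.random.default_rng(0)
    idx = rng.choice(points.shape[0], size=k, replace=False)
    return points[idx]

cloud = np.random.rand(1_000_000, 3)  # one million xyz points
subset = random_sample(cloud, 40_000)
print(subset.shape)  # (40000, 3)
```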
arXiv Detail & Related papers (2021-07-06T05:08:34Z)
- GSECnet: Ground Segmentation of Point Clouds for Edge Computing
GSECnet is an efficient ground segmentation framework designed to be deployable on a low-power edge computing unit.
Our framework achieves runtime inference at 135.2 Hz on a desktop platform.
arXiv Detail & Related papers (2021-04-05T04:29:28Z)
- Investigate Indistinguishable Points in Semantic Segmentation of 3D Point Cloud
Indistinguishable points consist of those located at complex boundaries, points with similar local textures but different categories, and points in isolated small hard areas.
We propose a novel Indistinguishable Area Focalization Network (IAF-Net), which selects indistinguishable points adaptively by utilizing the hierarchical semantic features.
Our IAF-Net achieves results comparable with state-of-the-art performance on several popular 3D point cloud datasets.
arXiv Detail & Related papers (2021-03-18T15:54:59Z)
- PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation
We present PointGroup, a new end-to-end bottom-up architecture for instance segmentation.
We design a two-branch network to extract point features and predict semantic labels and offsets, for shifting each point towards its respective instance centroid.
A clustering component follows, utilizing both the original and offset-shifted point coordinate sets and taking advantage of their complementary strengths.
We conduct extensive experiments on two challenging datasets, ScanNet v2 and S3DIS, on which our method achieves the highest performance, 63.6% and 64.0%, compared to 54.9% and 54.4% achieved by the former best methods.
arXiv Detail & Related papers (2020-04-03T16:26:37Z)
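The shift-then-cluster idea above can be illustrated with a toy example. This is not the authors' code: the offsets are hand-set rather than network-predicted, and a simple greedy radius grouping stands in for the paper's clustering component.

```python
import numpy as np

def cluster_by_radius(points, radius=0.5):
    """Greedy grouping: each still-unlabeled point seeds a cluster
    and absorbs all unlabeled points within `radius` of it."""
    labels = -np.ones(len(points), dtype=int)
    next_label = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        dist = np.linalg.norm(points - points[i], axis=1)
        labels[(dist < radius) & (labels == -1)] = next_label
        next_label += 1
    return labels

# Two toy instances; "predicted" offsets move each point toward its
# instance centroid, so members of the same instance pile up together.
pts = np.array([[0.0, 0.0], [0.2, 0.0], [5.0, 5.0], [5.2, 5.0]])
offsets = np.array([[0.1, 0.0], [-0.1, 0.0], [0.1, 0.0], [-0.1, 0.0]])
shifted = pts + offsets
print(cluster_by_radius(shifted))  # [0 0 1 1]
```

Clustering the shifted coordinates separates nearby instances that would merge if grouped on the original coordinates alone, which is why the method keeps both coordinate sets.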
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.