Investigate Indistinguishable Points in Semantic Segmentation of 3D
Point Cloud
- URL: http://arxiv.org/abs/2103.10339v1
- Date: Thu, 18 Mar 2021 15:54:59 GMT
- Title: Investigate Indistinguishable Points in Semantic Segmentation of 3D
Point Cloud
- Authors: Mingye Xu, Zhipeng Zhou, Junhao Zhang, Yu Qiao
- Abstract summary: Indistinguishable points consist of those located on complex boundaries, points with similar local textures but different categories, and points in isolated small hard areas.
We propose a novel Indistinguishable Area Focalization Network (IAF-Net), which selects indistinguishable points adaptively by utilizing the hierarchical semantic features.
Our IAF-Net achieves results comparable to the state of the art on several popular 3D point cloud datasets.
- Score: 34.414363402029984
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper investigates the indistinguishable points (those whose labels
are difficult to predict) in semantic segmentation of large-scale 3D point clouds.
The indistinguishable points consist of those located on complex boundaries, points
with similar local textures but different categories, and points in isolated
small hard areas, all of which largely harm the performance of 3D semantic
segmentation. To address this challenge, we propose a novel Indistinguishable
Area Focalization Network (IAF-Net), which selects indistinguishable points
adaptively by utilizing hierarchical semantic features and enhances
fine-grained features for points, especially the indistinguishable ones. We
also introduce a multi-stage loss to improve the feature representation in a
progressive way. Moreover, in order to analyze the segmentation performance on
indistinguishable areas, we propose a new evaluation metric called the
Indistinguishable Points Based Metric (IPBM). Our IAF-Net achieves results
comparable to the state of the art on several popular 3D point cloud datasets,
e.g. S3DIS and ScanNet, and clearly outperforms other methods on IPBM.
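The abstract names two ideas that can be sketched concretely: focusing on hard-to-predict points, and scoring a model separately on that hard subset. The sketch below selects "indistinguishable" points by prediction entropy and reports plain accuracy on them; note that the entropy criterion, the threshold, and the subset-accuracy score are illustrative assumptions on our part, not the paper's actual adaptive selection rule or the IPBM definition.

```python
import numpy as np

def select_indistinguishable(probs, top_frac=0.1):
    """Pick the top_frac points with the highest prediction entropy.

    probs: (N, C) per-point class probabilities. Entropy as a hardness
    proxy is an assumption; IAF-Net instead selects points adaptively
    from hierarchical semantic features.
    """
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    k = max(1, int(top_frac * len(probs)))
    return np.argsort(entropy)[-k:]  # indices of the hardest points

def subset_accuracy(probs, labels, idx):
    """IPBM-style score: accuracy restricted to the hard subset."""
    pred = probs[idx].argmax(axis=1)
    return float((pred == labels[idx]).mean())

# Toy example: 6 points, 3 classes.
probs = np.array([
    [0.98, 0.01, 0.01],   # confident
    [0.34, 0.33, 0.33],   # near-uniform -> "indistinguishable"
    [0.90, 0.05, 0.05],
    [0.40, 0.35, 0.25],   # also fairly uncertain
    [0.05, 0.90, 0.05],
    [0.33, 0.34, 0.33],   # near-uniform
])
labels = np.array([0, 1, 0, 0, 1, 2])
hard = select_indistinguishable(probs, top_frac=0.5)
acc = subset_accuracy(probs, labels, hard)
```

Evaluating only on such a subset makes the gap between methods on hard regions visible, which overall mIoU can hide.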
Related papers
- Multi-modality Affinity Inference for Weakly Supervised 3D Semantic
Segmentation [47.81638388980828]
We propose a simple yet effective scene-level weakly supervised point cloud segmentation method with a newly introduced multi-modality point affinity inference module.
Our method outperforms the state of the art by 4% to 6% mIoU on the ScanNet and S3DIS benchmarks.
arXiv Detail & Related papers (2023-12-27T14:01:35Z)
- FreePoint: Unsupervised Point Cloud Instance Segmentation [72.64540130803687]
We propose FreePoint, for underexplored unsupervised class-agnostic instance segmentation on point clouds.
We represent point features by combining coordinates, colors, and self-supervised deep features.
Based on the point features, we segment point clouds into coarse instance masks as pseudo labels, which are used to train a point cloud instance segmentation model.
arXiv Detail & Related papers (2023-05-11T16:56:26Z)
- PointResNet: Residual Network for 3D Point Cloud Segmentation and Classification [18.466814193413487]
Point cloud segmentation and classification are some of the primary tasks in 3D computer vision.
In this paper, we propose PointResNet, a residual block-based approach.
Our model directly processes the 3D points, using a deep neural network for the segmentation and classification tasks.
arXiv Detail & Related papers (2022-11-20T17:39:48Z)
- SASA: Semantics-Augmented Set Abstraction for Point-based 3D Object Detection [78.90102636266276]
We propose a novel set abstraction method named Semantics-Augmented Set Abstraction (SASA).
Based on the estimated point-wise foreground scores, we then propose a semantics-guided point sampling algorithm to help retain more important foreground points during down-sampling.
In practice, SASA proves effective in identifying valuable points related to foreground objects and in improving feature learning for point-based 3D detection.
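The semantics-guided sampling idea can be caricatured as biasing the down-sampling distribution toward points with high foreground scores. The sketch below simply draws points without replacement with probability proportional to their score; SASA's actual algorithm instead fuses the scores into farthest point sampling, so treat this as an illustrative simplification.

```python
import numpy as np

def semantics_guided_sample(points, fg_scores, m, rng=None):
    """Down-sample m of N points, biased toward high foreground scores.

    Simplified stand-in for SASA's semantics-guided sampling: points are
    drawn without replacement with probability proportional to score.
    """
    rng = np.random.default_rng(rng)
    p = fg_scores / fg_scores.sum()
    idx = rng.choice(len(points), size=m, replace=False, p=p)
    return points[idx], idx

# Toy cloud: the first 10 points are "foreground" with high scores.
points = np.random.default_rng(0).normal(size=(100, 3))
fg_scores = np.ones(100)
fg_scores[:10] = 50.0
sampled, idx = semantics_guided_sample(points, fg_scores, m=20, rng=0)
```

The effect is that foreground points, which are a small fraction of the cloud, mostly survive down-sampling instead of being discarded uniformly.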
arXiv Detail & Related papers (2022-01-06T08:54:47Z)
- Background-Aware 3D Point Cloud Segmentation with Dynamic Point Feature Aggregation [12.093182949686781]
We propose a novel 3D point cloud learning network, referred to as the Dynamic Point Feature Aggregation Network (DPFA-Net).
DPFA-Net has two variants for semantic segmentation and classification of 3D point clouds.
It achieves the state-of-the-art overall accuracy score for semantic segmentation on the S3DIS dataset.
arXiv Detail & Related papers (2021-11-14T05:46:05Z)
- GSIP: Green Semantic Segmentation of Large-Scale Indoor Point Clouds [64.86292006892093]
GSIP (Green Segmentation of Indoor Point clouds) is an efficient solution for semantic segmentation of large-scale indoor scene point clouds.
GSIP has two novel components: 1) a room-style data pre-processing method that selects a proper subset of points for further processing, and 2) a new feature extractor which is extended from PointHop.
Experiments show that GSIP outperforms PointNet in segmentation performance for the S3DIS dataset.
arXiv Detail & Related papers (2021-09-24T09:26:53Z)
- SCSS-Net: Superpoint Constrained Semi-supervised Segmentation Network for 3D Indoor Scenes [6.3364439467281315]
We propose a superpoint constrained semi-supervised segmentation network for 3D point clouds, named SCSS-Net.
Specifically, we use pseudo labels predicted from unlabeled point clouds for self-training, and combine superpoints produced by geometry-based and color-based region-growing algorithms to modify and delete pseudo labels with low confidence.
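One simple reading of superpoint-based correction is: within each superpoint, keep a pseudo label only if it agrees with the superpoint's majority label and its confidence is high enough. The majority-vote rule and the 0.8 threshold below are illustrative assumptions, not SCSS-Net's exact procedure.

```python
import numpy as np

def refine_pseudo_labels(pseudo, conf, superpoint_id, conf_thresh=0.8):
    """Return a boolean mask of pseudo labels to keep.

    Illustrative stand-in for superpoint-constrained correction: a pseudo
    label survives only if it is confident AND matches the majority label
    of its superpoint (a region-growing segment).
    """
    keep = np.zeros(len(pseudo), dtype=bool)
    for sp in np.unique(superpoint_id):
        members = np.where(superpoint_id == sp)[0]
        labels, counts = np.unique(pseudo[members], return_counts=True)
        majority = labels[np.argmax(counts)]
        for i in members:
            keep[i] = (pseudo[i] == majority) and (conf[i] >= conf_thresh)
    return keep

# Toy example: 6 points in two superpoints.
pseudo = np.array([2, 2, 1, 2, 0, 0])
conf   = np.array([0.95, 0.60, 0.99, 0.90, 0.85, 0.92])
sp_id  = np.array([0, 0, 0, 0, 1, 1])
keep = refine_pseudo_labels(pseudo, conf, sp_id)
```

Here the low-confidence label at index 1 and the minority label at index 2 are both discarded, even though index 2 was predicted with high confidence.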
arXiv Detail & Related papers (2021-07-08T04:43:21Z)
- Few-shot 3D Point Cloud Semantic Segmentation [138.80825169240302]
We propose a novel attention-aware multi-prototype transductive few-shot point cloud semantic segmentation method.
Our proposed method shows significant and consistent improvements compared to baselines in different few-shot point cloud semantic segmentation settings.
arXiv Detail & Related papers (2020-06-22T08:05:25Z)
- PointGroup: Dual-Set Point Grouping for 3D Instance Segmentation [111.7241018610573]
We present PointGroup, a new end-to-end bottom-up architecture for instance segmentation.
We design a two-branch network to extract point features and to predict semantic labels and offsets that shift each point towards its respective instance centroid.
A clustering component then utilizes both the original and the offset-shifted point coordinate sets, taking advantage of their complementary strengths.
We conduct extensive experiments on two challenging datasets, ScanNet v2 and S3DIS, on which our method achieves the highest performance, 63.6% and 64.0%, compared to 54.9% and 54.4% achieved by the former best methods.
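The offset-then-cluster step can be sketched as: shift each point by its predicted offset so that points of one instance collapse toward a common centroid, then group nearby shifted positions. The greedy radius grouping below is a toy stand-in for PointGroup's actual clustering component, and the radius value is an assumption.

```python
import numpy as np

def cluster_shifted_points(coords, offsets, radius=0.5):
    """Greedy radius clustering of offset-shifted coordinates.

    Toy stand-in for PointGroup's clustering: points whose shifted
    positions (coords + offsets, i.e. predicted instance centroids)
    fall within `radius` of a seed are grouped into one instance.
    """
    shifted = coords + offsets
    n = len(shifted)
    instance = -np.ones(n, dtype=int)  # -1 means unassigned
    next_id = 0
    for i in range(n):
        if instance[i] >= 0:
            continue
        # Group all unassigned points near this seed's shifted position.
        d = np.linalg.norm(shifted - shifted[i], axis=1)
        members = (d <= radius) & (instance < 0)
        instance[members] = next_id
        next_id += 1
    return instance

# Two toy objects: the predicted offsets pull each pair to one centroid.
coords = np.array([[0.0, 0, 0], [0.2, 0, 0], [5.0, 5, 5], [5.1, 5, 5]])
offsets = np.array([[0.1, 0, 0], [-0.1, 0, 0], [0.05, 0, 0], [-0.05, 0, 0]])
inst = cluster_shifted_points(coords, offsets, radius=0.5)
```

Clustering on shifted coordinates separates adjacent instances of the same semantic class, which clustering on raw coordinates alone would merge.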
arXiv Detail & Related papers (2020-04-03T16:26:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.