PSGformer: Enhancing 3D Point Cloud Instance Segmentation via Precise
Semantic Guidance
- URL: http://arxiv.org/abs/2307.07708v1
- Date: Sat, 15 Jul 2023 04:45:37 GMT
- Title: PSGformer: Enhancing 3D Point Cloud Instance Segmentation via Precise
Semantic Guidance
- Authors: Lei Pan, Wuyang Luan, Yuan Zheng, Qiang Fu, Junhui Li
- Abstract summary: PSGformer is a novel 3D instance segmentation network.
It incorporates two key advancements to enhance the performance of 3D instance segmentation.
It outperforms state-of-the-art methods by 2.2% mAP on the ScanNetv2 hidden test set.
- Score: 11.097083846498581
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing 3D instance segmentation methods are derived from 3D semantic
segmentation models. However, these indirect approaches suffer from certain
limitations. They fail to fully leverage global and local semantic information
for accurate prediction, which hampers the overall performance of the 3D
instance segmentation framework. To address these issues, this paper presents
PSGformer, a novel 3D instance segmentation network. PSGformer incorporates two
key advancements to enhance the performance of 3D instance segmentation.
Firstly, we propose a Multi-Level Semantic Aggregation Module, which
effectively captures scene features by employing foreground point filtering and
multi-radius aggregation. This module enables the acquisition of more detailed
semantic information from global and local perspectives. Secondly, PSGformer
introduces a Parallel Feature Fusion Transformer Module that independently
processes super-point features and aggregated features using transformers. The
model achieves a more comprehensive feature representation by fusing the two
streams, connecting global and local features. We conducted extensive
experiments on the ScanNetv2 dataset. Notably, PSGformer outperforms
state-of-the-art methods by 2.2% mAP on the ScanNetv2 hidden test set. Our
code and
models will be publicly released.
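The official code has not yet been released, so the following is only a rough, hypothetical PyTorch sketch of the two modules the abstract describes; the function names, feature shapes, radii, and the concatenate-and-project fusion are all assumptions, not the authors' implementation.

```python
# Hypothetical sketch of PSGformer's two modules as described in the abstract;
# the official code is unreleased, so names, shapes, and details are guesses.
import torch
import torch.nn as nn


def multi_radius_aggregation(xyz, feats, fg_scores,
                             radii=(0.1, 0.2, 0.4), threshold=0.5):
    """Foreground point filtering followed by multi-radius aggregation.

    xyz:       (N, 3) point coordinates
    feats:     (N, C) per-point features
    fg_scores: (N,)   foreground probabilities used to drop background points
    Returns (M, C * len(radii)) features for the M retained foreground points.
    """
    keep = fg_scores > threshold                   # foreground point filtering
    fg_xyz, fg_feats = xyz[keep], feats[keep]
    dist = torch.cdist(fg_xyz, fg_xyz)             # (M, M) pairwise distances
    scales = []
    for r in radii:                                # small r = local context,
        mask = (dist < r).float()                  # large r = global context
        pooled = mask @ fg_feats / mask.sum(-1, keepdim=True).clamp(min=1)
        scales.append(pooled)
    return torch.cat(scales, dim=-1)


class ParallelFeatureFusion(nn.Module):
    """Two transformer encoders process the super-point stream and the
    aggregated stream independently and in parallel; one plausible fusion
    is concatenation followed by a linear projection."""

    def __init__(self, d_model=256, nhead=8, num_layers=2):
        super().__init__()

        def branch():
            layer = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
            return nn.TransformerEncoder(layer, num_layers)

        self.superpoint_branch = branch()
        self.aggregated_branch = branch()
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, sp_feats, agg_feats):        # both (B, S, d_model)
        sp = self.superpoint_branch(sp_feats)      # super-point features
        ag = self.aggregated_branch(agg_feats)     # multi-radius features
        return self.fuse(torch.cat([sp, ag], dim=-1))
```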
Related papers
- View-Consistent Hierarchical 3D Segmentation Using Ultrametric Feature Fields [52.08335264414515]
We learn a novel feature field within a Neural Radiance Field (NeRF) representing a 3D scene.
Our method takes view-inconsistent multi-granularity 2D segmentations as input and produces a hierarchy of 3D-consistent segmentations as output.
We evaluate our method and several baselines on synthetic datasets with multi-view images and multi-granular segmentation, showcasing improved accuracy and viewpoint-consistency.
arXiv Detail & Related papers (2024-05-30T04:14:58Z)
- SPGroup3D: Superpoint Grouping Network for Indoor 3D Object Detection [23.208654655032955]
Current 3D object detection methods for indoor scenes mainly follow the voting-and-grouping strategy to generate proposals.
We propose a novel superpoint grouping network for indoor anchor-free one-stage 3D object detection.
Experimental results demonstrate our method achieves state-of-the-art performance on ScanNet V2, SUN RGB-D, and S3DIS datasets.
arXiv Detail & Related papers (2023-12-21T08:08:02Z)
- Prototype Adaption and Projection for Few- and Zero-shot 3D Point Cloud Semantic Segmentation [30.18333233940194]
We address the challenging task of few-shot and zero-shot 3D point cloud semantic segmentation.
Our proposed method surpasses state-of-the-art algorithms by a considerable 7.90% and 14.82% under the 2-way 1-shot setting on S3DIS and ScanNet benchmarks, respectively.
arXiv Detail & Related papers (2023-05-23T17:58:05Z)
- Position-Guided Point Cloud Panoptic Segmentation Transformer [118.17651196656178]
This work begins by applying the query-based segmentation paradigm to LiDAR-based point cloud segmentation and obtains a simple yet effective baseline.
We observe that instances in sparse point clouds are relatively small compared to the whole scene and often have similar geometry while lacking distinctive appearance, characteristics that are rare in the image domain.
The method, named Position-guided Point cloud Panoptic segmentation transFormer (P3Former), outperforms previous state-of-the-art methods by 3.4% and 1.2% on the SemanticKITTI and nuScenes benchmarks, respectively.
arXiv Detail & Related papers (2023-03-23T17:59:02Z)
- Part-guided Relational Transformers for Fine-grained Visual Recognition [59.20531172172135]
We propose a framework to learn the discriminative part features and explore correlations with a feature transformation module.
Our proposed approach does not rely on additional part branches and reaches state-of-the-art performance on three fine-grained object recognition benchmarks.
arXiv Detail & Related papers (2022-12-28T03:45:56Z)
- Superpoint Transformer for 3D Scene Instance Segmentation [7.07321040534471]
This paper proposes a novel end-to-end 3D instance segmentation method based on Superpoint Transformer, named as SPFormer.
It groups potential features from point clouds into superpoints and directly predicts instances through query vectors (see the sketches after this list).
It outperforms state-of-the-art methods by 4.3% mAP on the ScanNetv2 hidden test set while keeping a fast inference speed (247 ms per frame).
arXiv Detail & Related papers (2022-11-28T20:52:53Z)
- CAGroup3D: Class-Aware Grouping for 3D Object Detection on Point Clouds [55.44204039410225]
We present a novel two-stage fully sparse convolutional 3D object detection framework, named CAGroup3D.
Our proposed method first generates high-quality 3D proposals by leveraging a class-aware local grouping strategy on object surface voxels.
To recover the features of missed voxels due to incorrect voxel-wise segmentation, we build a fully sparse convolutional RoI pooling module.
arXiv Detail & Related papers (2022-10-09T13:38:48Z)
- LATFormer: Locality-Aware Point-View Fusion Transformer for 3D Shape Recognition [38.540048855119004]
We propose a novel Locality-Aware Point-View Fusion Transformer (LATFormer) for 3D shape retrieval and classification.
The core component of LATFormer is a module named Locality-Aware Fusion (LAF) which integrates the local features of correlated regions across the two modalities.
In our LATFormer, we utilize the LAF module to fuse the multi-scale features of the two modalities both bidirectionally and hierarchically to obtain more informative features.
arXiv Detail & Related papers (2021-09-03T03:23:27Z)
- Dynamic Convolution for 3D Point Cloud Instance Segmentation [146.7971476424351]
We propose an approach to instance segmentation from 3D point clouds based on dynamic convolution.
We gather homogeneous points that have identical semantic categories and close votes for the geometric centroids.
The proposed approach is proposal-free, and instead exploits a convolution process that adapts to the spatial and semantic characteristics of each instance (see the dynamic-convolution sketch after this list).
arXiv Detail & Related papers (2021-07-18T09:05:16Z)
- Similarity-Aware Fusion Network for 3D Semantic Segmentation [87.51314162700315]
We propose a similarity-aware fusion network (SAFNet) to adaptively fuse 2D images and 3D point clouds for 3D semantic segmentation.
We employ a late fusion strategy where we first learn the geometric and contextual similarities between the input and back-projected (from 2D pixels) point clouds.
We show that SAFNet significantly outperforms existing state-of-the-art fusion-based approaches across various levels of data integrity (see the fusion sketch after this list).
arXiv Detail & Related papers (2021-07-04T09:28:18Z)
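Several of the ideas above lend themselves to short illustrations. First, the superpoint-and-query paradigm of SPFormer (which PSGformer builds on): pool point features into superpoints, then let learnable queries attend to them and predict superpoint masks. This is a minimal, hypothetical PyTorch sketch, not either paper's released code.

```python
# Hypothetical sketch of superpoint pooling plus query-based instance
# prediction, in the spirit of SPFormer; details are illustrative only.
import torch
import torch.nn as nn


def superpoint_pool(point_feats, sp_ids, num_superpoints):
    """Average per-point features within each superpoint (scatter-mean).
    point_feats: (N, C) float features; sp_ids: (N,) long superpoint index."""
    C = point_feats.size(1)
    sums = torch.zeros(num_superpoints, C).index_add_(0, sp_ids, point_feats)
    counts = torch.zeros(num_superpoints).index_add_(0, sp_ids,
                                                     torch.ones(sp_ids.size(0)))
    return sums / counts.clamp(min=1).unsqueeze(-1)  # (S, C)


class QueryDecoder(nn.Module):
    """Learnable instance queries attend to superpoint features; each query
    then predicts a superpoint mask via a dot product."""

    def __init__(self, d=256, num_queries=100, nhead=8):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, d))
        self.attn = nn.MultiheadAttention(d, nhead, batch_first=True)

    def forward(self, sp_feats):                   # (S, d) superpoint features
        kv = sp_feats.unsqueeze(0)                 # add a batch dimension
        q, _ = self.attn(self.queries.unsqueeze(0), kv, kv)
        mask_logits = q.squeeze(0) @ sp_feats.t()  # (num_queries, S)
        return mask_logits.sigmoid()               # soft per-query instance masks
```

Second, the dynamic-convolution idea summarized above: each instance candidate generates the weights of a tiny mask head that is then applied to every point's features. The layer sizes and two-layer head here are arbitrary choices for this sketch, not the paper's configuration.

```python
import torch
import torch.nn as nn


class DynamicMaskHead(nn.Module):
    """A controller predicts per-instance weights for a small two-layer MLP,
    which is applied to all point features to score that instance's mask."""

    def __init__(self, c_in=32, c_mid=16):
        super().__init__()
        self.c_in, self.c_mid = c_in, c_mid
        n_params = c_in * c_mid + c_mid + c_mid + 1   # w1, b1, w2, b2
        self.controller = nn.Linear(c_in, n_params)

    def forward(self, inst_embed, point_feats):       # (c_in,), (N, c_in)
        p = self.controller(inst_embed)
        i = self.c_in * self.c_mid
        w1 = p[:i].view(self.c_mid, self.c_in)
        b1 = p[i:i + self.c_mid]
        w2 = p[i + self.c_mid:i + 2 * self.c_mid].view(1, self.c_mid)
        b2 = p[i + 2 * self.c_mid:]
        h = torch.relu(point_feats @ w1.t() + b1)     # instance-specific filter
        return (h @ w2.t() + b2).squeeze(-1)          # (N,) mask logits
```

Finally, a minimal form of SAFNet-style similarity-aware late fusion: a learned per-point similarity score gates how much of the back-projected 2D feature is merged into the 3D feature. The gating network is an assumption for this sketch.

```python
import torch
import torch.nn as nn


class SimilarityAwareFusion(nn.Module):
    """Gate the 2D (back-projected) stream by a learned per-point similarity
    between the two modalities before adding it to the 3D stream."""

    def __init__(self, d=64):
        super().__init__()
        self.sim = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))

    def forward(self, feats_3d, feats_2d):        # both (N, d), point-aligned
        s = torch.sigmoid(self.sim(torch.cat([feats_3d, feats_2d], dim=-1)))
        return feats_3d + s * feats_2d            # trust 2D where similar
```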