Stratified Transformer for 3D Point Cloud Segmentation
- URL: http://arxiv.org/abs/2203.14508v1
- Date: Mon, 28 Mar 2022 05:35:16 GMT
- Title: Stratified Transformer for 3D Point Cloud Segmentation
- Authors: Xin Lai, Jianhui Liu, Li Jiang, Liwei Wang, Hengshuang Zhao, Shu Liu,
Xiaojuan Qi, Jiaya Jia
- Abstract summary: Stratified Transformer is able to capture long-range contexts and demonstrates strong generalization ability and high performance.
To combat the challenges posed by irregular point arrangements, we propose first-layer point embedding to aggregate local information.
Experiments demonstrate the effectiveness and superiority of our method on S3DIS, ScanNetv2 and ShapeNetPart datasets.
- Score: 89.9698499437732
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: 3D point cloud segmentation has made tremendous progress in recent years.
Most current methods focus on aggregating local features, but fail to directly
model long-range dependencies. In this paper, we propose Stratified Transformer
that is able to capture long-range contexts and demonstrates strong
generalization ability and high performance. Specifically, we first put forward
a novel key sampling strategy. For each query point, we sample nearby points
densely and distant points sparsely as its keys in a stratified way, which
enables the model to enlarge the effective receptive field and enjoy long-range
contexts at a low computational cost. Also, to combat the challenges posed by
irregular point arrangements, we propose first-layer point embedding to
aggregate local information, which facilitates convergence and boosts
performance. Besides, we adopt contextual relative position encoding to
adaptively capture position information. Finally, a memory-efficient
implementation is introduced to overcome the issue of varying point numbers in
each window. Extensive experiments demonstrate the effectiveness and
superiority of our method on S3DIS, ScanNetv2 and ShapeNetPart datasets. Code
is available at https://github.com/dvlab-research/Stratified-Transformer.
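For intuition, the following is a minimal, brute-force sketch of the stratified key sampling idea: every point inside a small neighborhood of the query becomes a dense key, while points in a larger surrounding region are subsampled into sparse keys. This is not the paper's implementation (the official code builds keys over shifted windows of downsampled points and pairs this with a memory-efficient attention kernel); the function name, radii, and subsampling stride below are illustrative assumptions.

```python
import torch

def stratified_keys(points, query_idx, small_radius=0.5, large_radius=2.0, stride=4):
    """Toy stratified key sampling for a single query point.

    Dense keys: all points within `small_radius` of the query.
    Sparse keys: every `stride`-th point between `small_radius` and `large_radius`.

    points:    (N, 3) tensor of xyz coordinates
    query_idx: index of the query point
    Returns a 1-D tensor of key indices.
    """
    dists = torch.linalg.norm(points - points[query_idx], dim=1)   # (N,) distances to the query

    dense_keys = torch.nonzero(dists < small_radius, as_tuple=False).squeeze(1)

    ring = torch.nonzero((dists >= small_radius) & (dists < large_radius),
                         as_tuple=False).squeeze(1)
    sparse_keys = ring[::stride]                                   # keep only a fraction of the distant points

    return torch.cat([dense_keys, sparse_keys])

if __name__ == "__main__":
    pts = torch.rand(1024, 3) * 4.0            # toy point cloud
    keys = stratified_keys(pts, query_idx=0)
    print(keys.numel())                        # far fewer keys than 1024 points
```

The point of the stratification is that the number of keys per query grows slowly with distance, so the effective receptive field can be enlarged without the quadratic cost of attending densely to all distant points.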
Related papers
- ConDaFormer: Disassembled Transformer with Local Structure Enhancement
for 3D Point Cloud Understanding [105.98609765389895]
Transformers have been recently explored for 3D point cloud understanding.
The large number of points, often over 0.1 million, makes global self-attention infeasible for point cloud data.
In this paper, we develop a new transformer block, named ConDaFormer.
arXiv Detail & Related papers (2023-12-18T11:19:45Z) - PVT-SSD: Single-Stage 3D Object Detector with Point-Voxel Transformer [75.2251801053839]
We present a novel Point-Voxel Transformer for single-stage 3D detection (PVT-SSD).
We propose a Point-Voxel Transformer (PVT) module that obtains long-range contexts from voxels at low cost.
The experiments on several autonomous driving benchmarks verify the effectiveness and efficiency of the proposed method.
arXiv Detail & Related papers (2023-05-11T07:37:15Z) - Point Cloud Classification Using Content-based Transformer via
Clustering in Feature Space [25.57569871876213]
We propose a point content-based Transformer architecture, called PointConT for short.
It exploits the locality of points in feature space (content-based): sampled points with similar features are clustered into the same class, and self-attention is computed within each class (see the sketch after this list).
We also introduce an Inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in each branch separately.
arXiv Detail & Related papers (2023-03-08T14:11:05Z) - CloudAttention: Efficient Multi-Scale Attention Scheme For 3D Point
Cloud Learning [81.85951026033787]
In this work, we employ transformers and incorporate them into a hierarchical framework for shape classification as well as part and scene segmentation.
We also compute efficient and dynamic global cross attentions by leveraging sampling and grouping at each iteration.
The proposed hierarchical model achieves state-of-the-art mean accuracy in shape classification and yields results on par with previous segmentation methods.
arXiv Detail & Related papers (2022-07-31T21:39:15Z) - Dynamic Convolution for 3D Point Cloud Instance Segmentation [146.7971476424351]
We propose an approach to instance segmentation from 3D point clouds based on dynamic convolution.
We gather homogeneous points that have identical semantic categories and vote closely for the same geometric centroid.
The proposed approach is proposal-free, and instead exploits a convolution process that adapts to the spatial and semantic characteristics of each instance.
arXiv Detail & Related papers (2021-07-18T09:05:16Z) - Learning Semantic Segmentation of Large-Scale Point Clouds with Random
Sampling [52.464516118826765]
We introduce RandLA-Net, an efficient and lightweight neural architecture to infer per-point semantics for large-scale point clouds.
The key to our approach is to use random point sampling instead of more complex point selection approaches.
Our RandLA-Net can process 1 million points in a single pass up to 200x faster than existing approaches.
arXiv Detail & Related papers (2021-07-06T05:08:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.