FlatFormer: Flattened Window Attention for Efficient Point Cloud
Transformer
- URL: http://arxiv.org/abs/2301.08739v3
- Date: Fri, 14 Jul 2023 18:57:30 GMT
- Title: FlatFormer: Flattened Window Attention for Efficient Point Cloud
Transformer
- Authors: Zhijian Liu, Xinyu Yang, Haotian Tang, Shang Yang, Song Han
- Abstract summary: Transformers, as an alternative to CNNs, have proven effective in many modalities.
This paper presents FlatFormer to close this latency gap by trading spatial proximity for better computational regularity.
- Score: 30.596658616831945
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transformers, as an alternative to CNNs, have proven effective in many
modalities (e.g., text and images). For 3D point cloud transformers, existing
efforts focus primarily on pushing their accuracy to the state-of-the-art
level. However, their latency lags behind sparse convolution-based models (3x
slower), hindering their usage in resource-constrained, latency-sensitive
applications (such as autonomous driving). This inefficiency comes from point
clouds' sparse and irregular nature, whereas transformers are designed for
dense, regular workloads. This paper presents FlatFormer to close this latency
gap by trading spatial proximity for better computational regularity. We first
flatten the point cloud with window-based sorting and partition points into
groups of equal size rather than windows of equal shape. This effectively
avoids expensive structuring and padding overheads. We then apply
self-attention within groups to extract local features, alternate sorting axis
to gather features from different directions, and shift windows to exchange
features across groups. FlatFormer delivers state-of-the-art accuracy on Waymo
Open Dataset with 4.6x speedup over (transformer-based) SST and 1.4x speedup
over (sparse convolutional) CenterPoint. This is the first point cloud
transformer that achieves real-time performance on edge GPUs and is faster than
sparse convolutional methods while achieving on-par or even superior accuracy
on large-scale benchmarks.
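
A minimal sketch of the flattened window attention idea described above, written in PyTorch: sort points by window index, cut the sorted sequence into equal-size groups, and attend within each group. The function name, group size, window size, toy composite sort key, and single-head attention are illustrative assumptions for exposition, not the authors' implementation.

# Sketch of flattened window attention (assumptions: toy sort key that
# expects fewer than 1000 windows per axis, single-head attention).
import torch
import torch.nn.functional as F

def flattened_window_attention(coords, feats, window=4.0, group_size=64, axis=0):
    """coords: (N, 3) point/voxel coordinates; feats: (N, C) features."""
    n, c = feats.shape
    # 1. Window-based sorting: quantize coordinates into windows and sort
    #    points window-by-window along the chosen major axis.
    coords = coords - coords.min(dim=0).values              # non-negative indices
    win = torch.div(coords, window, rounding_mode="floor")
    a0, a1, a2 = axis, (axis + 1) % 3, (axis + 2) % 3
    key = win[:, a0] * 1e6 + win[:, a1] * 1e3 + win[:, a2]  # toy composite key
    order = torch.argsort(key)
    x = feats[order]

    # 2. Equal-size grouping: pad the tail so N becomes a multiple of
    #    group_size, then reshape; no per-window structuring is needed.
    pad = (-n) % group_size
    x = F.pad(x, (0, 0, 0, pad)).view(-1, group_size, c)    # (G, S, C)

    # 3. Self-attention within each group.
    attn = torch.softmax(x @ x.transpose(1, 2) / c**0.5, dim=-1)
    out = (attn @ x).reshape(-1, c)[:n]                     # drop the padding

    # Undo the sort so outputs align with the original point order.
    inv = torch.empty_like(order)
    inv[order] = torch.arange(n)
    return out[inv]

Per the abstract, successive blocks would alternate the sorting axis and shift the window origin (e.g., offset coords by half a window before sorting) so that features are exchanged across directions and groups.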
Related papers
- ConDaFormer: Disassembled Transformer with Local Structure Enhancement
for 3D Point Cloud Understanding [105.98609765389895]
Transformers have been recently explored for 3D point cloud understanding.
The large number of points, often over 0.1 million, makes global self-attention infeasible for point cloud data.
In this paper, we develop a new transformer block, named ConDaFormer.
arXiv Detail & Related papers (2023-12-18T11:19:45Z)
- Point Cloud Classification Using Content-based Transformer via Clustering in Feature Space [25.57569871876213]
We propose a point content-based Transformer architecture, called PointConT for short.
It exploits the locality of points in feature space (hence "content-based"): sampled points with similar features are clustered into the same class, and self-attention is computed within each class (see the sketch after this list).
We also introduce an Inception feature aggregator for point cloud classification, which uses parallel structures to aggregate high-frequency and low-frequency information in each branch separately.
arXiv Detail & Related papers (2023-03-08T14:11:05Z)
- Using a Waffle Iron for Automotive Point Cloud Semantic Segmentation [66.6890991207065]
Sparse 3D convolutions have become the de facto tool for constructing deep neural networks.
We propose an alternative method that reaches the level of state-of-the-art methods without requiring sparse convolutions.
We show that such a level of performance is achievable by relying on tools a priori unfit for large-scale, high-performance 3D perception.
arXiv Detail & Related papers (2023-01-24T16:10:08Z)
- DSVT: Dynamic Sparse Voxel Transformer with Rotated Sets [95.84755169585492]
We present Dynamic Sparse Voxel Transformer (DSVT), a single-stride window-based voxel Transformer backbone for outdoor 3D perception.
Our model achieves state-of-the-art performance across a broad range of 3D perception tasks.
arXiv Detail & Related papers (2023-01-15T09:31:58Z)
- CloudAttention: Efficient Multi-Scale Attention Scheme For 3D Point Cloud Learning [81.85951026033787]
In this work, we adopt transformers and incorporate them into a hierarchical framework for shape classification and for part and scene segmentation.
We also compute efficient, dynamic global cross-attention by leveraging sampling and grouping at each iteration.
The proposed hierarchical model achieves state-of-the-art shape classification in mean accuracy and yields results on par with previous segmentation methods.
arXiv Detail & Related papers (2022-07-31T21:39:15Z)
- TorchSparse: Efficient Point Cloud Inference Engine [24.541195361633523]
We introduce TorchSparse, a high-performance point cloud inference engine.
TorchSparse directly optimizes the two bottlenecks of sparse convolution: irregular computation and data movement.
It achieves 1.6x and 1.5x measured end-to-end speedup over the state-of-the-art MinkowskiEngine and SpConv, respectively.
arXiv Detail & Related papers (2022-04-21T17:58:30Z)
- Stratified Transformer for 3D Point Cloud Segmentation [89.9698499437732]
Stratified Transformer is able to capture long-range contexts and demonstrates strong generalization ability and high performance.
To combat the challenges posed by irregular point arrangements, we propose first-layer point embedding to aggregate local information.
Experiments demonstrate the effectiveness and superiority of our method on S3DIS, ScanNetv2 and ShapeNetPart datasets.
arXiv Detail & Related papers (2022-03-28T05:35:16Z)
- Attention-based Transformation from Latent Features to Point Clouds [6.547680885781582]
AXform is an attention-based method to transform latent features to point clouds.
It takes both parameter sharing and data flow into account, which gives it fewer outliers, fewer network parameters, and faster convergence.
AXform does not have the strong 2-manifold constraint, which improves the generation of non-smooth surfaces.
Extensive experiments on different datasets show that our method achieves state-of-the-art results.
arXiv Detail & Related papers (2021-12-10T03:59:04Z)
- Dynamic Convolution for 3D Point Cloud Instance Segmentation [146.7971476424351]
We propose an approach to instance segmentation from 3D point clouds based on dynamic convolution.
We gather homogeneous points that share a semantic category and vote for nearby geometric centroids.
The proposed approach is proposal-free, and instead exploits a convolution process that adapts to the spatial and semantic characteristics of each instance.
arXiv Detail & Related papers (2021-07-18T09:05:16Z)
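
The PointConT entry above clusters points by feature similarity rather than spatial position and attends within each cluster. The sketch below illustrates that content-based idea under stated assumptions: a single k-means-style assignment step with random centroids, single-head attention, and an illustrative cluster count; it is not the paper's exact procedure.

# Content-based grouped attention in the spirit of the PointConT summary
# (assumptions: one nearest-centroid assignment step, N >= num_clusters).
import torch

def content_based_attention(feats, num_clusters=8):
    """feats: (N, C) point features."""
    n, c = feats.shape
    # Cluster in FEATURE space: pick random centroids, assign each point
    # to its nearest centroid.
    centroids = feats[torch.randperm(n)[:num_clusters]]
    labels = torch.cdist(feats, centroids).argmin(dim=1)   # (N,)

    out = torch.empty_like(feats)
    for k in range(num_clusters):
        idx = (labels == k).nonzero(as_tuple=True)[0]
        if idx.numel() == 0:
            continue
        x = feats[idx]                                     # (m, C)
        attn = torch.softmax(x @ x.T / c**0.5, dim=-1)     # in-cluster attention
        out[idx] = attn @ x
    return out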
This list is automatically generated from the titles and abstracts of the papers in this site.