Adaptive Point Transformer
- URL: http://arxiv.org/abs/2401.14845v1
- Date: Fri, 26 Jan 2024 13:24:45 GMT
- Title: Adaptive Point Transformer
- Authors: Alessandro Baiocchi, Indro Spinelli, Alessandro Nicolosi, Simone Scardapane
- Abstract summary: Adaptive Point Cloud Transformer (AdaPT) is a standard PT model augmented by an adaptive token selection mechanism.
AdaPT dynamically reduces the number of tokens during inference, enabling efficient processing of large point clouds.
- Score: 88.28498667506165
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The recent surge in 3D data acquisition has spurred the development of
geometric deep learning models for point cloud processing, boosted by the
remarkable success of transformers in natural language processing. While point
cloud transformers (PTs) have achieved impressive results recently, their
quadratic scaling with respect to the point cloud size poses a significant
scalability challenge for real-world applications. To address this issue, we
propose the Adaptive Point Cloud Transformer (AdaPT), a standard PT model
augmented by an adaptive token selection mechanism. AdaPT dynamically reduces
the number of tokens during inference, enabling efficient processing of large
point clouds. Furthermore, we introduce a budget mechanism to flexibly adjust
the computational cost of the model at inference time without the need for
retraining or fine-tuning separate models. Our extensive experimental
evaluation on point cloud classification tasks demonstrates that AdaPT
significantly reduces computational complexity while maintaining competitive
accuracy compared to standard PTs. The code for AdaPT is made publicly
available.
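The abstract describes the token selection and budget mechanisms only at a high level. Below is a minimal PyTorch-style sketch of what an adaptive token selection layer with an inference-time budget could look like; the scoring head, the hard top-k selection, and all names (`TokenSelector`, `budget`) are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of adaptive token selection with an
# inference-time budget, in the spirit of AdaPT. The scoring MLP, hard top-k
# selection, and the `budget` ratio are assumptions for illustration.
import torch
import torch.nn as nn


class TokenSelector(nn.Module):
    """Scores tokens and keeps the top fraction given by `budget`."""

    def __init__(self, dim: int):
        super().__init__()
        # Lightweight scoring head: one scalar "keep" score per token.
        self.score = nn.Sequential(nn.Linear(dim, dim // 4), nn.GELU(),
                                   nn.Linear(dim // 4, 1))

    def forward(self, x: torch.Tensor, budget: float) -> torch.Tensor:
        # x: (B, N, D) token embeddings; budget in (0, 1] is chosen at
        # inference time, so no retraining is needed to change the cost.
        scores = self.score(x).squeeze(-1)           # (B, N)
        k = max(1, int(x.shape[1] * budget))         # tokens to keep
        idx = scores.topk(k, dim=1).indices          # (B, k) highest-scoring
        idx = idx.unsqueeze(-1).expand(-1, -1, x.shape[-1])
        return x.gather(1, idx)                      # (B, k, D) reduced set


if __name__ == "__main__":
    layer = TokenSelector(dim=128)
    tokens = torch.randn(2, 1024, 128)               # 1024 point tokens
    # Same weights, different compute budgets at inference time.
    print(layer(tokens, budget=1.0).shape)           # torch.Size([2, 1024, 128])
    print(layer(tokens, budget=0.25).shape)          # torch.Size([2, 256, 128])
```

One caveat of this sketch: hard top-k selection is not differentiable, so training such a selector end to end typically requires a relaxation such as a straight-through or Gumbel-style estimator; the hard selection shown here would apply at inference time.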
Related papers
- CloudFixer: Test-Time Adaptation for 3D Point Clouds via Diffusion-Guided Geometric Transformation [33.07886526437753]
3D point clouds captured from real-world sensors frequently contain noisy points caused by various obstacles.
These challenges hinder the deployment of pre-trained point cloud recognition models trained on clean point clouds.
We propose CloudFixer, a test-time input adaptation method tailored for 3D point clouds.
arXiv Detail & Related papers (2024-07-23T05:35:04Z)
- PointRWKV: Efficient RWKV-Like Model for Hierarchical Point Cloud Learning [56.14518823931901]
We present PointRWKV, a model of linear complexity derived from the RWKV model in the NLP field.
We first propose to explore the global processing capabilities within PointRWKV blocks using modified multi-headed matrix-valued states.
To extract local geometric features simultaneously, we design a parallel branch that efficiently encodes the point cloud in a fixed-radius near-neighbors graph with a graph stabilizer.
arXiv Detail & Related papers (2024-05-24T05:02:51Z)
- Instance-aware Dynamic Prompt Tuning for Pre-trained Point Cloud Models [64.49254199311137]
We propose a novel Instance-aware Dynamic Prompt Tuning (IDPT) strategy for pre-trained point cloud models.
The essence of IDPT is to develop a dynamic prompt generation module to perceive semantic prior features of each point cloud instance.
In experiments, IDPT outperforms full fine-tuning in most tasks with a mere 7% of the trainable parameters.
arXiv Detail & Related papers (2023-04-14T16:03:09Z)
- AdaPoinTr: Diverse Point Cloud Completion with Adaptive Geometry-Aware Transformers [94.11915008006483]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We design a new model, called PoinTr, which adopts a Transformer encoder-decoder architecture for point cloud completion.
Our method attains 6.53 CD on PCN, 0.81 CD on ShapeNet-55 and 0.392 MMD on real-world KITTI.
arXiv Detail & Related papers (2023-01-11T16:14:12Z)
- Point-BERT: Pre-training 3D Point Cloud Transformers with Masked Point Modeling [104.82953953453503]
We present Point-BERT, a new paradigm for learning Transformers that generalizes the concept of BERT to 3D point clouds.
Experiments demonstrate that the proposed BERT-style pre-training strategy significantly improves the performance of standard point cloud Transformers.
arXiv Detail & Related papers (2021-11-29T18:59:03Z)
- Point Cloud Augmentation with Weighted Local Transformations [14.644850090688406]
We propose PointWOLF, a simple and effective augmentation method for point clouds.
It produces smoothly varying non-rigid deformations through locally weighted transformations centered at multiple anchor points (a minimal sketch of this idea follows the list below).
The accompanying AugTune scheme generates augmented samples of desired difficulty by producing targeted confidence scores.
arXiv Detail & Related papers (2021-10-11T16:11:26Z)
- PoinTr: Diverse Point Cloud Completion with Geometry-Aware Transformers [81.71904691925428]
We present a new method that reformulates point cloud completion as a set-to-set translation problem.
We also design a new model, called PoinTr, that adopts a transformer encoder-decoder architecture for point cloud completion.
Our method outperforms state-of-the-art methods by a large margin on both the new benchmarks and the existing ones.
arXiv Detail & Related papers (2021-08-19T17:58:56Z)
- PCT: Point Cloud Transformer [35.34343810480954]
This paper presents PCT (Point Cloud Transformer), a novel framework for point cloud learning.
PCT is based on the Transformer architecture, which has achieved great success in natural language processing.
The attention mechanism is inherently permutation invariant when processing a sequence of points, making it well-suited to point cloud learning (a brief sketch of this property follows the list).
arXiv Detail & Related papers (2020-12-17T15:55:17Z)
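The permutation-invariance claim in the PCT entry above is easy to check empirically: self-attention without positional encodings treats its input as a set. The snippet below is a generic demonstration using a stock attention layer, not PCT's actual architecture.

```python
# Illustration (not PCT's code) of why attention without positional encodings
# is permutation equivariant: shuffling the input points shuffles the outputs
# the same way, so a symmetric pooling at the end makes the whole model
# permutation invariant.
import torch
import torch.nn as nn

attn = nn.MultiheadAttention(embed_dim=64, num_heads=4, batch_first=True)
attn.eval()  # disable dropout so both passes are deterministic

points = torch.randn(1, 128, 64)                  # 128 point tokens
perm = torch.randperm(128)

out, _ = attn(points, points, points)
out_perm, _ = attn(points[:, perm], points[:, perm], points[:, perm])

# Equivariance: permuted input gives identically permuted output ...
print(torch.allclose(out[:, perm], out_perm, atol=1e-5))          # True
# ... so max-pooling over points yields a permutation-invariant feature.
print(torch.allclose(out.max(dim=1).values,
                     out_perm.max(dim=1).values, atol=1e-5))      # True
```

As referenced in the PointWOLF entry above, here is a minimal sketch of locally weighted deformations: sample anchor points, assign each a random transformation, and blend the transformed copies per point with smooth distance-based weights. The Gaussian kernel, sampling ranges, and function name are illustrative assumptions rather than the paper's exact recipe.

```python
# Minimal sketch of PointWOLF-style augmentation. Each anchor point carries a
# random rigid-ish transformation; every point blends the transformed copies
# with smooth distance-based weights, giving a non-rigid deformation.
import numpy as np


def point_wolf_sketch(points: np.ndarray, n_anchors: int = 4,
                      sigma: float = 0.5, max_rot_deg: float = 15.0,
                      max_shift: float = 0.1) -> np.ndarray:
    """points: (N, 3) array; returns a smoothly deformed copy."""
    rng = np.random.default_rng()
    anchors = points[rng.choice(len(points), n_anchors, replace=False)]

    transformed = []
    for a in anchors:
        # Random small rotation about the z-axis plus a random translation,
        # applied in the anchor's local frame.
        theta = np.deg2rad(rng.uniform(-max_rot_deg, max_rot_deg))
        c, s = np.cos(theta), np.sin(theta)
        R = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
        t = rng.uniform(-max_shift, max_shift, size=3)
        transformed.append((points - a) @ R.T + a + t)   # (N, 3)

    # Smooth per-point weights: closer anchors influence a point more.
    d2 = ((points[:, None, :] - anchors[None, :, :]) ** 2).sum(-1)  # (N, K)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)                               # (N, K)

    # Convex combination of the K transformed copies, per point.
    return np.einsum("nk,knd->nd", w, np.stack(transformed))


cloud = np.random.randn(1024, 3).astype(np.float32)
augmented = point_wolf_sketch(cloud)
print(augmented.shape)  # (1024, 3)
```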