Vision Transformer with Super Token Sampling
- URL: http://arxiv.org/abs/2211.11167v2
- Date: Thu, 25 Jan 2024 08:23:42 GMT
- Title: Vision Transformer with Super Token Sampling
- Authors: Huaibo Huang, Xiaoqiang Zhou, Jie Cao, Ran He, Tieniu Tan
- Abstract summary: Vision transformer has achieved impressive performance for many vision tasks.
It may suffer from high redundancy in capturing local features for shallow layers.
Super tokens attempt to provide a semantically meaningful tessellation of visual content.
- Score: 93.70963123497327
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision transformer has achieved impressive performance for many vision tasks.
However, it may suffer from high redundancy in capturing local features for
shallow layers. Local self-attention or early-stage convolutions are thus
utilized, which sacrifice the capacity to capture long-range dependency. A
challenge then arises: can we access efficient and effective global context
modeling at the early stages of a neural network? To address this issue, we
draw inspiration from the design of superpixels, which reduces the number of
image primitives in subsequent processing, and introduce super tokens into
vision transformer. Super tokens attempt to provide a semantically meaningful
tessellation of visual content, thus reducing the token number in
self-attention as well as preserving global modeling. Specifically, we propose
a simple yet strong super token attention (STA) mechanism with three steps: the
first samples super tokens from visual tokens via sparse association learning,
the second performs self-attention on super tokens, and the last maps them back
to the original token space. STA decomposes vanilla global attention into
multiplications of a sparse association map and a low-dimensional attention,
leading to high efficiency in capturing global dependencies. Based on STA, we
develop a hierarchical vision transformer. Extensive experiments demonstrate
its strong performance on various vision tasks. In particular, without any
extra training data or label, it achieves 86.4% top-1 accuracy on ImageNet-1K
with less than 100M parameters. It also achieves 53.9 box AP and 46.8 mask AP
on the COCO detection task, and 51.9 mIoU on the ADE20K semantic segmentation
task. Code is released at https://github.com/hhb072/STViT.
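The abstract describes STA as three steps but includes no code here. Below is a minimal, self-contained PyTorch sketch of that flow under simplifying assumptions: a dense softmax association stands in for the sparse association learning, the attention is single-head, and `super_token_attention`, its shapes, and the toy inputs are hypothetical illustrations, not the released STViT implementation (see the repository linked above for the real code).

```python
import torch

def super_token_attention(x, init_super, temperature=1.0):
    """x: (B, N, C) visual tokens; init_super: (B, M, C) initial super tokens, M << N."""
    C = x.size(-1)
    # Step 1: associate every visual token with the super tokens. A dense softmax
    # over the M super tokens stands in for the paper's sparse association learning.
    assoc = torch.softmax(x @ init_super.transpose(1, 2) / (temperature * C ** 0.5), dim=-1)  # (B, N, M)
    # Aggregate visual tokens into super tokens, normalising by the association mass.
    super_tokens = assoc.transpose(1, 2) @ x                              # (B, M, C)
    super_tokens = super_tokens / (assoc.sum(dim=1).unsqueeze(-1) + 1e-6)
    # Step 2: plain self-attention among the M super tokens (cost M^2 instead of N^2).
    attn = torch.softmax(super_tokens @ super_tokens.transpose(1, 2) / C ** 0.5, dim=-1)
    super_tokens = attn @ super_tokens                                    # (B, M, C)
    # Step 3: map the refined super tokens back to the original token space.
    return assoc @ super_tokens                                           # (B, N, C)

x = torch.randn(2, 196, 64)          # e.g. a 14x14 grid of 64-dimensional tokens
init_super = torch.randn(2, 16, 64)  # 16 super tokens, e.g. from 4x4 average pooling
print(super_token_attention(x, init_super).shape)  # torch.Size([2, 196, 64])
```

With M much smaller than N, the association and mapping products cost O(NMC) and the super-token attention O(M^2 C), which reflects the decomposition into a sparse association map and a low-dimensional attention that the abstract credits for the efficiency gain.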
Related papers
- SG-Former: Self-guided Transformer with Evolving Token Reallocation [89.9363449724261]
We propose a novel model, termed as Self-guided Transformer, towards effective global self-attention with adaptive fine granularity.
We assign more tokens to the salient regions for achieving fine-grained attention, while allocating fewer tokens to the minor regions in exchange for efficiency and global receptive fields.
The proposed SG-Former achieves performance superior to the state of the art: our base-size model achieves 84.7% Top-1 accuracy on ImageNet-1K, 51.2 box AP on COCO, and 52.7 mIoU.
arXiv Detail & Related papers (2023-08-23T15:52:45Z)
- Making Vision Transformers Efficient from A Token Sparsification View [26.42498120556985]
We propose a novel Semantic Token ViT (STViT) for efficient global and local vision transformers.
Our method achieves results competitive with the original networks in object detection and instance segmentation, with over a 30% FLOPs reduction for the backbone.
In addition, we design an STViT-R(ecover) network to restore the detailed spatial information based on STViT, making it work for downstream tasks.
arXiv Detail & Related papers (2023-03-15T15:12:36Z)
- Not All Tokens Are Equal: Human-centric Visual Analysis via Token Clustering Transformer [91.49837514935051]
We propose a novel Vision Transformer, called the Token Clustering Transformer (TCFormer).
TCFormer merges tokens by progressive clustering, where the tokens can be merged from different locations with flexible shapes and sizes.
Experiments show that TCFormer consistently outperforms its counterparts on different challenging human-centric tasks and datasets.
arXiv Detail & Related papers (2022-04-19T05:38:16Z)
- UniFormer: Unifying Convolution and Self-attention for Visual Recognition [69.68907941116127]
Convolutional neural networks (CNNs) and vision transformers (ViTs) have been the two dominant frameworks in the past few years.
We propose a novel Unified transFormer (UniFormer) which seamlessly integrates the merits of convolution and self-attention in a concise transformer format.
Our UniFormer achieves 86.3% top-1 accuracy on ImageNet-1K classification.
arXiv Detail & Related papers (2022-01-24T04:39:39Z)
- Shunted Self-Attention via Multi-Scale Token Aggregation [124.16925784748601]
Recent Vision Transformer (ViT) models have demonstrated encouraging results across various computer vision tasks.
We propose shunted self-attention (SSA) that allows ViTs to model the attentions at hybrid scales per attention layer.
The SSA-based transformer achieves 84.0% Top-1 accuracy and outperforms the state-of-the-art Focal Transformer on ImageNet.
arXiv Detail & Related papers (2021-11-30T08:08:47Z)
- DynamicViT: Efficient Vision Transformers with Dynamic Token Sparsification [134.9393799043401]
We propose a dynamic token sparsification framework to prune redundant tokens based on the input.
By hierarchically pruning 66% of the input tokens, our method greatly reduces FLOPs by 31%~37% and improves the throughput by over 40%.
DynamicViT models can achieve very competitive complexity/accuracy trade-offs compared to state-of-the-art CNNs and vision transformers on ImageNet.
arXiv Detail & Related papers (2021-06-03T17:57:41Z)
- KVT: k-NN Attention for Boosting Vision Transformers [44.189475770152185]
We propose a sparse attention scheme, dubbed k-NN attention, for boosting vision transformers.
The proposed k-NN attention naturally inherits the local bias of CNNs without introducing convolutional operations.
We verify, both theoretically and empirically, that $k$-NN attention is powerful in distilling noise from input tokens and in speeding up training.
arXiv Detail & Related papers (2021-05-28T06:49:10Z)
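The KVT entry above summarizes k-NN attention as letting each query attend only to its k most similar keys. Here is a minimal sketch of that top-k masking idea, assuming single-head attention and a hypothetical `knn_attention` helper rather than the KVT code:

```python
import torch

def knn_attention(q, k, v, topk=8):
    """q, k, v: (B, N, C); each query attends only to its topk highest-scoring keys."""
    C = q.size(-1)
    scores = q @ k.transpose(1, 2) / C ** 0.5              # (B, N, N) full score map
    # Keep the top-k scores per query and mask out the rest before the softmax.
    kth = scores.topk(topk, dim=-1).values[..., -1:]       # k-th largest score per query
    attn = torch.softmax(scores.masked_fill(scores < kth, float("-inf")), dim=-1)
    return attn @ v                                        # (B, N, C)

q, k, v = (torch.randn(1, 49, 32) for _ in range(3))
print(knn_attention(q, k, v).shape)  # torch.Size([1, 49, 32])
```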
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.