CrossFormer: A Versatile Vision Transformer Based on Cross-scale
Attention
- URL: http://arxiv.org/abs/2108.00154v1
- Date: Sat, 31 Jul 2021 05:52:21 GMT
- Title: CrossFormer: A Versatile Vision Transformer Based on Cross-scale
Attention
- Authors: Wenxiao Wang, Lu Yao, Long Chen, Deng Cai, Xiaofei He and Wei Liu
- Abstract summary: We propose Cross-scale Embedding Layer (CEL) and Long Short Distance Attention (LSDA)
CEL blends each embedding with multiple patches of different scales, providing the model with cross-scale embeddings.
LSDA splits the self-attention module into a short-distance and long-distance one, also lowering the cost but keeping both small-scale and large-scale features in embeddings.
- Score: 37.39327010226153
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transformers have made much progress in dealing with visual tasks. However,
existing vision transformers still do not possess an ability that is important
to visual input: building the attention among features of different scales. The
reasons for this problem are two-fold: (1) Input embeddings of each layer are
equal-scale without cross-scale features; (2) Some vision transformers
sacrifice the small-scale features of embeddings to lower the cost of the
self-attention module. To remedy this defect, we propose Cross-scale Embedding
Layer (CEL) and Long Short Distance Attention (LSDA). In particular, CEL blends
each embedding with multiple patches of different scales, providing the model
with cross-scale embeddings. LSDA splits the self-attention module into a
short-distance and long-distance one, also lowering the cost but keeping both
small-scale and large-scale features in embeddings. Through these two designs,
we achieve cross-scale attention. In addition, we propose a dynamic position
bias for vision transformers so that the popular relative position bias applies
to variable-sized images. Based on these proposed modules, we construct our vision
architecture called CrossFormer. Experiments show that CrossFormer outperforms
other transformers on several representative visual tasks, especially object
detection and segmentation. The code has been released:
https://github.com/cheerss/CrossFormer.
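The abstract's LSDA splits self-attention into a short-distance part over adjacent embeddings and a long-distance part over embeddings sampled at a fixed interval. A minimal NumPy sketch of the two grouping schemes follows; the function names, the group size `G`, and the sampling interval `I` are illustrative assumptions, not the paper's actual API:

```python
import numpy as np

def short_distance_groups(x, G):
    """SDA grouping: each G x G window of adjacent embeddings forms one group.

    x: (S, S, D) grid of embeddings; returns (num_groups, G*G, D).
    Self-attention would then run independently within each group.
    """
    S, _, D = x.shape
    x = x.reshape(S // G, G, S // G, G, D).transpose(0, 2, 1, 3, 4)
    return x.reshape(-1, G * G, D)

def long_distance_groups(x, I):
    """LDA grouping: embeddings sampled at a fixed interval I form one group.

    Members of a group sit I positions apart, so every group spans the
    whole feature map and mixes distant embeddings.
    """
    S, _, D = x.shape
    x = x.reshape(S // I, I, S // I, I, D).transpose(1, 3, 0, 2, 4)
    return x.reshape(-1, (S // I) ** 2, D)
```

On a 4x4 grid with G = I = 2, SDA's first group holds the four top-left neighbors, while LDA's first group holds every second position starting at (0, 0); both groupings cut attention cost by shrinking the group size, but only LDA keeps long-range interactions.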
Related papers
- Attention Deficit is Ordered! Fooling Deformable Vision Transformers
with Collaborative Adversarial Patches [3.4673556247932225]
Deformable vision transformers significantly reduce the complexity of attention modeling.
Recent work has demonstrated adversarial attacks against conventional vision transformers.
We develop new collaborative attacks where a source patch manipulates attention to point to a target patch, which contains the adversarial noise to fool the model.
arXiv Detail & Related papers (2023-11-21T17:55:46Z) - CrossFormer++: A Versatile Vision Transformer Hinging on Cross-scale
Attention [20.222118579325297]
We propose a cross-scale vision transformer, CrossFormer.
It introduces a cross-scale embedding layer (CEL) and long-short distance attention (LSDA).
arXiv Detail & Related papers (2023-03-13T07:54:29Z) - Vicinity Vision Transformer [53.43198716947792]
We present a Vicinity Attention that introduces a locality bias to vision transformers with linear complexity.
Our approach achieves state-of-the-art image classification accuracy with 50% fewer parameters than previous methods.
arXiv Detail & Related papers (2022-06-21T17:33:53Z) - Vision Transformer with Deformable Attention [29.935891419574602]
A large, sometimes even global, receptive field endows Transformer models with higher representation power than their CNN counterparts.
We propose a novel deformable self-attention module, where the positions of key and value pairs in self-attention are selected in a data-dependent way.
We present Deformable Attention Transformer, a general backbone model with deformable attention for both image classification and dense prediction tasks.
arXiv Detail & Related papers (2022-01-03T08:29:01Z) - MPViT: Multi-Path Vision Transformer for Dense Prediction [43.89623453679854]
Vision Transformers (ViTs) build a simple multi-stage structure for multi-scale representation with single-scale patches.
Our MPViTs, scaling from Tiny (5M) to Base (73M), consistently achieve superior performance over state-of-the-art Vision Transformers.
arXiv Detail & Related papers (2021-12-21T06:34:50Z) - Shunted Self-Attention via Multi-Scale Token Aggregation [124.16925784748601]
Recent Vision Transformer (ViT) models have demonstrated encouraging results across various computer vision tasks.
We propose shunted self-attention (SSA) that allows ViTs to model the attentions at hybrid scales per attention layer.
The SSA-based transformer achieves 84.0% Top-1 accuracy and outperforms the state-of-the-art Focal Transformer on ImageNet.
arXiv Detail & Related papers (2021-11-30T08:08:47Z) - ViTAE: Vision Transformer Advanced by Exploring Intrinsic Inductive Bias [76.16156833138038]
We propose a novel Vision Transformer Advanced by Exploring intrinsic inductive bias (IB) from convolutions, i.e., ViTAE.
ViTAE has several spatial pyramid reduction modules to downsample and embed the input image into tokens with rich multi-scale context.
In each transformer layer, ViTAE has a convolution block in parallel to the multi-head self-attention module, whose features are fused and fed into the feed-forward network.
arXiv Detail & Related papers (2021-06-07T05:31:06Z) - Multiscale Vision Transformers [79.76412415996892]
We present Multiscale Vision Transformers (MViT) for video and image recognition, by connecting the seminal idea of multiscale feature hierarchies with transformer models.
We evaluate this fundamental architectural prior for modeling the dense nature of visual signals for a variety of video recognition tasks.
arXiv Detail & Related papers (2021-04-22T17:59:45Z) - Multi-Scale Vision Longformer: A New Vision Transformer for
High-Resolution Image Encoding [81.07894629034767]
This paper presents a new Vision Transformer (ViT) architecture Multi-Scale Vision Longformer.
It significantly enhances the ViT of Dosovitskiy et al. (2020) for encoding high-resolution images using two techniques.
arXiv Detail & Related papers (2021-03-29T06:23:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.