CrossFormer++: A Versatile Vision Transformer Hinging on Cross-scale
Attention
- URL: http://arxiv.org/abs/2303.06908v2
- Date: Fri, 1 Dec 2023 02:13:22 GMT
- Title: CrossFormer++: A Versatile Vision Transformer Hinging on Cross-scale
Attention
- Authors: Wenxiao Wang, Wei Chen, Qibo Qiu, Long Chen, Boxi Wu, Binbin Lin,
Xiaofei He and Wei Liu
- Abstract summary: We propose a cross-scale vision transformer, CrossFormer.
It introduces a cross-scale embedding layer (CEL) and a long-short distance attention (LSDA).
- Score: 20.222118579325297
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While features of different scales are perceptually important to visual
inputs, existing vision transformers do not yet take advantage of them
explicitly. To this end, we first propose a cross-scale vision transformer,
CrossFormer. It introduces a cross-scale embedding layer (CEL) and a long-short
distance attention (LSDA). On the one hand, CEL blends each token with multiple
patches of different scales, providing the self-attention module itself with
cross-scale features. On the other hand, LSDA splits the self-attention module
into a short-distance one and a long-distance counterpart, which not only
reduces the computational burden but also keeps both small-scale and
large-scale features in the tokens. Moreover, through experiments on
CrossFormer, we observe two further issues that affect vision transformers'
performance, i.e., enlarging self-attention maps and amplitude explosion.
Thus, we further propose a progressive group size (PGS) paradigm and an
amplitude cooling layer (ACL) to alleviate the two issues, respectively.
CrossFormer incorporating PGS and ACL is called CrossFormer++. Extensive
experiments show that CrossFormer++ outperforms other vision transformers
on image classification, object detection, instance segmentation, and semantic
segmentation tasks. The code will be available at:
https://github.com/cheerss/CrossFormer.
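The abstract describes CEL and LSDA concretely enough to sketch. Below is a minimal PyTorch sketch of both ideas as stated above: CEL concatenates patch embeddings of several scales per token, and LSDA groups tokens either into contiguous windows (short-distance) or interval-sampled windows (long-distance) before applying ordinary self-attention inside each group. The kernel sizes, the even channel split across scales, the group size G, and the interval I are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of CEL and LSDA, based only on the abstract above;
# all hyper-parameters (kernels, channel split, G, I) are assumptions.
import torch
import torch.nn as nn

class CrossScaleEmbedding(nn.Module):
    """CEL: concatenate convolutional embeddings that share one stride but
    use different kernel sizes, so each output token blends several scales."""
    def __init__(self, in_ch=3, dim=96, kernels=(4, 8, 16, 32), stride=4):
        super().__init__()
        per_scale = [dim // len(kernels)] * len(kernels)
        per_scale[0] += dim - sum(per_scale)   # absorb any remainder
        self.projs = nn.ModuleList(
            nn.Conv2d(in_ch, d, k, stride=stride, padding=(k - stride) // 2)
            for k, d in zip(kernels, per_scale))

    def forward(self, x):                      # x: (B, C, H, W)
        return torch.cat([p(x) for p in self.projs], dim=1)  # (B, dim, H/4, W/4)

def short_distance_groups(x, G):
    """SDA grouping: contiguous G x G windows of the token map."""
    B, H, W, C = x.shape
    x = x.view(B, H // G, G, W // G, G, C).permute(0, 1, 3, 2, 4, 5)
    return x.reshape(-1, G * G, C)

def long_distance_groups(x, I):
    """LDA grouping: tokens sampled at a fixed interval I form one group."""
    B, H, W, C = x.shape
    x = x.view(B, H // I, I, W // I, I, C).permute(0, 2, 4, 1, 3, 5)
    return x.reshape(-1, (H // I) * (W // I), C)

# Plain multi-head self-attention is then applied inside each group:
tokens = torch.randn(2, 28, 28, 96)            # (B, H, W, C) token map
mhsa = nn.MultiheadAttention(96, num_heads=4, batch_first=True)
for g in (short_distance_groups(tokens, G=7),
          long_distance_groups(tokens, I=4)):
    out, _ = mhsa(g, g, g)                     # cost per group: |group|^2, not (H*W)^2
```

In the full model, short-distance and long-distance blocks would alternate, so each token repeatedly mixes with both neighboring and distant tokens; that alternation is what lets LSDA cut the attention cost while, per the abstract, keeping both small-scale and large-scale features in the tokens.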
Related papers
- Vision Backbone Enhancement via Multi-Stage Cross-Scale Attention [5.045944819606334]
The Multi-Stage Cross-Scale Attention (MSCSA) module takes feature maps from different stages to enable multi-stage interactions.
MSCSA provides a significant performance boost with modest additional FLOPs and runtime.
arXiv Detail & Related papers (2023-08-10T22:57:31Z)
- ViT-Calibrator: Decision Stream Calibration for Vision Transformer [49.60474757318486]
We propose a new paradigm dubbed Decision Stream that boosts the performance of general Vision Transformers.
We shed light on the information propagation mechanism in the learning procedure by exploring the correlation between different tokens and the relevance coefficient of multiple dimensions.
arXiv Detail & Related papers (2023-04-10T02:40:24Z)
- Vision Transformer with Quadrangle Attention [76.35955924137986]
We propose a novel quadrangle attention (QA) method that extends the window-based attention to a general quadrangle formulation.
Our method employs an end-to-end learnable quadrangle regression module that predicts a transformation matrix to transform default windows into target quadrangles.
We integrate QA into plain and hierarchical vision transformers to create a new architecture named QFormer, which requires only minor code modifications and adds negligible extra computational cost.
arXiv Detail & Related papers (2023-03-27T11:13:50Z)
- Xformer: Hybrid X-Shaped Transformer for Image Denoising [114.37510775636811]
We present a hybrid X-shaped vision Transformer, named Xformer, which performs notably well on image denoising tasks.
Xformer achieves state-of-the-art performance on synthetic and real-world image denoising tasks.
arXiv Detail & Related papers (2023-03-11T16:32:09Z)
- ParCNetV2: Oversized Kernel with Enhanced Attention [60.141606180434195]
We introduce a convolutional neural network architecture named ParCNetV2.
It extends position-aware circular convolution (ParCNet) with oversized convolutions and strengthens attention through bifurcate gate units.
Our method outperforms other pure convolutional neural networks as well as neural networks hybridizing CNNs and transformers.
arXiv Detail & Related papers (2022-11-14T07:22:55Z)
- Multimodal Fusion Transformer for Remote Sensing Image Classification [35.57881383390397]
Vision transformers (ViTs) have been trending in image classification tasks due to their promising performance when compared to convolutional neural networks (CNNs).
To achieve satisfactory performance close to that of CNNs, transformers need fewer parameters.
We introduce a new multimodal fusion transformer (MFT) network which comprises a multihead cross patch attention (mCrossPA) for HSI land-cover classification.
arXiv Detail & Related papers (2022-03-31T11:18:41Z)
- MPViT: Multi-Path Vision Transformer for Dense Prediction [43.89623453679854]
Vision Transformers (ViTs) build a simple multi-stage structure for multi-scale representation with single-scale patches.
Our MPViTs, scaling from Tiny (5M) to Base (73M), consistently achieve superior performance over state-of-the-art Vision Transformers.
arXiv Detail & Related papers (2021-12-21T06:34:50Z)
- Shunted Self-Attention via Multi-Scale Token Aggregation [124.16925784748601]
Recent Vision Transformer (ViT) models have demonstrated encouraging results across various computer vision tasks.
We propose shunted self-attention (SSA), which allows ViTs to model attention at hybrid scales within each attention layer.
The SSA-based transformer achieves 84.0% Top-1 accuracy and outperforms the state-of-the-art Focal Transformer on ImageNet.
arXiv Detail & Related papers (2021-11-30T08:08:47Z)
- CrossFormer: A Versatile Vision Transformer Based on Cross-scale Attention [37.39327010226153]
We propose a Cross-scale Embedding Layer (CEL) and Long Short Distance Attention (LSDA).
CEL blends each embedding with multiple patches of different scales, providing the model with cross-scale embeddings.
LSDA splits the self-attention module into a short-distance one and a long-distance one, lowering the computational cost while keeping both small-scale and large-scale features in the embeddings.
arXiv Detail & Related papers (2021-07-31T05:52:21Z)
- XCiT: Cross-Covariance Image Transformers [73.33400159139708]
We propose a "transposed" version of self-attention that operates across feature channels rather than tokens.
The resulting cross-covariance attention (XCA) has linear complexity in the number of tokens and allows efficient processing of high-resolution images (see the sketch of XCA after this list).
arXiv Detail & Related papers (2021-06-17T17:33:35Z)
- CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification [17.709880544501758]
We propose a dual-branch transformer to combine image patches of different sizes to produce stronger image features.
Our approach processes small-patch and large-patch tokens with two separate branches of different computational complexity.
Our proposed cross-attention requires only linear computational and memory complexity, rather than the quadratic cost of full self-attention (see the sketch after this list).
arXiv Detail & Related papers (2021-03-27T13:03:17Z)
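As referenced in the XCiT entry above, here is a hedged sketch of cross-covariance attention: the softmax map is formed over the d x d channel pairs rather than the N x N token pairs, which is why the cost grows linearly with the number of tokens. The L2-normalization and learnable per-head temperature follow common XCA implementations; module and parameter names here are assumptions, not the library's API.

```python
# Hedged sketch of cross-covariance attention (XCA): attention over channels,
# linear in the number of tokens N.
import torch
import torch.nn as nn
import torch.nn.functional as F

class XCA(nn.Module):
    def __init__(self, dim, num_heads=8):
        super().__init__()
        self.num_heads = num_heads
        self.temperature = nn.Parameter(torch.ones(num_heads, 1, 1))
        self.qkv = nn.Linear(dim, dim * 3)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):                       # x: (B, N, C)
        B, N, C = x.shape
        qkv = self.qkv(x).reshape(B, N, 3, self.num_heads, C // self.num_heads)
        q, k, v = qkv.permute(2, 0, 3, 4, 1)    # each: (B, heads, d, N)
        # L2-normalize along the token axis, then form a d x d channel map.
        q = F.normalize(q, dim=-1)
        k = F.normalize(k, dim=-1)
        attn = (q @ k.transpose(-2, -1)) * self.temperature   # (B, heads, d, d)
        attn = attn.softmax(dim=-1)
        out = attn @ v                          # (B, heads, d, N)
        out = out.permute(0, 3, 1, 2).reshape(B, N, C)
        return self.proj(out)
```

Because N only enters through the matrix products with the d x d map, doubling the number of tokens doubles, rather than quadruples, the attention cost.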
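And, as referenced in the CrossViT entry, a hedged sketch of its cross-attention idea: the CLS token of one branch acts as the sole query against the patch tokens of the other branch, so the attention map is 1 x N rather than N x N, which gives the linear time and memory claimed in the summary. The single-head form, the shared dimension across branches, and all names are simplifying assumptions.

```python
# Hedged sketch of CrossViT-style branch-to-branch cross-attention:
# one CLS-token query -> 1 x N attention map, linear in tokens.
import torch
import torch.nn as nn

class BranchCrossAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.wq = nn.Linear(dim, dim)   # query: CLS token of this branch
        self.wk = nn.Linear(dim, dim)   # keys:  patch tokens of the other branch
        self.wv = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, cls_tok, other_tokens):
        # cls_tok: (B, 1, dim); other_tokens: (B, N, dim)
        q = self.wq(cls_tok)                              # (B, 1, dim)
        k = self.wk(other_tokens)                         # (B, N, dim)
        v = self.wv(other_tokens)
        attn = (q @ k.transpose(-2, -1)) * self.scale     # (B, 1, N), not (B, N, N)
        return attn.softmax(dim=-1) @ v                   # (B, 1, dim): fused CLS

small_cls = torch.randn(2, 1, 192)      # CLS token of the small-patch branch
large_tok = torch.randn(2, 196, 192)    # patch tokens of the large-patch branch
fused = BranchCrossAttention(192)(small_cls, large_tok)   # (2, 1, 192)
```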