A Close Look at Spatial Modeling: From Attention to Convolution
- URL: http://arxiv.org/abs/2212.12552v1
- Date: Fri, 23 Dec 2022 19:13:43 GMT
- Title: A Close Look at Spatial Modeling: From Attention to Convolution
- Authors: Xu Ma, Huan Wang, Can Qin, Kunpeng Li, Xingchen Zhao, Jie Fu, Yun Fu
- Abstract summary: Vision Transformers have recently shown great promise for many vision tasks due to their insightful architecture design and attention mechanism.
We generalize the self-attention formulation to abstract a query-irrelevant global context directly and integrate the global context into convolutions.
With less than 14M parameters, our FCViT-S12 outperforms the related work ResT-Lite by 3.7% top-1 accuracy on ImageNet-1K.
- Score: 70.5571582194057
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vision Transformers have shown great promise recently for many vision tasks
due to the insightful architecture design and attention mechanism. By
revisiting the self-attention responses in Transformers, we empirically observe
two interesting issues. First, Vision Transformers present a query-irrelevant
behavior at deep layers, where the attention maps exhibit nearly consistent
contexts in global scope, regardless of the query patch position (also
head-irrelevant). Second, the attention maps are intrinsically sparse: only a
few tokens dominate the attention weights, and introducing knowledge from
ConvNets would largely smooth the attention and enhance performance. Motivated
by these observations, we generalize the self-attention formulation to abstract
a query-irrelevant global context directly and further integrate the global
context into convolutions. The resulting model, a Fully Convolutional Vision
Transformer (i.e., FCViT), purely consists of convolutional layers and firmly
inherits the merits of both attention mechanism and convolutions, including
dynamic property, weight sharing, and short- and long-range feature modeling,
etc. Experimental results demonstrate the effectiveness of FCViT. With less
than 14M parameters, our FCViT-S12 outperforms the related work ResT-Lite by 3.7%
top-1 accuracy on ImageNet-1K. When scaling FCViT to larger models, it still
performs better than the previous state-of-the-art ConvNeXt with even fewer
parameters. FCViT-based models also demonstrate promising transferability to
downstream tasks, like object detection, instance segmentation, and semantic
segmentation. Codes and models are made available at:
https://github.com/ma-xu/FCViT.
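To make the core idea concrete, the sketch below shows one way a query-irrelevant global context can be pooled from a feature map and folded into a purely convolutional block, roughly following the abstract's description. It is a minimal illustration, not the official FCViT block: the module names (`QueryIrrelevantContext`, `GlobalContextConvBlock`), the softmax-pooled context, and the depthwise/pointwise mixer are assumptions made for demonstration; the authors' actual implementation is in the repository linked above.

```python
# A minimal sketch (not the official FCViT block): abstract a query-irrelevant
# global context with a softmax-weighted pooling over all tokens, then fuse it
# into a purely convolutional token mixer. Layer names and sizes are illustrative.
import torch
import torch.nn as nn


class QueryIrrelevantContext(nn.Module):
    """Pool a single global context vector shared by every spatial position."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Conv2d(dim, 1, kernel_size=1)  # per-pixel importance logits

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        logits = self.score(x).view(b, 1, h * w)                            # (B, 1, HW)
        weights = logits.softmax(dim=-1)                                    # query-irrelevant attention
        context = torch.bmm(weights, x.view(b, c, h * w).transpose(1, 2))   # (B, 1, C)
        return context.transpose(1, 2).view(b, c, 1, 1)                     # broadcastable context


class GlobalContextConvBlock(nn.Module):
    """Convolutional token mixer that injects the pooled global context."""

    def __init__(self, dim: int, kernel_size: int = 7):
        super().__init__()
        self.context = QueryIrrelevantContext(dim)
        self.norm = nn.BatchNorm2d(dim)
        self.dwconv = nn.Conv2d(dim, dim, kernel_size, padding=kernel_size // 2, groups=dim)
        self.pwconv = nn.Conv2d(dim, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Broadcast-add the query-irrelevant context, then mix locally with
        # depthwise + pointwise convolutions; the residual keeps the block stable.
        y = x + self.context(x)
        y = self.pwconv(self.dwconv(self.norm(y)))
        return x + y


if __name__ == "__main__":
    block = GlobalContextConvBlock(dim=64)
    print(block(torch.randn(2, 64, 14, 14)).shape)  # torch.Size([2, 64, 14, 14])
```

The block stays fully convolutional: long-range information enters only through the shared pooled context, while short-range modeling is handled by the depthwise convolution, mirroring the short- and long-range feature modeling the abstract attributes to FCViT.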
Related papers
- CAS-ViT: Convolutional Additive Self-attention Vision Transformers for Efficient Mobile Applications [59.193626019860226]
Vision Transformers (ViTs) mark a revolutionary advance in neural networks thanks to the powerful global-context modeling of their token mixers.
We introduce CAS-ViT: Convolutional Additive Self-attention Vision Transformers.
We show that CAS-ViT achieves competitive performance compared to other state-of-the-art backbones.
arXiv Detail & Related papers (2024-08-07T11:33:46Z)
- ACC-ViT: Atrous Convolution's Comeback in Vision Transformers [5.224344210588584]
We introduce Atrous Attention, a fusion of regional and sparse attention, which can adaptively consolidate both local and global information.
We also propose a general vision transformer backbone, named ACC-ViT, following conventional practices for standard vision tasks.
ACC-ViT is therefore a strong vision backbone that is also competitive in mobile-scale versions, making it ideal for niche applications with small datasets.
arXiv Detail & Related papers (2024-03-07T04:05:16Z)
- DAT++: Spatially Dynamic Vision Transformer with Deformable Attention [87.41016963608067]
We present the Deformable Attention Transformer (DAT++), an efficient and effective vision backbone for visual recognition.
DAT++ achieves state-of-the-art results on various visual recognition benchmarks, with 85.9% ImageNet accuracy, 54.5 and 47.0 MS-COCO instance segmentation mAP, and 51.5 ADE20K semantic segmentation mIoU.
arXiv Detail & Related papers (2023-09-04T08:26:47Z)
- Lightweight Vision Transformer with Bidirectional Interaction [63.65115590184169]
We propose a Fully Adaptive Self-Attention (FASA) mechanism for vision transformers to model local and global information.
Based on FASA, we develop a family of lightweight vision backbones, the Fully Adaptive Transformer (FAT) family.
arXiv Detail & Related papers (2023-06-01T06:56:41Z)
- Understanding The Robustness in Vision Transformers [140.1090560977082]
Self-attention may promote robustness through improved mid-level representations.
We propose a family of fully attentional networks (FANs) that strengthen this capability.
Our model achieves a state-of-the-art 87.1% accuracy and 35.8% mCE on ImageNet-1k and ImageNet-C, respectively, with 76.8M parameters.
arXiv Detail & Related papers (2022-04-26T17:16:32Z)
- BViT: Broad Attention based Vision Transformer [13.994231768182907]
We propose broad attention, which improves performance by incorporating the attention relationships of different layers of the vision transformer; the resulting model is called BViT.
Experiments on image classification tasks demonstrate that BViT delivers state-of-the-art top-1 accuracy of 74.8%/81.6% on ImageNet with 5M/22M parameters.
arXiv Detail & Related papers (2022-02-13T09:23:29Z)
- Vision Transformer with Deformable Attention [29.935891419574602]
A large, sometimes even global, receptive field endows Transformer models with higher representation power than their CNN counterparts.
We propose a novel deformable self-attention module, where the positions of key and value pairs in self-attention are selected in a data-dependent way.
We present Deformable Attention Transformer, a general backbone model with deformable attention for both image classification and dense prediction tasks.
arXiv Detail & Related papers (2022-01-03T08:29:01Z)
- CvT: Introducing Convolutions to Vision Transformers [44.74550305869089]
Convolutional vision Transformer (CvT) improves Vision Transformer (ViT) in performance and efficiency.
The new architecture introduces convolutions into ViT to yield the best of both designs.
arXiv Detail & Related papers (2021-03-29T17:58:22Z)
- DeepViT: Towards Deeper Vision Transformer [92.04063170357426]
Vision transformers (ViTs) have been successfully applied in image classification tasks recently.
We show that, unlike convolutional neural networks (CNNs), which can be improved by stacking more convolutional layers, the performance of ViTs saturates quickly when they are scaled deeper.
We propose a simple yet effective method, named Re-attention, to re-generate the attention maps to increase their diversity.
arXiv Detail & Related papers (2021-03-22T14:32:07Z)
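The Re-attention idea summarized in the DeepViT entry above (regenerating attention maps to increase their diversity) can be read as mixing the per-head attention maps with a small learnable head-to-head transformation before they are applied to the values. Below is a minimal sketch under that reading; the module name `ReAttention`, the 1x1-convolution head mixer, and the trailing softmax renormalization are illustrative assumptions rather than the paper's exact code.

```python
# Minimal sketch of cross-head "Re-attention" as described in the DeepViT entry
# above: mix the per-head attention maps with a learnable head-to-head
# transformation to increase their diversity in deep layers. The 1x1-conv mixer
# and the final renormalization are illustrative choices, not the paper's code.
import torch
import torch.nn as nn


class ReAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.qkv = nn.Linear(dim, dim * 3, bias=False)
        # Learnable head-to-head mixing, applied as a 1x1 convolution over the
        # head dimension of the (B, H, N, N) attention tensor.
        self.head_mix = nn.Conv2d(num_heads, num_heads, kernel_size=1)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, n, c = x.shape
        h = self.num_heads
        q, k, v = self.qkv(x).reshape(b, n, 3, h, c // h).permute(2, 0, 3, 1, 4)
        attn = (q @ k.transpose(-2, -1)) * self.scale
        attn = attn.softmax(dim=-1)        # standard per-head attention maps (B, H, N, N)
        attn = self.head_mix(attn)         # regenerate maps by mixing across heads
        attn = attn.softmax(dim=-1)        # stand-in for the paper's normalization step
        out = (attn @ v).transpose(1, 2).reshape(b, n, c)
        return self.proj(out)


if __name__ == "__main__":
    layer = ReAttention(dim=64, num_heads=8)
    print(layer(torch.randn(2, 49, 64)).shape)  # torch.Size([2, 49, 64])
```

The 1x1 convolution acts only across the head dimension, so spatial attention patterns from different heads are recombined without changing the token-to-token structure.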