Swin-Free: Achieving Better Cross-Window Attention and Efficiency with Size-varying Window
- URL: http://arxiv.org/abs/2306.13776v1
- Date: Fri, 23 Jun 2023 20:19:58 GMT
- Title: Swin-Free: Achieving Better Cross-Window Attention and Efficiency with Size-varying Window
- Authors: Jinkyu Koo, John Yang, Le An, Gwenaelle Cunha Sergio, Su Inn Park
- Abstract summary: We propose Swin-Free, in which we apply size-varying windows across stages, instead of shifting windows, to achieve cross-connection among local windows.
With this simple design change, Swin-Free runs faster than the Swin Transformer at inference with better accuracy.
- Score: 6.158271948005819
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transformer models have shown great potential in computer vision, following
their success in language tasks. Swin Transformer is one of them that
outperforms convolution-based architectures in terms of accuracy, while
improving efficiency when compared to Vision Transformer (ViT) and its
variants, which have quadratic complexity with respect to the input size. Swin
Transformer features shifted windows that allow cross-window connection while
limiting self-attention computation to non-overlapping local windows. However,
shifting windows introduces memory copy operations, which account for a
significant portion of its runtime. To mitigate this issue, we propose
Swin-Free in which we apply size-varying windows across stages, instead of
shifting windows, to achieve cross-connection among local windows. With this
simple design change, Swin-Free runs faster than the Swin Transformer at
inference with better accuracy. Furthermore, we also propose a few Swin-Free
variants that are faster than their Swin Transformer counterparts.
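As a rough, hedged illustration of the design change (a minimal PyTorch-style sketch, not the authors' code), the fragment below contrasts Swin's shift-then-partition pattern, where the cyclic shift is realized as a memory copy, with plain partitioning under a window size that changes from stage to stage; the tensor shapes and the window-size schedule are assumptions made for the example.

```python
# Minimal sketch, assuming square feature maps divisible by every window size.
# Not the authors' implementation; shapes and window sizes are illustrative only.
import torch

def window_partition(x: torch.Tensor, ws: int) -> torch.Tensor:
    """Split a (B, H, W, C) feature map into (num_windows*B, ws*ws, C) token groups."""
    B, H, W, C = x.shape
    x = x.view(B, H // ws, ws, W // ws, ws, C)
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, ws * ws, C)

def swin_style_tokens(x: torch.Tensor, ws: int, shift: int) -> torch.Tensor:
    """Swin: cyclically shift the map (a memory copy via torch.roll), then partition."""
    if shift > 0:
        x = torch.roll(x, shifts=(-shift, -shift), dims=(1, 2))
    return window_partition(x, ws)

def swin_free_style_tokens(x: torch.Tensor, ws: int) -> torch.Tensor:
    """Swin-Free: no shift; cross-window mixing comes from varying ws across stages."""
    return window_partition(x, ws)

feat = torch.randn(2, 56, 56, 96)              # (B, H, W, C); an early-stage map
for stage_ws in (7, 14, 28):                   # assumed size-varying schedule
    print(stage_ws, tuple(swin_free_style_tokens(feat, stage_ws).shape))
print(tuple(swin_style_tokens(feat, ws=7, shift=3).shape))  # Swin alternates shift 0 and ws // 2
```

Since self-attention would then run independently within each group of ws*ws tokens, changing ws between stages changes which locations share a window, which is how cross-window connection can be obtained without the roll and its associated copies.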
Related papers
- HiT-SR: Hierarchical Transformer for Efficient Image Super-Resolution [70.52256118833583]
We present a strategy to convert transformer-based SR networks to hierarchical transformers (HiT-SR)
Specifically, we first replace the commonly used fixed small windows with expanding hierarchical windows to aggregate features at different scales.
Considering the intensive computation required for large windows, we further design a spatial-channel correlation method with linear complexity with respect to window size.
arXiv Detail & Related papers (2024-07-08T12:42:10Z)
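A back-of-the-envelope count shows why expanding windows calls for a linear-complexity alternative: vanilla window self-attention grows quadratically with the number of tokens in a window, so naively enlarging the window quickly becomes expensive. The toy calculation below uses an assumed embedding width and a generic linear-in-tokens stand-in; it is not HiT-SR's actual correlation method.

```python
# Illustrative cost model only; dim and window sizes are assumptions.
def window_self_attention_cost(ws: int, dim: int) -> int:
    n = ws * ws                  # tokens inside one window
    return 2 * n * n * dim       # Q @ K^T plus attention @ V, per window

def linear_in_tokens_cost(ws: int, dim: int) -> int:
    n = ws * ws
    return n * dim * dim         # one channel-mixing pass per token (generic stand-in)

for ws in (8, 16, 32, 64):
    print(ws, window_self_attention_cost(ws, dim=64), linear_in_tokens_cost(ws, dim=64))
```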
- CageViT: Convolutional Activation Guided Efficient Vision Transformer [90.69578999760206]
This paper presents an efficient vision Transformer, called CageViT, that is guided by convolutional activation to reduce computation.
Our CageViT, unlike current Transformers, utilizes a new encoder to handle the rearranged tokens.
Experimental results demonstrate that the proposed CageViT outperforms the most recent state-of-the-art backbones by a large margin in terms of efficiency.
arXiv Detail & Related papers (2023-05-17T03:19:18Z)
- Degenerate Swin to Win: Plain Window-based Transformer without Sophisticated Operations [36.57766081271396]
A Vision Transformer has a larger receptive field that is capable of characterizing long-range dependencies.
To boost efficiency, window-based Vision Transformers have emerged.
We check the necessity of the key design element of Swin Transformer, the shifted window partitioning.
arXiv Detail & Related papers (2022-11-25T17:36:20Z)
- SSformer: A Lightweight Transformer for Semantic Segmentation [7.787950060560868]
Swin Transformer set a new record in various vision tasks by using a hierarchical architecture and shifted windows.
We design a lightweight yet effective transformer model, called SSformer.
Experimental results show the proposed SSformer yields comparable mIoU performance with state-of-the-art models.
arXiv Detail & Related papers (2022-08-03T12:57:00Z)
- Towards Lightweight Transformer via Group-wise Transformation for Vision-and-Language Tasks [126.33843752332139]
We introduce Group-wise Transformation towards a universal yet lightweight Transformer for vision-and-language tasks, termed LW-Transformer.
We apply LW-Transformer to a set of Transformer-based networks, and quantitatively measure them on three vision-and-language tasks and six benchmark datasets.
Experimental results show that while saving a large number of parameters and computations, LW-Transformer achieves very competitive performance against the original Transformer networks for vision-and-language tasks.
arXiv Detail & Related papers (2022-04-16T11:30:26Z)
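Group-wise processing saves parameters because each group's weight matrix is much smaller than one dense matrix over all channels. The sketch below is a generic group-wise linear layer used only to make that point concrete; it is an assumption-level illustration, not necessarily the exact transformation used in LW-Transformer.

```python
import torch
import torch.nn as nn

class GroupwiseLinear(nn.Module):
    """Split channels into `groups`, transform each group independently, concatenate.
    A generic group-wise transformation for illustration only."""
    def __init__(self, dim: int, groups: int):
        super().__init__()
        assert dim % groups == 0
        self.groups = groups
        self.proj = nn.ModuleList(nn.Linear(dim // groups, dim // groups) for _ in range(groups))

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, N, dim)
        chunks = x.chunk(self.groups, dim=-1)
        return torch.cat([p(c) for p, c in zip(self.proj, chunks)], dim=-1)

dense = nn.Linear(512, 512)
grouped = GroupwiseLinear(512, groups=8)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(dense), count(grouped))   # roughly 262k vs 33k parameters
```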
- Lawin Transformer: Improving Semantic Segmentation Transformer with Multi-Scale Representations via Large Window Attention [16.75003034164463]
Multi-scale representations are crucial for semantic segmentation.
In this paper, we introduce multi-scale representations into the semantic segmentation ViT via a window attention mechanism.
Our resulting ViT, Lawin Transformer, is composed of an efficient hierarchical vision transformer (HVT) as the encoder and a LawinASPP as the decoder.
arXiv Detail & Related papers (2022-01-05T13:51:20Z)
- HRFormer: High-Resolution Transformer for Dense Prediction [99.6060997466614]
We present a High-Resolution Transformer (HRFormer) that learns high-resolution representations for dense prediction tasks.
We take advantage of the multi-resolution parallel design introduced in high-resolution convolutional networks (HRNet).
We demonstrate the effectiveness of the High-Resolution Transformer on both human pose estimation and semantic segmentation tasks.
arXiv Detail & Related papers (2021-10-18T15:37:58Z)
- Evo-ViT: Slow-Fast Token Evolution for Dynamic Vision Transformer [63.99222215387881]
We propose Evo-ViT, a self-motivated slow-fast token evolution method for vision transformers.
Our method can significantly reduce the computational costs of vision transformers while maintaining comparable performance on image classification.
arXiv Detail & Related papers (2021-08-03T09:56:07Z)
- What Makes for Hierarchical Vision Transformer? [46.848348453909495]
We replace self-attention layers in Swin Transformer and Shuffle Transformer with simple linear mapping and keep other components unchanged.
The resulting architecture with 25.4M parameters and 4.2G FLOPs achieves 80.5% Top-1 accuracy, compared to 81.3% for Swin Transformer with 28.3M parameters and 4.5G FLOPs.
arXiv Detail & Related papers (2021-07-05T17:59:35Z)
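The substitution described in the entry above can be pictured as swapping window self-attention for a learnable linear mixing over the tokens of each window. The sketch below is one minimal reading of "simple linear mapping" with assumed window and channel sizes, not the paper's exact layer.

```python
import torch
import torch.nn as nn

class LinearTokenMixer(nn.Module):
    """Replace window self-attention with a learnable linear map over the
    tokens inside each window (one plausible reading of 'simple linear mapping')."""
    def __init__(self, tokens_per_window: int):
        super().__init__()
        self.mix = nn.Linear(tokens_per_window, tokens_per_window)

    def forward(self, x: torch.Tensor) -> torch.Tensor:        # x: (num_windows*B, N, C)
        return self.mix(x.transpose(1, 2)).transpose(1, 2)     # mix along the token axis N

windows = torch.randn(16, 49, 96)           # 7x7 windows, channel dim 96 (assumed)
print(LinearTokenMixer(49)(windows).shape)  # torch.Size([16, 49, 96])
```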
- Swin Transformer: Hierarchical Vision Transformer using Shifted Windows [44.086393272557416]
This paper presents a new vision Transformer, called Swin Transformer, that capably serves as a general-purpose backbone for computer vision.
It surpasses the previous state-of-the-art by a large margin of +2.7 box AP and +2.6 mask AP on COCO, and +3.2 mIoU on ADE20K, demonstrating the potential of Transformer-based models as vision backbones.
arXiv Detail & Related papers (2021-03-25T17:59:31Z)
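The hierarchy referred to in the Swin entry above comes from a patch-merging step between stages, which halves the spatial resolution and doubles the channel width. The sketch below is a minimal version of that step with assumed shapes.

```python
import torch
import torch.nn as nn

class PatchMerging(nn.Module):
    """Swin-style stage transition: gather each 2x2 patch neighborhood,
    concatenate channels (C -> 4C), and linearly reduce to 2C."""
    def __init__(self, dim: int):
        super().__init__()
        self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:   # x: (B, H, W, C), H and W even
        x0 = x[:, 0::2, 0::2, :]
        x1 = x[:, 1::2, 0::2, :]
        x2 = x[:, 0::2, 1::2, :]
        x3 = x[:, 1::2, 1::2, :]
        return self.reduction(torch.cat([x0, x1, x2, x3], dim=-1))  # (B, H/2, W/2, 2C)

print(PatchMerging(96)(torch.randn(1, 56, 56, 96)).shape)  # torch.Size([1, 28, 28, 192])
```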