MixFormer: Mixing Features across Windows and Dimensions
- URL: http://arxiv.org/abs/2204.02557v1
- Date: Wed, 6 Apr 2022 03:13:50 GMT
- Title: MixFormer: Mixing Features across Windows and Dimensions
- Authors: Qiang Chen, Qiman Wu, Jian Wang, Qinghao Hu, Tao Hu, Errui Ding, Jian
Cheng, Jingdong Wang
- Abstract summary: Local-window self-attention performs notably well in vision tasks but suffers from a limited receptive field and weak modeling capability.
This is mainly because it performs self-attention within non-overlapping windows and shares weights on the channel dimension.
We combine local-window self-attention with depth-wise convolution in a parallel design, modeling cross-window connections to enlarge the receptive fields.
- Score: 68.86393312123168
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While local-window self-attention performs notably well in vision
tasks, it suffers from a limited receptive field and weak modeling capability.
This is mainly because it performs self-attention within non-overlapping
windows and shares weights on the channel dimension. We propose MixFormer to
address these issues. First, we combine local-window self-attention with
depth-wise convolution in a parallel design, modeling cross-window connections
to enlarge the receptive fields. Second, we propose bi-directional interactions
across the two branches to provide complementary clues in the channel and
spatial dimensions. These two designs are integrated to achieve efficient
feature mixing across windows and dimensions. MixFormer achieves image
classification results competitive with EfficientNet and better than RegNet and
Swin Transformer. On downstream tasks, it outperforms its alternatives by
significant margins at lower computational cost across 5 dense prediction tasks
on MS COCO, ADE20k, and LVIS. Code is available at
\url{https://github.com/PaddlePaddle/PaddleClas}.
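To make the parallel design concrete, below is a minimal PyTorch-style sketch of such a block: a local-window self-attention branch and a depth-wise convolution branch run in parallel, a channel gate derived from the conv branch modulates the attention branch, and a spatial gate derived from the attention branch modulates the conv branch. The names (`ParallelMixingBlock`, `channel_gate`, `spatial_gate`) and the exact placement of the interactions are illustrative assumptions; this is not the PaddleClas implementation and it omits details such as projections, relative position bias, and normalization layout.

```python
# Minimal PyTorch-style sketch of the parallel window-attention + depth-wise
# conv design with bi-directional interactions (assumed structure, simplified).
import torch
import torch.nn as nn


def window_partition(x, ws):
    """Split a (B, C, H, W) map into non-overlapping ws x ws windows -> (B*nW, ws*ws, C)."""
    B, C, H, W = x.shape
    x = x.reshape(B, C, H // ws, ws, W // ws, ws)
    return x.permute(0, 2, 4, 3, 5, 1).reshape(-1, ws * ws, C)


def window_reverse(x, ws, B, C, H, W):
    """Inverse of window_partition: (B*nW, ws*ws, C) -> (B, C, H, W)."""
    x = x.reshape(B, H // ws, W // ws, ws, ws, C)
    return x.permute(0, 5, 1, 3, 2, 4).reshape(B, C, H, W)


class ParallelMixingBlock(nn.Module):
    """Local-window attention and depth-wise conv in parallel, with
    bi-directional (channel/spatial) interactions between the branches."""

    def __init__(self, dim, window_size=7, num_heads=4):
        super().__init__()
        self.ws = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Depth-wise 3x3 conv branch: mixes tokens across window borders.
        self.dwconv = nn.Sequential(
            nn.Conv2d(dim, dim, 3, padding=1, groups=dim),
            nn.BatchNorm2d(dim),
            nn.GELU(),
        )
        # Channel interaction: conv branch -> channel gate on the attention branch.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(dim, dim // 4, 1), nn.GELU(),
            nn.Conv2d(dim // 4, dim, 1), nn.Sigmoid(),
        )
        # Spatial interaction: attention branch -> spatial gate on the conv branch.
        self.spatial_gate = nn.Sequential(
            nn.Conv2d(dim, dim // 4, 1), nn.GELU(),
            nn.Conv2d(dim // 4, 1, 1), nn.Sigmoid(),
        )
        self.proj = nn.Conv2d(2 * dim, dim, 1)  # fuse the two branches

    def forward(self, x):  # x: (B, C, H, W), H and W divisible by window_size
        B, C, H, W = x.shape
        conv_out = self.dwconv(x)

        # Attention branch, modulated by a channel clue from the conv branch.
        gate_c = self.channel_gate(conv_out)                  # (B, C, 1, 1)
        tokens = window_partition(x * gate_c, self.ws)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        attn_out = window_reverse(attn_out, self.ws, B, C, H, W)

        # Conv branch, modulated by a spatial clue from the attention branch.
        conv_out = conv_out * self.spatial_gate(attn_out)     # gate: (B, 1, H, W)

        return self.proj(torch.cat([attn_out, conv_out], dim=1))


if __name__ == "__main__":
    block = ParallelMixingBlock(dim=64, window_size=7, num_heads=4)
    print(block(torch.randn(2, 64, 56, 56)).shape)  # torch.Size([2, 64, 56, 56])
```

The point this sketch tries to capture is that the depth-wise convolution spans window borders, so the attention branch can receive cross-window information through the gates without shifting or enlarging the windows themselves.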
Related papers
- EfficientVMamba: Atrous Selective Scan for Light Weight Visual Mamba [19.062950348441426]
This work proposes to explore the potential of visual state space models in light-weight model design and introduces a novel efficient model variant dubbed EfficientVMamba.
Our EfficientVMamba integrates an atrous-based selective scan approach via efficient skip sampling, constituting building blocks designed to harness both global and local representational features.
Experimental results show that EfficientVMamba scales down the computational complexity while yielding competitive results across a variety of vision tasks.
arXiv Detail & Related papers (2024-03-15T02:48:47Z)
- ScatterFormer: Efficient Voxel Transformer with Scattered Linear Attention [13.36619701679949]
Window-based transformers excel in large-scale point cloud understanding by capturing context-aware representations with affordable attention computation.
Existing methods group the voxels in each window into fixed-length sequences through extensive sorting and padding operations.
We introduce ScatterFormer, which is the first to directly apply attention to voxels across different windows as a single sequence.
arXiv Detail & Related papers (2024-01-01T02:29:59Z)
- TransXNet: Learning Both Global and Local Dynamics with a Dual Dynamic Token Mixer for Visual Recognition [71.6546914957701]
We propose a lightweight Dual Dynamic Token Mixer (D-Mixer) that aggregates global information and local details in an input-dependent way.
We use D-Mixer as the basic building block to design TransXNet, a novel hybrid CNN-Transformer vision backbone network.
In the ImageNet-1K image classification task, TransXNet-T surpasses Swin-T by 0.3% in top-1 accuracy while requiring less than half of the computational cost.
arXiv Detail & Related papers (2023-10-30T09:35:56Z)
- DilateFormer: Multi-Scale Dilated Transformer for Visual Recognition [62.95223898214866]
We explore effective Vision Transformers to pursue a preferable trade-off between computational complexity and the size of the attended receptive field.
With a pyramid architecture, we construct a Multi-Scale Dilated Transformer (DilateFormer) by stacking Multi-Scale Dilated Attention (MSDA) blocks at low-level stages and global multi-head self-attention blocks at high-level stages.
Our experiment results show that our DilateFormer achieves state-of-the-art performance on various vision tasks.
arXiv Detail & Related papers (2023-02-03T14:59:31Z)
- EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications [68.35683849098105]
We introduce a split depth-wise transpose attention (SDTA) encoder that splits input tensors into multiple channel groups (see the illustrative sketch after this list).
Our EdgeNeXt model with 1.3M parameters achieves 71.2% top-1 accuracy on ImageNet-1K.
Our EdgeNeXt model with 5.6M parameters achieves 79.4% top-1 accuracy on ImageNet-1K.
arXiv Detail & Related papers (2022-06-21T17:59:56Z)
- Pruning Self-attentions into Convolutional Layers in Single Path [89.55361659622305]
Vision Transformers (ViTs) have achieved impressive performance over various computer vision tasks.
We propose Single-Path Vision Transformer pruning (SPViT) to efficiently and automatically compress the pre-trained ViTs.
Our SPViT can trim 52.0% of the FLOPs of DeiT-B while simultaneously gaining an impressive 0.6% in top-1 accuracy.
arXiv Detail & Related papers (2021-11-23T11:35:54Z)
- CSWin Transformer: A General Vision Transformer Backbone with Cross-Shaped Windows [99.36226415086243]
We present CSWin Transformer, an efficient and effective Transformer-based backbone for general-purpose vision tasks.
A challenging issue in Transformer design is that global self-attention is very expensive to compute whereas local self-attention often limits the field of interactions of each token.
arXiv Detail & Related papers (2021-07-01T17:59:56Z)
- RPVNet: A Deep and Efficient Range-Point-Voxel Fusion Network for LiDAR Point Cloud Segmentation [28.494690309193068]
We propose a novel range-point-voxel fusion network, namely RPVNet.
In this network, we devise a deep fusion framework with multiple and mutual information interactions among these three views.
By leveraging this efficient interaction and a relatively lower voxel resolution, our method also proves to be more efficient.
arXiv Detail & Related papers (2021-03-24T04:24:12Z)
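As referenced in the EdgeNeXt entry above, the SDTA encoder is summarized as splitting the input tensor into multiple channel groups. The sketch below only illustrates that channel-grouping idea combined with attention computed across the channel dimension ("transposed" attention); the cascaded group convolutions, the class name `ChannelGroupTransposedAttention`, and the scaling choice are assumptions for illustration, not the EdgeNeXt implementation.

```python
# Hypothetical sketch of channel-group splitting with channel-wise ("transposed")
# attention, loosely following the SDTA description above. Simplified and assumed;
# not the authors' code.
import torch
import torch.nn as nn


class ChannelGroupTransposedAttention(nn.Module):
    def __init__(self, dim, groups=4):
        super().__init__()
        assert dim % groups == 0
        self.groups = groups
        gdim = dim // groups
        # One depth-wise 3x3 conv per channel group.
        self.group_convs = nn.ModuleList(
            [nn.Conv2d(gdim, gdim, 3, padding=1, groups=gdim) for _ in range(groups)]
        )
        self.qkv = nn.Linear(dim, 3 * dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x):  # x: (B, C, H, W)
        B, C, H, W = x.shape
        # Split channels into groups; refine each group with a depth-wise conv,
        # feeding the previous group's output into the next (cascaded, assumed).
        outs, prev = [], 0
        for conv, s in zip(self.group_convs, torch.chunk(x, self.groups, dim=1)):
            prev = conv(s + prev)
            outs.append(prev)
        x = torch.cat(outs, dim=1)

        # "Transposed" attention: the affinity matrix is C x C (channels attend to
        # channels), so its size does not grow with the number of spatial positions.
        tokens = x.flatten(2).transpose(1, 2)                    # (B, H*W, C)
        q, k, v = self.qkv(tokens).chunk(3, dim=-1)
        attn = torch.softmax(q.transpose(1, 2) @ k / (H * W) ** 0.5, dim=-1)  # (B, C, C)
        out = (attn @ v.transpose(1, 2)).transpose(1, 2)         # (B, H*W, C)
        return self.proj(out).transpose(1, 2).reshape(B, C, H, W)


if __name__ == "__main__":
    m = ChannelGroupTransposedAttention(dim=64, groups=4)
    print(m(torch.randn(2, 64, 32, 32)).shape)  # torch.Size([2, 64, 32, 32])
```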
This list is automatically generated from the titles and abstracts of the papers on this site.