MetaFormer is Actually What You Need for Vision
- URL: http://arxiv.org/abs/2111.11418v1
- Date: Mon, 22 Nov 2021 18:52:03 GMT
- Title: MetaFormer is Actually What You Need for Vision
- Authors: Weihao Yu, Mi Luo, Pan Zhou, Chenyang Si, Yichen Zhou, Xinchao Wang,
Jiashi Feng, Shuicheng Yan
- Abstract summary: We replace the attention module in transformers with an embarrassingly simple spatial pooling operator.
Surprisingly, we observe that the derived model achieves competitive performance on multiple computer vision tasks.
- Score: 175.86264904607785
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Transformers have shown great potential in computer vision tasks. A common
belief is their attention-based token mixer module contributes most to their
competence. However, recent works show that the attention-based module in
transformers can be replaced by spatial MLPs and the resulting models still
perform quite well. Based on this observation, we hypothesize that the general
architecture of the transformers, instead of the specific token mixer module,
is more essential to the model's performance. To verify this, we deliberately
replace the attention module in transformers with an embarrassingly simple
spatial pooling operator to conduct only the most basic token mixing.
Surprisingly, we observe that the derived model, termed PoolFormer, achieves
competitive performance on multiple computer vision tasks. For example, on
ImageNet-1K, PoolFormer achieves 82.1% top-1 accuracy, surpassing well-tuned
vision transformer/MLP-like baselines DeiT-B/ResMLP-B24 by 0.3%/1.1% accuracy
with 35%/52% fewer parameters and 48%/60% fewer MACs. The effectiveness of
PoolFormer verifies our hypothesis and urges us to initiate the concept of
"MetaFormer", a general architecture abstracted from transformers without
specifying the token mixer. Based on extensive experiments, we argue that
MetaFormer is the key player in achieving superior results for recent
transformer and MLP-like models on vision tasks. This work calls for more
future research dedicated to improving MetaFormer instead of focusing on the
token mixer modules. Additionally, our proposed PoolFormer could serve as a
starting baseline for future MetaFormer architecture design. Code is available
at https://github.com/sail-sg/poolformer
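To make the architecture concrete, below is a simplified PyTorch sketch of a PoolFormer block as described in the abstract: the MetaFormer structure (normalization, token mixer, residual connection, followed by a channel-MLP sub-block) with average pooling as the token mixer. This is an illustrative sketch only; details such as layer scale and stochastic depth in the official implementation are omitted, so refer to the repository above for the reference code.

```python
import torch
import torch.nn as nn


class Pooling(nn.Module):
    """PoolFormer token mixer: average pooling minus the identity.

    Subtracting the input keeps the branch comparable to attention/MLP
    mixers, whose residual branches also exclude the identity mapping.
    """

    def __init__(self, pool_size: int = 3):
        super().__init__()
        self.pool = nn.AvgPool2d(
            pool_size, stride=1, padding=pool_size // 2, count_include_pad=False
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, height, width)
        return self.pool(x) - x


class PoolFormerBlock(nn.Module):
    """One MetaFormer block with pooling as the token mixer (simplified)."""

    def __init__(self, dim: int, mlp_ratio: int = 4, pool_size: int = 3):
        super().__init__()
        self.norm1 = nn.GroupNorm(1, dim)  # channel-wise norm for NCHW tensors
        self.token_mixer = Pooling(pool_size)
        self.norm2 = nn.GroupNorm(1, dim)
        hidden = dim * mlp_ratio
        # Channel MLP implemented with 1x1 convolutions.
        self.mlp = nn.Sequential(
            nn.Conv2d(dim, hidden, 1), nn.GELU(), nn.Conv2d(hidden, dim, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = x + self.token_mixer(self.norm1(x))  # token-mixing sub-block
        x = x + self.mlp(self.norm2(x))          # channel-MLP sub-block
        return x


# Usage: a single block keeps the spatial shape unchanged.
if __name__ == "__main__":
    block = PoolFormerBlock(dim=64)
    out = block(torch.randn(1, 64, 56, 56))
    print(out.shape)  # torch.Size([1, 64, 56, 56])
```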
Related papers
- Mixture-of-Modules: Reinventing Transformers as Dynamic Assemblies of Modules [96.21649779507831]
We propose a novel architecture dubbed mixture-of-modules (MoM).
MoM is motivated by an intuition that any layer, regardless of its position, can be used to compute a token.
We show that MoM provides not only a unified framework for Transformers but also a flexible and learnable approach for reducing redundancy.
arXiv Detail & Related papers (2024-07-09T08:50:18Z)
- MetaFormer Baselines for Vision [173.16644649968393]
We introduce several baseline models under MetaFormer using the most basic or common mixers.
We find that MetaFormer ensures a solid lower bound of performance.
We also find that a new activation, StarReLU, reduces the FLOPs of the activation compared with GELU yet achieves better performance (see the sketch after this list).
arXiv Detail & Related papers (2022-10-24T17:59:57Z)
- Architecture-Agnostic Masked Image Modeling -- From ViT back to CNN [38.87225202482656]
Masked image modeling, an emerging self-supervised pre-training method, has shown impressive success across numerous downstream vision tasks with Vision transformers.
We propose an Architecture-Agnostic Masked Image Modeling framework (A$2$MIM), which is compatible with both Transformers and CNNs in a unified way.
arXiv Detail & Related papers (2022-05-27T12:42:02Z)
- Sparse MLP for Image Recognition: Is Self-Attention Really Necessary? [65.37917850059017]
We build an attention-free network called sMLPNet.
For 2D image tokens, sMLP applies 1D MLP along the axial directions, and the parameters are shared among rows or columns.
When scaling up to 66M parameters, sMLPNet achieves 83.4% top-1 accuracy, which is on par with the state-of-the-art Swin Transformer.
arXiv Detail & Related papers (2021-09-12T04:05:15Z)
- A Battle of Network Structures: An Empirical Study of CNN, Transformer, and MLP [121.35904748477421]
Convolutional neural networks (CNN) are the dominant deep neural network (DNN) architecture for computer vision.
Transformer and multi-layer perceptron (MLP)-based models, such as Vision Transformer and MLP-Mixer, started to lead new trends.
In this paper, we conduct empirical studies on these DNN structures and try to understand their respective pros and cons.
arXiv Detail & Related papers (2021-08-30T06:09:02Z)
- Self-Supervised Learning with Swin Transformers [24.956637957269926]
We present a self-supervised learning approach called MoBY, with Vision Transformers as its backbone architecture.
The approach introduces essentially nothing new; it is a combination of MoCo v2 and BYOL.
The performance is slightly better than that of recent works MoCo v3 and DINO, which adopt DeiT as the backbone, but with much lighter tricks.
arXiv Detail & Related papers (2021-05-10T17:59:45Z)
- Token Labeling: Training a 85.4% Top-1 Accuracy Vision Transformer with 56M Parameters on ImageNet [86.95679590801494]
We explore the potential of vision transformers in ImageNet classification by developing a bag of training techniques.
We show that by slightly tuning the structure of vision transformers and introducing token labeling, our models are able to achieve better results than their CNN counterparts.
arXiv Detail & Related papers (2021-04-22T04:43:06Z)
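As a footnote to the MetaFormer Baselines entry above, here is a minimal sketch of the StarReLU activation, assuming the form reported in that paper: a squared ReLU with a learnable scalar scale and bias (the initial values below are placeholders rather than the paper's derived constants).

```python
import torch
import torch.nn as nn


class StarReLU(nn.Module):
    """StarReLU(x) = s * ReLU(x)**2 + b, with learnable scalar scale s and bias b."""

    def __init__(self, scale: float = 1.0, bias: float = 0.0):
        super().__init__()
        self.scale = nn.Parameter(torch.tensor(scale))
        self.bias = nn.Parameter(torch.tensor(bias))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.scale * torch.relu(x) ** 2 + self.bias
```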