MetaFormer Baselines for Vision
- URL: http://arxiv.org/abs/2210.13452v3
- Date: Sat, 2 Dec 2023 07:46:46 GMT
- Title: MetaFormer Baselines for Vision
- Authors: Weihao Yu, Chenyang Si, Pan Zhou, Mi Luo, Yichen Zhou, Jiashi Feng,
Shuicheng Yan, Xinchao Wang
- Abstract summary: We introduce several baseline models under MetaFormer using the most basic or common mixers.
We find that MetaFormer ensures a solid lower bound on performance.
We also find that a new activation, StarReLU, reduces the FLOPs of the activation function compared with GELU yet achieves better performance.
- Score: 173.16644649968393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: MetaFormer, the abstracted architecture of Transformer, has been found to
play a significant role in achieving competitive performance. In this paper, we
further explore the capacity of MetaFormer, again, without focusing on token
mixer design: we introduce several baseline models under MetaFormer using the
most basic or common mixers, and summarize our observations as follows: (1)
MetaFormer ensures a solid lower bound on performance. By merely adopting
identity mapping as the token mixer, the MetaFormer model, termed
IdentityFormer, achieves >80% accuracy on ImageNet-1K. (2) MetaFormer works
well with arbitrary token mixers. When specifying the token mixer as even a
random matrix to mix tokens, the resulting model RandFormer yields an accuracy
of >81%, outperforming IdentityFormer. One can thus rest assured of
MetaFormer's results when new token mixers are adopted. (3) MetaFormer
effortlessly offers state-of-the-art results. With just conventional token
mixers dating back five years, the models instantiated from MetaFormer
already beat the state of the art. (a) ConvFormer outperforms ConvNeXt.
Taking the common depthwise separable
convolutions as the token mixer, the model termed ConvFormer, which can be
regarded as pure CNNs, outperforms the strong CNN model ConvNeXt. (b)
CAFormer sets a new record on ImageNet-1K. By simply applying depthwise separable
convolutions as token mixer in the bottom stages and vanilla self-attention in
the top stages, the resulting model CAFormer sets a new record on ImageNet-1K:
it achieves an accuracy of 85.5% at 224x224 resolution, under normal supervised
training without external data or distillation. In our expedition to probe
MetaFormer, we also find that a new activation, StarReLU, reduces the
activation function's FLOPs by 71% compared with GELU yet achieves better
performance. We expect StarReLU to show great potential in MetaFormer-like
models alongside other neural networks.
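The first two observations above can be sketched in a few lines. The snippet below is a minimal, illustrative take on the MetaFormer "token mixer" slot, assuming tokens are simply a list of N feature vectors: IdentityFormer plugs in the identity mapping, while RandFormer mixes tokens with a fixed random matrix that is frozen after initialization. The function names and the Gaussian initialization scale are assumptions for illustration, not the paper's official implementation.

```python
import random

def identity_mixer(tokens):
    # IdentityFormer's mixer: no spatial communication between tokens at all
    return tokens

def make_random_mixer(num_tokens, seed=0):
    # RandFormer's mixer: a fixed random matrix W that is frozen, not learned.
    # The 1/sqrt(N) scale is an assumed variance-preserving choice.
    rng = random.Random(seed)
    w = [[rng.gauss(0, 1 / num_tokens ** 0.5) for _ in range(num_tokens)]
         for _ in range(num_tokens)]

    def mixer(tokens):
        # each output token i is a fixed random linear combination of all tokens
        dim = len(tokens[0])
        return [[sum(w[i][j] * tokens[j][d] for j in range(num_tokens))
                 for d in range(dim)]
                for i in range(num_tokens)]

    return mixer

tokens = [[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]]  # 3 tokens, 2 channels each
assert identity_mixer(tokens) == tokens
mixed = make_random_mixer(len(tokens))(tokens)  # same shape, tokens now mixed
```

Everything else in the block (normalization, channel MLP, residual connections) stays as in the standard MetaFormer template; only this mixer function is swapped.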
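The StarReLU claim can also be made concrete. The sketch below assumes the form StarReLU(x) = s · ReLU(x)² + b with learnable scalars s and b; the initial values used here (s ≈ 0.8944, b ≈ −0.4472) are an assumption chosen so a standard-normal input yields roughly zero-mean, unit-variance output, and the per-element FLOP counts in the comments (≈4 for StarReLU vs. ≈14 for tanh-approximated GELU, hence the ~71% reduction) are likewise estimates; consult the paper and official code for exact figures.

```python
import math

def star_relu(x, s=0.8944, b=-0.4472):
    # StarReLU: s * ReLU(x)**2 + b
    # roughly 1 comparison + 2 multiplies + 1 add per element (~4 FLOPs)
    r = max(x, 0.0)
    return s * r * r + b

def gelu(x):
    # tanh approximation of GELU, commonly estimated at ~14 FLOPs per element
    return 0.5 * x * (1.0 + math.tanh(math.sqrt(2.0 / math.pi)
                                      * (x + 0.044715 * x ** 3)))

print(star_relu(1.0))   # ≈ s + b ≈ 0.4472
print(star_relu(-2.0))  # negative inputs collapse to the bias b ≈ -0.4472
```

In practice s and b would be registered as trainable parameters (shared across channels), so the activation can adapt its scale and offset during training.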
Related papers
- ParFormer: Vision Transformer Baseline with Parallel Local Global Token Mixer and Convolution Attention Patch Embedding [3.4140488674588614]
ParFormer is an enhanced transformer architecture that allows the incorporation of different token mixers into a single stage.
We offer the Convolutional Attention Patch Embedding (CAPE) as an enhancement of standard patch embedding to improve token mixer extraction.
Our model variants with 11M, 23M, and 34M parameters achieve scores of 80.4%, 82.1%, and 83.1%, respectively.
arXiv Detail & Related papers (2024-03-22T07:32:21Z) - RIFormer: Keep Your Vision Backbone Effective While Removing Token Mixer [95.71132572688143]
This paper studies how to keep a vision backbone effective while removing token mixers in its basic building blocks.
Token mixers, such as self-attention in vision transformers (ViTs), are intended to perform information communication between different spatial tokens but suffer from considerable computational cost and latency.
arXiv Detail & Related papers (2023-04-12T07:34:13Z) - Centroid-centered Modeling for Efficient Vision Transformer Pre-training [109.18486172045701]
Masked Image Modeling (MIM) is a new self-supervised vision pre-training paradigm using Vision Transformers (ViT).
Our proposed approach, CCViT, leverages k-means clustering to obtain centroids for image modeling without supervised training of a tokenizer model.
Experiments show that the ViT-B model with only 300 epochs achieves 84.3% top-1 accuracy on ImageNet-1K classification and 51.6% on ADE20K semantic segmentation.
arXiv Detail & Related papers (2023-03-08T15:34:57Z) - BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers [117.79456335844439]
We propose to use a semantic-rich visual tokenizer as the reconstruction target for masked prediction.
We then pretrain vision Transformers by predicting the original visual tokens for the masked image patches.
Experiments on image classification and semantic segmentation show that our approach outperforms all compared MIM methods.
arXiv Detail & Related papers (2022-08-12T16:48:10Z) - TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers [36.630476419392046]
CutMix is a popular augmentation technique commonly used for training modern convolutional and transformer vision networks.
We propose a novel data augmentation technique TokenMix to improve the performance of vision transformers.
arXiv Detail & Related papers (2022-07-18T07:08:29Z) - MetaFormer: A Unified Meta Framework for Fine-Grained Recognition [16.058297377539418]
We propose a unified and strong meta-framework for fine-grained visual classification.
In practice, MetaFormer provides a simple yet effective approach to address the joint learning of vision and various meta-information.
In experiments, MetaFormer can effectively use various meta-information to improve the performance of fine-grained recognition.
arXiv Detail & Related papers (2022-03-05T14:12:25Z) - MetaFormer is Actually What You Need for Vision [175.86264904607785]
We replace the attention module in transformers with an embarrassingly simple spatial pooling operator.
Surprisingly, we observe that the derived model achieves competitive performance on multiple computer vision tasks.
arXiv Detail & Related papers (2021-11-22T18:52:03Z) - MetaDelta: A Meta-Learning System for Few-shot Image Classification [71.06324527247423]
We propose MetaDelta, a novel practical meta-learning system for the few-shot image classification.
Each meta-learner in MetaDelta is composed of a unique pretrained encoder fine-tuned by batch training and a parameter-free decoder used for prediction.
arXiv Detail & Related papers (2021-02-22T02:57:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.