MetaFormer Baselines for Vision
- URL: http://arxiv.org/abs/2210.13452v3
- Date: Sat, 2 Dec 2023 07:46:46 GMT
- Title: MetaFormer Baselines for Vision
- Authors: Weihao Yu, Chenyang Si, Pan Zhou, Mi Luo, Yichen Zhou, Jiashi Feng,
Shuicheng Yan, Xinchao Wang
- Abstract summary: We introduce several baseline models under MetaFormer using the most basic or common mixers.
We find that MetaFormer ensures a solid lower bound of performance.
We also find that a new activation, StarReLU, reduces activation FLOPs by 71% compared with GELU yet achieves better performance.
- Score: 173.16644649968393
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: MetaFormer, the abstracted architecture of Transformer, has been found to
play a significant role in achieving competitive performance. In this paper, we
further explore the capacity of MetaFormer, again, without focusing on token
mixer design: we introduce several baseline models under MetaFormer using the
most basic or common mixers, and summarize our observations as follows: (1)
MetaFormer ensures a solid lower bound of performance. By merely adopting
identity mapping as the token mixer, the MetaFormer model, termed
IdentityFormer, achieves >80% accuracy on ImageNet-1K. (2) MetaFormer works
well with arbitrary token mixers. When specifying the token mixer as even a
random matrix to mix tokens, the resulting model RandFormer yields an accuracy
of >81%, outperforming IdentityFormer. The performance of MetaFormer can thus
be relied on when new token mixers are adopted. (3) MetaFormer effortlessly offers
state-of-the-art results. With just conventional token mixers dating back five
years, the models instantiated from MetaFormer already beat the state of the
art. (a) ConvFormer outperforms ConvNeXt. Taking the common depthwise separable
convolutions as the token mixer, the model termed ConvFormer, which can be
regarded as pure CNNs, outperforms the strong CNN model ConvNeXt. (b) CAFormer
sets a new record on ImageNet-1K. By simply applying depthwise separable
convolutions as token mixer in the bottom stages and vanilla self-attention in
the top stages, the resulting model CAFormer sets a new record on ImageNet-1K:
it achieves an accuracy of 85.5% at 224x224 resolution, under normal supervised
training without external data or distillation. In our expedition to probe
MetaFormer, we also find that a new activation, StarReLU, reduces activation
FLOPs by 71% compared with GELU yet achieves better performance. We expect
StarReLU to show great potential in MetaFormer-like models as well as other
neural networks.
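
The abstract describes MetaFormer as a block template with a swappable token mixer, plus the StarReLU activation defined as s * ReLU(x)^2 + b with a learnable scalar scale and bias. The sketch below is a minimal, hypothetical PyTorch rendering of both ideas under those assumptions, not the authors' released code; the module names (SepConvMixer, MetaFormerBlock), the expansion ratio, kernel size, and the initial scale/bias values are illustrative choices, not taken from the paper.

```python
import torch
import torch.nn as nn


class StarReLU(nn.Module):
    """StarReLU: s * relu(x)**2 + b with learnable scalar scale and bias.

    Initializing scale to 1.0 and bias to 0.0 is an assumption of this sketch.
    """

    def __init__(self):
        super().__init__()
        self.scale = nn.Parameter(torch.ones(1))
        self.bias = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        return self.scale * torch.relu(x) ** 2 + self.bias


class SepConvMixer(nn.Module):
    """Depthwise separable convolution token mixer (ConvFormer-style) for
    (B, H, W, C) inputs; the expansion ratio and kernel size are assumptions."""

    def __init__(self, dim, expansion=2, kernel_size=7):
        super().__init__()
        hidden = dim * expansion
        self.pw1 = nn.Linear(dim, hidden)          # pointwise expansion
        self.act = StarReLU()
        self.dw = nn.Conv2d(hidden, hidden, kernel_size,
                            padding=kernel_size // 2, groups=hidden)  # depthwise
        self.pw2 = nn.Linear(hidden, dim)          # pointwise projection

    def forward(self, x):             # x: (B, H, W, C)
        x = self.act(self.pw1(x))
        x = x.permute(0, 3, 1, 2)     # to (B, C, H, W) for the depthwise conv
        x = self.dw(x)
        x = x.permute(0, 2, 3, 1)     # back to (B, H, W, C)
        return self.pw2(x)


class MetaFormerBlock(nn.Module):
    """Generic MetaFormer block: norm -> token mixer -> residual,
    then norm -> channel MLP -> residual. The token mixer is a plug-in."""

    def __init__(self, dim, token_mixer=None, mlp_ratio=4):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mixer = token_mixer if token_mixer is not None else nn.Identity()
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, dim * mlp_ratio),
            StarReLU(),
            nn.Linear(dim * mlp_ratio, dim),
        )

    def forward(self, x):             # x: (B, H, W, C)
        x = x + self.token_mixer(self.norm1(x))
        x = x + self.mlp(self.norm2(x))
        return x


if __name__ == "__main__":
    x = torch.randn(2, 14, 14, 64)
    # IdentityFormer-style block: identity mapping as the token mixer.
    print(MetaFormerBlock(64, nn.Identity())(x).shape)
    # ConvFormer-style block: depthwise separable convolution as the token mixer.
    print(MetaFormerBlock(64, SepConvMixer(64))(x).shape)
```

Under the same reading of the abstract, a RandFormer-style block would replace the mixer with a frozen random token-mixing matrix, and a CAFormer-style model would use the depthwise-convolution mixer in the bottom stages and vanilla self-attention in the top stages.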
Related papers
- Neural Metamorphosis [72.88137795439407]
This paper introduces a new learning paradigm termed Neural Metamorphosis (NeuMeta), which aims to build self-morphable neural networks.
NeuMeta directly learns the continuous weight manifold of neural networks.
It sustains full-size performance even at a 75% compression rate.
arXiv Detail & Related papers (2024-10-10T14:49:58Z)
- RIFormer: Keep Your Vision Backbone Effective While Removing Token Mixer [95.71132572688143]
This paper studies how to keep a vision backbone effective while removing token mixers in its basic building blocks.
Token mixers, such as self-attention in vision transformers (ViTs), are intended to perform information communication between different spatial tokens but suffer from considerable computational cost and latency.
arXiv Detail & Related papers (2023-04-12T07:34:13Z)
- BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers [117.79456335844439]
We propose to use a semantic-rich visual tokenizer as the reconstruction target for masked prediction.
We then pretrain vision Transformers by predicting the original visual tokens for the masked image patches.
Experiments on image classification and semantic segmentation show that our approach outperforms all compared MIM methods.
arXiv Detail & Related papers (2022-08-12T16:48:10Z)
- TokenMix: Rethinking Image Mixing for Data Augmentation in Vision Transformers [36.630476419392046]
CutMix is a popular augmentation technique commonly used for training modern convolutional and transformer vision networks.
We propose a novel data augmentation technique TokenMix to improve the performance of vision transformers.
arXiv Detail & Related papers (2022-07-18T07:08:29Z)
- MetaFormer: A Unified Meta Framework for Fine-Grained Recognition [16.058297377539418]
We propose a unified and strong meta-framework for fine-grained visual classification.
In practice, MetaFormer provides a simple yet effective approach to address the joint learning of vision and various meta-information.
In experiments, MetaFormer can effectively use various meta-information to improve the performance of fine-grained recognition.
arXiv Detail & Related papers (2022-03-05T14:12:25Z)
- MetaFormer is Actually What You Need for Vision [175.86264904607785]
We replace the attention module in transformers with an embarrassingly simple spatial pooling operator.
Surprisingly, we observe that the derived model achieves competitive performance on multiple computer vision tasks.
arXiv Detail & Related papers (2021-11-22T18:52:03Z)
- MetaDelta: A Meta-Learning System for Few-shot Image Classification [71.06324527247423]
We propose MetaDelta, a novel practical meta-learning system for the few-shot image classification.
Each meta-learner in MetaDelta is composed of a unique pretrained encoder fine-tuned by batch training and a parameter-free decoder used for prediction.
arXiv Detail & Related papers (2021-02-22T02:57:22Z)
- MetaMix: Improved Meta-Learning with Interpolation-based Consistency Regularization [14.531741503372764]
We propose an approach called MetaMix, which generates virtual feature-target pairs within each episode to regularize the backbone models.
It can be integrated with any of the MAML-based algorithms and learn the decision boundaries generalizing better to new tasks.
arXiv Detail & Related papers (2020-09-29T02:44:13Z)