Sparse MLP for Image Recognition: Is Self-Attention Really Necessary?
- URL: http://arxiv.org/abs/2109.05422v1
- Date: Sun, 12 Sep 2021 04:05:15 GMT
- Title: Sparse MLP for Image Recognition: Is Self-Attention Really Necessary?
- Authors: Chuanxin Tang, Yucheng Zhao, Guangting Wang, Chong Luo, Wenxuan Xie
and Wenjun Zeng
- Abstract summary: We build an attention-free network called sMLPNet.
For 2D image tokens, sMLP applies 1D MLP along the axial directions and the parameters are shared among rows or columns.
When scaling up to 66M parameters, sMLPNet achieves 83.4% top-1 accuracy, which is on par with the state-of-the-art Swin Transformer.
- Score: 65.37917850059017
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Transformers have sprung up in the field of computer vision. In this work, we
explore whether the core self-attention module in Transformer is the key to
achieving excellent performance in image recognition. To this end, we build an
attention-free network called sMLPNet based on the existing MLP-based vision
models. Specifically, we replace the MLP module in the token-mixing step with a
novel sparse MLP (sMLP) module. For 2D image tokens, sMLP applies 1D MLP along
the axial directions and the parameters are shared among rows or columns. By
sparse connection and weight sharing, the sMLP module significantly reduces the
number of model parameters and computational complexity, avoiding the common
over-fitting problem that plagues the performance of MLP-like models. When only
trained on the ImageNet-1K dataset, the proposed sMLPNet achieves 81.9% top-1
accuracy with only 24M parameters, which is much better than most CNNs and
vision Transformers under the same model size constraint. When scaling up to
66M parameters, sMLPNet achieves 83.4% top-1 accuracy, which is on par with the
state-of-the-art Swin Transformer. The success of sMLPNet suggests that the
self-attention mechanism is not necessarily a silver bullet in computer vision.
Code will be made publicly available.
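To make the token-mixing step concrete, below is a minimal PyTorch sketch of an axial sparse-MLP module, assuming the formulation described in the abstract: one 1D linear layer mixes tokens along rows (weights shared across all rows) and another along columns (weights shared across all columns). The class name `SparseMLP` and the fusion of the three branches via concatenation and a 1x1 convolution are assumptions for illustration, not details taken from the abstract.

```python
import torch
import torch.nn as nn

class SparseMLP(nn.Module):
    """Minimal sketch of axial (sparse) token mixing.

    For an H x W grid of C-dimensional tokens, one linear layer mixes
    tokens along the W axis (the same W x W weights are reused for every
    row) and another mixes along the H axis (H x H weights reused for
    every column). The identity and both axial branches are fused by a
    1x1 convolution; the fusion step is an assumption, not taken from
    the abstract.
    """

    def __init__(self, dim: int, h: int, w: int) -> None:
        super().__init__()
        self.mix_w = nn.Linear(w, w)  # w*w weights, shared by all H rows
        self.mix_h = nn.Linear(h, h)  # h*h weights, shared by all W columns
        self.fuse = nn.Conv2d(3 * dim, dim, kernel_size=1)  # merge branches

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) feature map of image tokens
        x_w = self.mix_w(x)  # nn.Linear acts on the last dim, i.e. along W
        x_h = self.mix_h(x.transpose(2, 3)).transpose(2, 3)  # mix along H
        return self.fuse(torch.cat([x, x_w, x_h], dim=1))
```

The parameter saving is easy to verify: a dense token-mixing MLP over a 14 x 14 grid couples all 196 tokens and needs 196^2 = 38,416 weights, while the two axial layers above need only 14^2 + 14^2 = 392. This is the sparse connection and weight sharing that the abstract credits for curbing over-fitting.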
Related papers
- MLP-3D: A MLP-like 3D Architecture with Grouped Time Mixing [123.43419144051703]
We present a novel MLP-like 3D architecture for video recognition.
The results are comparable to those of state-of-the-art, widely-used 3D CNNs and video transformers.
arXiv Detail & Related papers (2022-06-13T16:21:33Z) - MDMLP: Image Classification from Scratch on Small Datasets with MLP [7.672827879118106]
Recently, the attention mechanism has become a go-to technique for natural language processing and computer vision tasks.
Recently, MLP-Mixer and other MLP-based architectures, built simply on multi-layer perceptrons (MLPs), have also proven powerful compared with CNNs and attention-based techniques.
arXiv Detail & Related papers (2022-05-28T16:26:59Z) - Efficient Language Modeling with Sparse all-MLP [53.81435968051093]
All-MLPs can match Transformers in language modeling, but still lag behind in downstream tasks.
We propose sparse all-MLPs with mixture-of-experts (MoEs) in both the feature and input (token) dimensions.
We evaluate its zero-shot in-context learning performance on six downstream tasks, and find that it surpasses Transformer-based MoEs and dense Transformers.
arXiv Detail & Related papers (2022-03-14T04:32:19Z) - Mixing and Shifting: Exploiting Global and Local Dependencies in Vision MLPs [84.3235981545673]
Token-mixing multi-layer perceptron (MLP) models have shown competitive performance in computer vision tasks.
We present Mix-Shift-MLP (MS-MLP), which enlarges the local receptive field used for mixing as the amount of spatial shifting grows.
MS-MLP achieves competitive performance in multiple vision benchmarks.
arXiv Detail & Related papers (2022-02-14T06:53:48Z) - RepMLPNet: Hierarchical Vision MLP with Re-parameterized Locality [113.1414517605892]
We propose a methodology, Locality Injection, to incorporate local priors into an FC layer.
RepMLPNet is the first MLP that seamlessly transfers to Cityscapes semantic segmentation.
arXiv Detail & Related papers (2021-12-21T10:28:17Z) - ConvMLP: Hierarchical Convolutional MLPs for Vision [7.874749885641495]
We propose ConvMLP: a hierarchical convolutional MLP for visual recognition, a light-weight, stage-wise co-design of convolution layers and MLPs.
We show that ConvMLP can be seamlessly transferred to downstream tasks and achieves competitive results with fewer parameters.
arXiv Detail & Related papers (2021-09-09T17:52:57Z) - CycleMLP: A MLP-like Architecture for Dense Prediction [26.74203747156439]
CycleMLP is a versatile backbone for visual recognition and dense predictions.
It can cope with various image sizes and achieves computational complexity linear in image size by using local windows.
CycleMLP aims to provide a competitive baseline on object detection, instance segmentation, and semantic segmentation for MLP models.
arXiv Detail & Related papers (2021-07-21T17:23:06Z) - S$^2$-MLP: Spatial-Shift MLP Architecture for Vision [34.47616917228978]
Recently, the vision Transformer (ViT) and its follow-up works have abandoned convolution and exploited the self-attention operation.
In this paper, we propose a novel pure MLP architecture, spatial-shift MLP (S$^2$-MLP); a minimal sketch of the spatial-shift idea appears after this list.
arXiv Detail & Related papers (2021-06-14T15:05:11Z) - MLP-Mixer: An all-MLP Architecture for Vision [93.16118698071993]
We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs).
Mixer attains competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models.
arXiv Detail & Related papers (2021-05-04T16:17:21Z)
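As referenced in the S$^2$-MLP entry above, here is a minimal sketch of spatial-shift token mixing, assuming the common formulation in which channels are split into four groups and each group is shifted by one token along one spatial direction. The function name `spatial_shift`, the four-group split, and the border handling are assumptions for illustration.

```python
import torch

def spatial_shift(x: torch.Tensor) -> torch.Tensor:
    """Sketch of parameter-free spatial-shift token mixing.

    Splits channels into four groups and shifts each group by one token
    along one direction (right, left, down, up), so a following per-token
    channel MLP can mix information from neighboring tokens without
    self-attention. Border tokens keep their original values (assumed).
    """
    b, c, h, w = x.shape
    g = c // 4  # one channel group per shift direction (assumed split)
    out = x.clone()
    out[:, 0 * g:1 * g, :, 1:] = x[:, 0 * g:1 * g, :, :-1]  # shift right
    out[:, 1 * g:2 * g, :, :-1] = x[:, 1 * g:2 * g, :, 1:]  # shift left
    out[:, 2 * g:3 * g, 1:, :] = x[:, 2 * g:3 * g, :-1, :]  # shift down
    out[:, 3 * g:4 * g, :-1, :] = x[:, 3 * g:4 * g, 1:, :]  # shift up
    return out
```

The shift itself has no learnable parameters; all learning stays in the per-token channel MLPs, which keeps the architecture pure-MLP while still propagating information across neighboring tokens.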
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.