AS-MLP: An Axial Shifted MLP Architecture for Vision
- URL: http://arxiv.org/abs/2107.08391v1
- Date: Sun, 18 Jul 2021 08:56:34 GMT
- Title: AS-MLP: An Axial Shifted MLP Architecture for Vision
- Authors: Dongze Lian, Zehao Yu, Xing Sun, Shenghua Gao
- Abstract summary: An Axial Shifted MLP architecture (AS-MLP) is proposed in this paper.
By axially shifting channels of the feature map, AS-MLP is able to obtain the information flow from different directions.
With the proposed AS-MLP architecture, our model obtains 83.3% Top-1 accuracy with 88M parameters and 15.2 GFLOPs on the ImageNet-1K dataset.
- Score: 50.11765148947432
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: An Axial Shifted MLP architecture (AS-MLP) is proposed in this paper.
Different from MLP-Mixer, where the global spatial feature is encoded for the
information flow through matrix transposition and one token-mixing MLP, we pay
more attention to the local features communication. By axially shifting
channels of the feature map, AS-MLP is able to obtain the information flow from
different axial directions, which captures the local dependencies. Such an
operation enables us to utilize a pure MLP architecture to achieve the same
local receptive field as a CNN-like architecture. We can also design the
receptive field size and dilation of AS-MLP blocks, etc., just like designing
those of convolution kernels. With the proposed AS-MLP architecture, our model
obtains 83.3% Top-1 accuracy with 88M parameters and 15.2 GFLOPs on the
ImageNet-1K dataset. Such a simple yet effective architecture outperforms all
MLP-based architectures and achieves competitive performance compared to the
transformer-based architectures (e.g., Swin Transformer) even with slightly
lower FLOPs. In addition, AS-MLP is also the first MLP-based architecture to be
applied to the downstream tasks (e.g., object detection and semantic
segmentation). The experimental results are also impressive. Our proposed
AS-MLP obtains 51.5 mAP on the COCO validation set and 49.5 MS mIoU on the
ADE20K dataset, which is competitive compared to the transformer-based
architectures. Code is available at https://github.com/svip-lab/AS-MLP.
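To make the axial shift operation concrete, here is a minimal PyTorch-style sketch, not the authors' implementation: the function name axial_shift is illustrative, and torch.roll shifts circularly, whereas the paper discusses padding choices for the positions shifted out of the feature map.

```python
import torch

def axial_shift(x, shift_size=3, dim=2):
    """Illustrative axial shift (not the official AS-MLP code).

    x: feature map of shape (B, C, H, W). The channels are split into
    `shift_size` groups and each group is shifted by a different offset
    along one spatial axis (dim=2: vertical, dim=3: horizontal), so that a
    subsequent channel-mixing MLP at each position sees features from
    neighbouring positions along that axis.
    """
    chunks = torch.chunk(x, shift_size, dim=1)                 # channel groups
    offsets = range(-(shift_size // 2), shift_size // 2 + 1)   # e.g. -1, 0, 1
    shifted = [torch.roll(c, shifts=o, dims=dim) for c, o in zip(chunks, offsets)]
    return torch.cat(shifted, dim=1)

# Shifting vertically and then horizontally, each followed by a 1x1 channel
# MLP, yields a cross-shaped local receptive field; a larger shift_size
# enlarges that receptive field, analogous to a larger convolution kernel.
x = torch.randn(1, 96, 56, 56)
y = axial_shift(axial_shift(x, dim=2), dim=3)
```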
Related papers
- MDMLP: Image Classification from Scratch on Small Datasets with MLP [7.672827879118106]
Recently, the attention mechanism has become a go-to technique for natural language processing and computer vision tasks.
Recently, MLP-Mixer and other MLP-based architectures, based simply on multi-layer perceptrons (MLPs), have also proven powerful compared to CNNs and attention techniques.
arXiv Detail & Related papers (2022-05-28T16:26:59Z) - Mixing and Shifting: Exploiting Global and Local Dependencies in Vision
MLPs [84.3235981545673]
Token-mixing multi-layer perceptron (MLP) models have shown competitive performance in computer vision tasks.
We present Mix-Shift-MLP (MS-MLP), which makes the size of the local receptive field used for mixing increase with the amount of spatial shifting.
MS-MLP achieves competitive performance in multiple vision benchmarks.
arXiv Detail & Related papers (2022-02-14T06:53:48Z) - RepMLPNet: Hierarchical Vision MLP with Re-parameterized Locality [113.1414517605892]
We propose a methodology, Locality Injection, to incorporate local priors into an FC layer.
RepMLPNet is the first MLP model that seamlessly transfers to Cityscapes semantic segmentation.
arXiv Detail & Related papers (2021-12-21T10:28:17Z) - Sparse MLP for Image Recognition: Is Self-Attention Really Necessary? [65.37917850059017]
We build an attention-free network called sMLPNet.
For 2D image tokens, sMLP applies 1D MLPs along the axial directions and the parameters are shared among rows or columns (a minimal sketch of this axis-wise mixing appears after this list).
When scaling up to 66M parameters, sMLPNet achieves 83.4% top-1 accuracy, which is on par with the state-of-the-art Swin Transformer.
arXiv Detail & Related papers (2021-09-12T04:05:15Z) - ConvMLP: Hierarchical Convolutional MLPs for Vision [7.874749885641495]
We propose ConvMLP: a hierarchical Convolutional MLP for visual recognition, built as a light-weight, stage-wise co-design of convolution layers and MLPs.
We show that ConvMLP can be seamlessly transferred and achieve competitive results with fewer parameters.
arXiv Detail & Related papers (2021-09-09T17:52:57Z) - Hire-MLP: Vision MLP via Hierarchical Rearrangement [58.33383667626998]
Hire-MLP is a simple yet competitive vision MLP architecture via hierarchical rearrangement.
The proposed Hire-MLP architecture is built with simple channel-mixing operations, thus enjoys high flexibility and inference speed.
Experiments show that our Hire-MLP achieves state-of-the-art performance on the ImageNet-1K benchmark.
arXiv Detail & Related papers (2021-08-30T16:11:04Z) - CycleMLP: A MLP-like Architecture for Dense Prediction [26.74203747156439]
CycleMLP is a versatile backbone for visual recognition and dense predictions.
It can cope with various image sizes and achieves linear computational complexity to image size by using local windows.
CycleMLP aims to provide a competitive baseline on object detection, instance segmentation, and semantic segmentation for MLP models.
arXiv Detail & Related papers (2021-07-21T17:23:06Z) - S$^2$-MLP: Spatial-Shift MLP Architecture for Vision [34.47616917228978]
Recently, visual Transformer (ViT) and its following works abandon the convolution and exploit the self-attention operation.
In this paper, we propose a novel pure MLP architecture, spatial-shift MLP (S$^2$-MLP).
arXiv Detail & Related papers (2021-06-14T15:05:11Z) - MLP-Mixer: An all-MLP Architecture for Vision [93.16118698071993]
We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs).
Mixer attains competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models.
arXiv Detail & Related papers (2021-05-04T16:17:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.