RaftMLP: Do MLP-based Models Dream of Winning Over Computer Vision?
- URL: http://arxiv.org/abs/2108.04384v1
- Date: Mon, 9 Aug 2021 23:55:24 GMT
- Title: RaftMLP: Do MLP-based Models Dream of Winning Over Computer Vision?
- Authors: Yuki Tatsunami and Masato Taki
- Abstract summary: CNNs have reigned supreme in the world of computer vision for the past ten years, but recently, Transformers have been on the rise.
In particular, our work indicates that MLP-based models have the potential to replace CNNs by adopting inductive bias.
The proposed model, named RaftMLP, has a good balance of computational complexity, the number of parameters, and actual memory usage.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: For the past ten years, CNNs have reigned supreme in the world of
computer vision, but recently, Transformers have been on the rise. However, the
quadratic computational cost of self-attention has become a severe problem in
practice. In this context, there has been much research on architectures
without convolution or self-attention. In particular, MLP-Mixer is a simple
architecture designed using MLPs that achieves accuracy comparable to the
Vision Transformer. However, the only inductive bias in this architecture is
the embedding of tokens. Thus, there is still the possibility of building a
non-convolutional inductive bias into the architecture itself, and we did so
using two simple ideas. One is to divide the token-mixing block into vertical
and horizontal mixing. The other is to make spatial correlations denser among
some channels of token-mixing.
With this approach, we were able to improve the accuracy of the MLP-Mixer while
reducing its parameters and computational complexity. Compared to other
MLP-based models, the proposed model, named RaftMLP, has a good balance of
computational complexity, the number of parameters, and actual memory usage. In
addition, our work indicates that MLP-based models have the potential to
replace CNNs by adopting inductive bias. The source code in PyTorch version is
available at \url{https://github.com/okojoalg/raft-mlp}.
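To make the first idea concrete, here is a minimal PyTorch sketch of a token-mixing block divided into vertical and horizontal MLPs. It illustrates the general technique described in the abstract, not the authors' RaftMLP implementation; the module names, expansion factor, and residual arrangement are assumptions.

```python
import torch
import torch.nn as nn

class SeparatedTokenMixing(nn.Module):
    """Sketch: mix the H tokens of each column, then the W tokens of each
    row, instead of mixing all H*W tokens at once (illustrative only)."""
    def __init__(self, h, w, dim, expansion=2):
        super().__init__()
        self.h, self.w = h, w
        self.norm1 = nn.LayerNorm(dim)
        self.vertical = nn.Sequential(    # MLP over the H (column) axis
            nn.Linear(h, h * expansion), nn.GELU(), nn.Linear(h * expansion, h))
        self.norm2 = nn.LayerNorm(dim)
        self.horizontal = nn.Sequential(  # MLP over the W (row) axis
            nn.Linear(w, w * expansion), nn.GELU(), nn.Linear(w * expansion, w))

    def forward(self, x):                 # x: (B, H*W, C) patch tokens
        b, n, c = x.shape
        h, w = self.h, self.w
        y = self.norm1(x).reshape(b, h, w, c).permute(0, 2, 3, 1)  # (B, W, C, H)
        y = self.vertical(y).permute(0, 3, 1, 2).reshape(b, n, c)
        x = x + y                         # residual after vertical mixing
        y = self.norm2(x).reshape(b, h, w, c).permute(0, 1, 3, 2)  # (B, H, C, W)
        y = self.horizontal(y).permute(0, 1, 3, 2).reshape(b, n, c)
        return x + y                      # residual after horizontal mixing
```

The payoff of the factorization is easy to see: a joint token-mixing MLP needs weights that scale with (HW)^2, while the separated version scales with H^2 + W^2, which is where the parameter and FLOP savings claimed in the abstract come from.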
Related papers
- MLP Can Be A Good Transformer Learner [73.01739251050076]
The self-attention mechanism is the key component of the Transformer but is often criticized for its computational demands.
This paper introduces a novel strategy that simplifies vision transformers and reduces computational load through the selective removal of non-essential attention layers.
arXiv Detail & Related papers (2024-04-08T16:40:15Z)
- Parameterization of Cross-Token Relations with Relative Positional Encoding for Vision MLP [52.25478388220691]
Vision multi-layer perceptrons (MLPs) have shown promising performance in computer vision tasks.
They use token-mixing layers to capture cross-token interactions, as opposed to the multi-head self-attention mechanism used by Transformers.
We propose a new positional spatial gating unit (PoSGU) to efficiently encode the cross-token relations for token mixing (a hedged sketch follows below).
arXiv Detail & Related papers (2022-07-15T04:18:06Z)
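One plausible reading of the PoSGU idea, going only by the summary above, is a token-mixing matrix generated from a learnable relative-position table rather than stored as a free N x N weight. The sketch below is that reading in PyTorch; the class name and table layout are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class RelPosTokenMixing(nn.Module):
    """Hedged sketch: the token-mixing matrix is looked up from a learnable
    relative-position table, so parameters grow with the relative-offset
    range, not with N^2 (illustrative, not the PoSGU implementation)."""
    def __init__(self, h, w):
        super().__init__()
        self.h, self.w = h, w
        # one learnable scalar per 2D relative offset: (2h-1)*(2w-1) entries
        self.table = nn.Parameter(torch.zeros((2 * h - 1) * (2 * w - 1)))
        ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
        coords = torch.stack([ys.flatten(), xs.flatten()])       # (2, N)
        rel = coords[:, :, None] - coords[:, None, :]            # (2, N, N)
        idx = (rel[0] + h - 1) * (2 * w - 1) + (rel[1] + w - 1)  # (N, N)
        self.register_buffer("idx", idx)

    def forward(self, x):                # x: (B, N, C) with N = h * w
        weight = self.table[self.idx]    # (N, N) mixing matrix from rel. positions
        return torch.einsum("mn,bnc->bmc", weight, x)
```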
- MLP-3D: A MLP-like 3D Architecture with Grouped Time Mixing [123.43419144051703]
We present a novel MLP-like 3D architecture for video recognition.
The results are comparable to state-of-the-art widely-used 3D CNNs and video transformers.
arXiv Detail & Related papers (2022-06-13T16:21:33Z)
- Sparse MLP for Image Recognition: Is Self-Attention Really Necessary? [65.37917850059017]
We build an attention-free network called sMLPNet.
For 2D image tokens, sMLP applies 1D MLPs along the axial directions, with parameters shared among rows or columns (a hedged sketch follows below).
When scaling up to 66M parameters, sMLPNet achieves 83.4% top-1 accuracy, which is on par with the state-of-the-art Swin Transformer.
arXiv Detail & Related papers (2021-09-12T04:05:15Z)
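A minimal sketch of the axial sparse mixing described above: two 1D linear layers, one shared across all rows and one across all columns, combined with an identity branch and fused by a pointwise projection. The three-branch fusion is an assumption read off the summary, not the official sMLPNet code.

```python
import torch
import torch.nn as nn

class SparseAxialMixing(nn.Module):
    """Sketch of sparse axial token mixing: each token only attends to tokens
    in its own row and its own column (illustrative only)."""
    def __init__(self, h, w, dim):
        super().__init__()
        self.mix_w = nn.Linear(w, w)         # 1D MLP along rows, shared by all rows
        self.mix_h = nn.Linear(h, h)         # 1D MLP along columns, shared by all columns
        self.fuse = nn.Linear(dim * 3, dim)  # merge identity / row / column paths

    def forward(self, x):                    # x: (B, H, W, C)
        row = self.mix_w(x.permute(0, 1, 3, 2)).permute(0, 1, 3, 2)  # mix along W
        col = self.mix_h(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)  # mix along H
        return self.fuse(torch.cat([x, row, col], dim=-1))
```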
- CycleMLP: A MLP-like Architecture for Dense Prediction [26.74203747156439]
CycleMLP is a versatile backbone for visual recognition and dense prediction.
It can cope with various image sizes and achieves computational complexity linear in image size by using local windows.
CycleMLP aims to provide a competitive baseline on object detection, instance segmentation, and semantic segmentation for MLP-based models.
arXiv Detail & Related papers (2021-07-21T17:23:06Z)
- AS-MLP: An Axial Shifted MLP Architecture for Vision [50.11765148947432]
An Axial Shifted MLP architecture (AS-MLP) is proposed in this paper.
By axially shifting channels of the feature map, AS-MLP is able to obtain information flow from different directions (a hedged sketch follows below).
With the proposed AS-MLP architecture, our model obtains 83.3% Top-1 accuracy with 88M parameters and 15.2 GFLOPs on the ImageNet-1K dataset.
arXiv Detail & Related papers (2021-07-18T08:56:34Z)
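The axial shift itself is simple to sketch: channel groups are rolled by different offsets along one spatial axis, so a following pointwise (channel) MLP sees neighboring pixels. Note the real AS-MLP shifts with zero padding rather than the circular torch.roll used here, and the group count and offsets are illustrative.

```python
import torch
import torch.nn as nn

def axial_shift(x, dim, offsets=(-1, 0, 1)):
    """Roll equal channel groups by different offsets along one spatial axis.
    Illustrative: AS-MLP proper uses zero-padded shifts, not circular rolls."""
    groups = x.chunk(len(offsets), dim=1)  # split channels into groups
    shifted = [torch.roll(g, s, dims=dim) for g, s in zip(groups, offsets)]
    return torch.cat(shifted, dim=1)

# after shifting along H (dim=2) and W (dim=3), a pointwise MLP mixes channels,
# which now carry information from neighboring rows and columns
x = torch.randn(2, 96, 14, 14)             # (B, C, H, W)
y = axial_shift(axial_shift(x, dim=2), dim=3)
channel_mlp = nn.Sequential(nn.Conv2d(96, 384, 1), nn.GELU(), nn.Conv2d(384, 96, 1))
out = channel_mlp(y)                       # (2, 96, 14, 14)
```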
- Rethinking Token-Mixing MLP for MLP-based Vision Backbone [34.47616917228978]
We propose an improved structure termed Circulant Channel-Specific (CCS) token-mixing MLP, which is spatial-invariant and channel-specific (a hedged sketch follows below).
It takes fewer parameters but achieves higher classification accuracy on ImageNet-1K.
arXiv Detail & Related papers (2021-06-28T17:59:57Z)
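A hedged sketch of a circulant, channel-specific token-mixing layer consistent with the summary: each channel owns a single length-N weight vector expanded into a circulant N x N matrix, so mixing is spatial-invariant yet differs per channel, and token-mixing parameters drop from N^2 to C*N. The class and buffer names are assumptions, not the paper's code.

```python
import torch
import torch.nn as nn

class CirculantTokenMixing(nn.Module):
    """Sketch: per-channel circulant token mixing, i.e.
    out[b, m, c] = sum_n w[c, (n - m) mod N] * x[b, n, c] (illustrative)."""
    def __init__(self, n_tokens, dim):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(dim, n_tokens) * 0.02)
        i = torch.arange(n_tokens)
        # circulant index: row m, column n -> offset (n - m) mod N
        self.register_buffer("idx", (i[None, :] - i[:, None]) % n_tokens)

    def forward(self, x):               # x: (B, N, C)
        mat = self.weight[:, self.idx]  # (C, N, N), circulant per channel
        return torch.einsum("bnc,cmn->bmc", x, mat)
```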
- S$^2$-MLP: Spatial-Shift MLP Architecture for Vision [34.47616917228978]
Recently, the visual Transformer (ViT) and its follow-up works have abandoned convolution and exploited the self-attention operation.
In this paper, we propose a novel pure MLP architecture, spatial-shift MLP (S$^2$-MLP); a hedged sketch of its spatial-shift operation follows below.
arXiv Detail & Related papers (2021-06-14T15:05:11Z)
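The spatial-shift operation is easy to write down: four channel groups are shifted one pixel in the four directions, so that subsequent per-token channel mixing blends neighboring pixels. The sketch below follows that description; the (B, H, W, C) layout and the border handling (edges left unchanged) are assumptions.

```python
import torch

def spatial_shift(x):
    """Sketch: shift four channel groups one pixel right/left/down/up so a
    following per-token MLP mixes neighbors (illustrative only)."""
    b, h, w, c = x.shape
    g = c // 4
    out = x.clone()
    out[:, :, 1:, 0:g]     = x[:, :, :-1, 0:g]      # shift right along W
    out[:, :, :-1, g:2*g]  = x[:, :, 1:, g:2*g]     # shift left along W
    out[:, 1:, :, 2*g:3*g] = x[:, :-1, :, 2*g:3*g]  # shift down along H
    out[:, :-1, :, 3*g:]   = x[:, 1:, :, 3*g:]      # shift up along H
    return out

x = torch.randn(2, 14, 14, 96)  # (B, H, W, C)
y = spatial_shift(x)            # same shape; channels now carry shifted neighbors
```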
- MLP-Mixer: An all-MLP Architecture for Vision [93.16118698071993]
We present MLP-Mixer, an architecture based exclusively on multi-layer perceptrons (MLPs).
Mixer attains competitive scores on image classification benchmarks, with pre-training and inference cost comparable to state-of-the-art models (a minimal block sketch follows below).
arXiv Detail & Related papers (2021-05-04T16:17:21Z)
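For reference, a minimal Mixer block following the published description: a token-mixing MLP applied across patches, then a channel-mixing MLP applied per patch, each with LayerNorm and a residual connection. The hidden sizes here are illustrative, not a specific Mixer configuration.

```python
import torch
import torch.nn as nn

class MixerBlock(nn.Module):
    """Minimal Mixer block: token mixing over the patch axis, then channel
    mixing over features, each pre-normed with a residual connection."""
    def __init__(self, n_tokens, dim, token_hidden=256, channel_hidden=2048):
        super().__init__()
        self.norm1 = nn.LayerNorm(dim)
        self.token_mlp = nn.Sequential(
            nn.Linear(n_tokens, token_hidden), nn.GELU(),
            nn.Linear(token_hidden, n_tokens))
        self.norm2 = nn.LayerNorm(dim)
        self.channel_mlp = nn.Sequential(
            nn.Linear(dim, channel_hidden), nn.GELU(),
            nn.Linear(channel_hidden, dim))

    def forward(self, x):                          # x: (B, N, C)
        y = self.norm1(x).transpose(1, 2)          # (B, C, N)
        x = x + self.token_mlp(y).transpose(1, 2)  # mix across tokens
        return x + self.channel_mlp(self.norm2(x)) # mix across channels
```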
This list is automatically generated from the titles and abstracts of the papers on this site.