Scale-Aware Modulation Meet Transformer
- URL: http://arxiv.org/abs/2307.08579v2
- Date: Wed, 26 Jul 2023 07:44:32 GMT
- Title: Scale-Aware Modulation Meet Transformer
- Authors: Weifeng Lin, Ziheng Wu, Jiayu Chen, Jun Huang, Lianwen Jin
- Abstract summary: This paper presents a new vision Transformer, the Scale-Aware Modulation Transformer (SMT).
SMT handles various downstream tasks efficiently by combining convolutional networks and vision Transformers.
- Score: 28.414901658729107
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper presents a new vision Transformer, Scale-Aware Modulation
Transformer (SMT), which handles various downstream tasks efficiently by
combining convolutional networks and vision Transformers. The proposed
Scale-Aware Modulation (SAM) in the SMT includes two primary novel designs.
Firstly, we introduce the Multi-Head Mixed Convolution (MHMC) module, which can
capture multi-scale features and expand the receptive field. Secondly, we
propose the Scale-Aware Aggregation (SAA) module, which is lightweight but
effective, enabling information fusion across different heads. Together, these
two modules further enhance convolutional modulation. Furthermore,
in contrast to prior works that utilized modulation throughout all stages to
build an attention-free network, we propose an Evolutionary Hybrid Network
(EHN), which can effectively simulate the shift from capturing local to global
dependencies as the network becomes deeper, resulting in superior performance.
Extensive experiments demonstrate that SMT significantly outperforms existing
state-of-the-art models across a wide range of visual tasks. Specifically, SMT
with 11.5M / 2.4GFLOPs and 32M / 7.7GFLOPs can achieve 82.2% and 84.3% top-1
accuracy on ImageNet-1K, respectively. After pretraining on ImageNet-22K at
224^2 resolution, it attains 87.1% and 88.1% top-1 accuracy when finetuned at
224^2 and 384^2 resolution, respectively. For object detection with Mask R-CNN,
the SMT base trained with the 1x and 3x schedules outperforms the Swin
Transformer counterpart by 4.2 and 1.3 mAP on COCO, respectively. For semantic
segmentation with UPerNet, the SMT base tested at single and multi scale
surpasses Swin by 2.0 and 1.1 mIoU respectively on ADE20K.
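Since the abstract describes SAM and the hybrid stage design only in architectural terms, the following PyTorch sketch is one possible reading of it. Everything here is illustrative: the class names, the kernel sizes (3, 5, 7, 9), the pointwise-convolution form of the aggregation, and the stage depths and interleaving pattern are assumptions based on the abstract alone, not the authors' released implementation.

```python
# Illustrative sketch of SAM (MHMC + SAA) and an EHN-style stage layout; not the authors' code.
import torch
import torch.nn as nn


class MultiHeadMixedConv(nn.Module):
    """MHMC: split channels into heads, each with a different depthwise kernel size."""

    def __init__(self, dim, kernel_sizes=(3, 5, 7, 9)):
        super().__init__()
        assert dim % len(kernel_sizes) == 0
        head_dim = dim // len(kernel_sizes)
        self.convs = nn.ModuleList([
            nn.Conv2d(head_dim, head_dim, k, padding=k // 2, groups=head_dim)  # depthwise
            for k in kernel_sizes
        ])

    def forward(self, x):                               # x: (B, C, H, W)
        chunks = torch.chunk(x, len(self.convs), dim=1)
        return torch.cat([conv(c) for conv, c in zip(self.convs, chunks)], dim=1)


class ScaleAwareModulation(nn.Module):
    """SAM: multi-scale MHMC features are fused across heads and modulate a value branch."""

    def __init__(self, dim, num_heads=4):
        super().__init__()
        self.mhmc = MultiHeadMixedConv(dim, kernel_sizes=(3, 5, 7, 9)[:num_heads])
        # SAA approximated by pointwise convolutions that mix channels across heads;
        # the paper describes a lighter-weight aggregation, so treat this as a stand-in.
        self.saa = nn.Sequential(nn.Conv2d(dim, dim, 1), nn.GELU(), nn.Conv2d(dim, dim, 1))
        self.v = nn.Conv2d(dim, dim, 1)                 # value projection to be modulated
        self.proj = nn.Conv2d(dim, dim, 1)

    def forward(self, x):
        return x + self.proj(self.v(x) * self.saa(self.mhmc(x)))


class AttentionBlock(nn.Module):
    """Placeholder global block: multi-head self-attention over flattened tokens."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.norm = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x):                               # x: (B, C, H, W)
        b, c, h, w = x.shape
        t = x.flatten(2).transpose(1, 2)                # (B, HW, C)
        n = self.norm(t)
        t = t + self.attn(n, n, n, need_weights=False)[0]
        return t.transpose(1, 2).reshape(b, c, h, w)


def make_stage(dim, depth, kind):
    """EHN-style stage: 'local' = modulation only, 'hybrid' = interleaved, 'global' = attention only."""
    blocks = []
    for i in range(depth):
        if kind == "local" or (kind == "hybrid" and i % 2 == 0):
            blocks.append(ScaleAwareModulation(dim))
        else:
            blocks.append(AttentionBlock(dim))
    return nn.Sequential(*blocks)


if __name__ == "__main__":
    # Deeper stages shift from local modulation toward global attention
    # (illustrative depths; downsampling between stages is omitted for brevity).
    stages = nn.Sequential(make_stage(64, 2, "local"), make_stage(64, 2, "local"),
                           make_stage(64, 4, "hybrid"), make_stage(64, 2, "global"))
    print(stages(torch.randn(1, 64, 56, 56)).shape)     # torch.Size([1, 64, 56, 56])
```

A real backbone would also include a patch embedding, downsampling between stages, MLP sub-blocks, and normalization; those are omitted so the sketch stays focused on the modulation block and the local-to-global stage progression.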
Related papers
- GroupMamba: Parameter-Efficient and Accurate Group Visual State Space Model [66.35608254724566]
State-space models (SSMs) have showcased effective performance in modeling long-range dependencies with subquadratic complexity.
However, pure SSM-based models still face challenges related to stability and achieving optimal performance on computer vision tasks.
Our paper addresses the challenges of scaling SSM-based models for computer vision, particularly the instability and inefficiency of large model sizes.
arXiv Detail & Related papers (2024-07-18T17:59:58Z) - Mixture-of-Modules: Reinventing Transformers as Dynamic Assemblies of Modules [96.21649779507831]
We propose a novel architecture dubbed mixture-of-modules (MoM).
MoM is motivated by an intuition that any layer, regardless of its position, can be used to compute a token.
We show that MoM provides not only a unified framework for Transformers but also a flexible and learnable approach for reducing redundancy.
arXiv Detail & Related papers (2024-07-09T08:50:18Z) - Reduced Precision Floating-Point Optimization for Deep Neural Network On-Device Learning on MicroControllers [15.37318446043671]
This paper introduces a novel reduced precision optimization technique for On-Device Learning (ODL) primitives on MCU-class devices.
Our approach is more than two orders of magnitude faster than existing ODL software frameworks for single-core MCUs.
arXiv Detail & Related papers (2023-05-30T16:14:16Z) - EATFormer: Improving Vision Transformer Inspired by Evolutionary Algorithm [111.17100512647619]
This paper explains the rationality of Vision Transformer by analogy with the proven practical evolutionary algorithm (EA).
We propose a novel pyramid EATFormer backbone that only contains the proposed EA-based transformer (EAT) block.
Extensive quantitative experiments on image classification and downstream tasks, along with explanatory experiments, demonstrate the effectiveness and superiority of our approach.
arXiv Detail & Related papers (2022-06-19T04:49:35Z) - A Battle of Network Structures: An Empirical Study of CNN, Transformer, and MLP [121.35904748477421]
Convolutional neural networks (CNN) are the dominant deep neural network (DNN) architecture for computer vision.
Transformer and multi-layer perceptron (MLP)-based models, such as Vision Transformer and MLP-Mixer, have started to lead new trends.
In this paper, we conduct empirical studies on these DNN structures and try to understand their respective pros and cons.
arXiv Detail & Related papers (2021-08-30T06:09:02Z) - CMT: Convolutional Neural Networks Meet Vision Transformers [68.10025999594883]
Vision transformers have been successfully applied to image recognition tasks due to their ability to capture long-range dependencies within an image.
There are still gaps in both performance and computational cost between transformers and existing convolutional neural networks (CNNs).
We propose a new transformer based hybrid network by taking advantage of transformers to capture long-range dependencies, and of CNNs to model local features.
In particular, our CMT-S achieves 83.5% top-1 accuracy on ImageNet, while being 14x and 2x smaller on FLOPs than the existing DeiT and EfficientNet, respectively.
arXiv Detail & Related papers (2021-07-13T17:47:19Z) - Simplified Self-Attention for Transformer-based End-to-End Speech Recognition [56.818507476125895]
We propose a simplified self-attention (SSAN) layer which employs an FSMN memory block instead of projection layers to form the query and key vectors.
We evaluate the SSAN-based and the conventional SAN-based transformers on the public AISHELL-1, internal 1000-hour and 20,000-hour large-scale Mandarin tasks.
arXiv Detail & Related papers (2020-05-21T04:55:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.