Convolutional Embedding Makes Hierarchical Vision Transformer Stronger
- URL: http://arxiv.org/abs/2207.13317v1
- Date: Wed, 27 Jul 2022 06:36:36 GMT
- Title: Convolutional Embedding Makes Hierarchical Vision Transformer Stronger
- Authors: Cong Wang, Hongmin Xu, Xiong Zhang, Li Wang, Zhitong Zheng, and
Haifeng Liu
- Abstract summary: Vision Transformers (ViTs) have recently dominated a range of computer vision tasks, yet they suffer from low training data efficiency and inferior local semantic representation capability without appropriate inductive bias.
CNNs inherently capture region-aware semantics, inspiring researchers to introduce CNNs back into the architecture of ViTs to provide desirable inductive bias.
In this paper, we explore how the macro architecture of hybrid CNNs/ViTs enhances the performance of hierarchical ViTs.
- Score: 16.72943631060293
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Vision Transformers (ViTs) have recently dominated a range of computer vision
tasks, yet they suffer from low training data efficiency and inferior local
semantic representation capability without appropriate inductive bias.
Convolutional neural networks (CNNs) inherently capture region-aware
semantics, inspiring researchers to introduce CNNs back into the architecture
of ViTs to provide desirable inductive bias for ViTs. However, is the
locality achieved by the micro-level CNNs embedded in ViTs good enough? In this
paper, we investigate the problem by thoroughly exploring how the macro
architecture of hybrid CNNs/ViTs enhances the performance of hierarchical
ViTs. In particular, we study the role of token embedding layers, also known as
convolutional embedding (CE), and systematically reveal how CE injects desirable
inductive bias into ViTs. In addition, we apply the optimal CE configuration to four
recently released state-of-the-art ViTs, effectively boosting their
performance. Finally, a family of efficient hybrid CNNs/ViTs, dubbed CETNets,
is released, which may serve as generic vision backbones. Specifically,
CETNets achieve 84.9% Top-1 accuracy on ImageNet-1K (training from scratch),
48.6% box mAP on the COCO benchmark, and 51.6% mIoU on ADE20K,
substantially improving the performance of the corresponding state-of-the-art
baselines.
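The abstract's central idea, convolutional embedding (CE), uses the token embedding layer of each hierarchical stage to inject locality, typically by replacing a single large-stride patch-projection layer with a small stack of strided convolutions. The exact CE configuration is the paper's contribution and is not specified in the abstract; the PyTorch sketch below only illustrates the general pattern, with the layer count, channel width, and normalization chosen as assumptions.

```python
# Minimal sketch of a convolutional embedding (CE) stem for a hierarchical ViT:
# stacked strided 3x3 convolutions downsample the image and emit a token
# sequence, instead of a single large-stride patch-projection convolution.
# Layer count, channel widths, and normalization are assumptions, not the
# paper's actual CE configuration.
import torch
import torch.nn as nn


class ConvEmbedding(nn.Module):
    """Downsample an image 4x and produce (B, N, C) tokens for an attention stage."""

    def __init__(self, in_chans: int = 3, embed_dim: int = 96):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(in_chans, embed_dim // 2, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim // 2),
            nn.GELU(),
            nn.Conv2d(embed_dim // 2, embed_dim, kernel_size=3, stride=2, padding=1),
            nn.BatchNorm2d(embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.stem(x)                     # (B, C, H/4, W/4) feature map
        return x.flatten(2).transpose(1, 2)  # (B, N, C) token sequence


tokens = ConvEmbedding()(torch.randn(2, 3, 224, 224))
print(tokens.shape)  # torch.Size([2, 3136, 96])
```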
Related papers
- RepNeXt: A Fast Multi-Scale CNN using Structural Reparameterization [8.346566205092433]
Lightweight Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs) are favored for their parameter efficiency and low latency.
This study investigates the complementary advantages of CNNs and ViTs to develop a versatile vision backbone tailored for resource-constrained applications.
arXiv Detail & Related papers (2024-06-23T04:11:12Z)
- Structured Initialization for Attention in Vision Transformers [34.374054040300805]
Convolutional neural networks (CNNs) have an architectural inductive bias enabling them to perform well on small-scale problems.
We argue that the architectural bias inherent to CNNs can be reinterpreted as an initialization bias within ViT.
This insight is significant as it empowers ViTs to perform equally well on small-scale problems while maintaining their flexibility for large-scale applications.
arXiv Detail & Related papers (2024-04-01T14:34:47Z)
- OA-CNNs: Omni-Adaptive Sparse CNNs for 3D Semantic Segmentation [70.17681136234202]
We reexamine the design distinctions and test the limits of what a sparse CNN can achieve.
We propose two key components, i.e., adaptive receptive fields (spatially) and adaptive relation, to bridge the gap.
This exploration led to the creation of Omni-Adaptive 3D CNNs (OA-CNNs), a family of networks that integrates a lightweight module.
arXiv Detail & Related papers (2024-03-21T14:06:38Z)
- RepViT: Revisiting Mobile CNN From ViT Perspective [67.05569159984691]
Lightweight Vision Transformers (ViTs) demonstrate superior performance and lower latency compared with lightweight Convolutional Neural Networks (CNNs).
In this study, we revisit the efficient design of lightweight CNNs from a ViT perspective and emphasize their promising prospects for mobile devices.
arXiv Detail & Related papers (2023-07-18T14:24:33Z)
- Next-ViT: Next Generation Vision Transformer for Efficient Deployment in Realistic Industrial Scenarios [19.94294348122248]
Most Vision Transformers (ViTs) cannot perform as efficiently as convolutional neural networks (CNNs) in realistic industrial deployment scenarios.
We propose a next-generation Vision Transformer for efficient deployment in realistic industrial scenarios, namely Next-ViT.
Next-ViT dominates both CNNs and ViTs from the perspective of the latency/accuracy trade-off.
arXiv Detail & Related papers (2022-07-12T12:50:34Z)
- EdgeViTs: Competing Light-weight CNNs on Mobile Devices with Vision Transformers [88.52500757894119]
Self-attention based vision transformers (ViTs) have emerged as a very competitive architecture alternative to convolutional neural networks (CNNs) in computer vision.
We introduce EdgeViTs, a new family of light-weight ViTs that, for the first time, enable attention-based vision models to compete with the best light-weight CNNs.
arXiv Detail & Related papers (2022-05-06T18:17:19Z)
- Bootstrapping ViTs: Towards Liberating Vision Transformers from Pre-training [29.20567759071523]
Vision Transformers (ViTs) are developing rapidly and starting to challenge the domination of convolutional neural networks (CNNs) in computer vision.
This paper introduces CNNs' inductive biases back into ViTs while preserving their network architectures, aiming for a higher performance upper bound.
Experiments on CIFAR-10/100 and ImageNet-1k with limited training data have shown encouraging results.
arXiv Detail & Related papers (2021-12-07T07:56:50Z)
- Self-slimmed Vision Transformer [52.67243496139175]
Vision transformers (ViTs) have become popular architectures and outperform convolutional neural networks (CNNs) on various vision tasks.
We propose a generic self-slimmed learning approach for vanilla ViTs, namely SiT.
Specifically, we first design a novel Token Slimming Module (TSM), which can boost the inference efficiency of ViTs.
arXiv Detail & Related papers (2021-11-24T16:48:57Z)
- VOLO: Vision Outlooker for Visual Recognition [148.12522298731807]
Vision transformers (ViTs) have shown the great potential of self-attention-based models in ImageNet classification.
We introduce a novel outlook attention and present a simple and general architecture, termed Vision Outlooker (VOLO).
Unlike self-attention that focuses on global dependency modeling at a coarse level, the outlook attention efficiently encodes finer-level features and contexts into tokens.
Experiments show that our VOLO achieves 87.1% top-1 accuracy on ImageNet-1K classification, which is the first model exceeding 87% accuracy on this competitive benchmark.
arXiv Detail & Related papers (2021-06-24T15:46:54Z)
- Emerging Properties in Self-Supervised Vision Transformers [57.36837447500544]
We show that self-supervised learning provides Vision Transformers (ViTs) with new properties that stand out compared to convolutional networks (convnets).
We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels.
We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.
arXiv Detail & Related papers (2021-04-29T12:28:51Z)
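The last entry above (DINO) describes self-supervised training as self-distillation with no labels: a student network matches the output of a momentum (EMA) teacher on different views of the same image. The PyTorch sketch below only illustrates that idea; the temperatures, centering, and momentum value are assumptions rather than the paper's settings.

```python
# Minimal sketch of label-free self-distillation in the spirit of DINO.
# Hyperparameters (temperatures, centering, EMA momentum) are illustrative
# assumptions, not the paper's actual settings.
import torch
import torch.nn.functional as F


def self_distillation_loss(student_logits, teacher_logits, center,
                           t_student=0.1, t_teacher=0.04):
    # Teacher targets: centered, sharpened with a low temperature, no gradient.
    targets = F.softmax((teacher_logits - center) / t_teacher, dim=-1).detach()
    log_probs = F.log_softmax(student_logits / t_student, dim=-1)
    # Cross-entropy between the teacher and student output distributions.
    return -(targets * log_probs).sum(dim=-1).mean()


@torch.no_grad()
def ema_update(student: torch.nn.Module, teacher: torch.nn.Module, momentum=0.996):
    # The teacher is an exponential moving average of the student; it receives
    # no gradients and is updated after each optimizer step.
    for ps, pt in zip(student.parameters(), teacher.parameters()):
        pt.mul_(momentum).add_(ps, alpha=1.0 - momentum)
```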