Escaping the Big Data Paradigm with Compact Transformers
- URL: http://arxiv.org/abs/2104.05704v1
- Date: Mon, 12 Apr 2021 17:58:56 GMT
- Title: Escaping the Big Data Paradigm with Compact Transformers
- Authors: Ali Hassani, Steven Walton, Nikhil Shah, Abulikemu Abuduweili, Jiachen
Li, Humphrey Shi
- Abstract summary: We show for the first time that with the right size and tokenization, transformers can perform head-to-head with state-of-the-art CNNs on small datasets.
Our method is flexible in terms of model size, and can have as few as 0.28M parameters and still achieve reasonable results.
- Score: 7.697698018200631
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rise of Transformers as the standard for language processing, and
their advancements in computer vision, along with their unprecedented size and
amounts of training data, many have come to believe that they are not suitable
for small sets of data. This trend leads to great concerns, including but not
limited to: limited availability of data in certain scientific domains and the
exclusion of those with limited resources from research in the field. In this
paper, we dispel the myth that transformers are "data hungry" and therefore can
only be applied to large sets of data. We show for the first time that with the
right size and tokenization, transformers can perform head-to-head with
state-of-the-art CNNs on small datasets. Our model eliminates the requirement
for a class token and positional embeddings through a novel sequence pooling
strategy and the use of convolutions. We show that compared to CNNs, our
compact transformers have fewer parameters and MACs, while obtaining similar
accuracies. Our method is flexible in terms of model size, and can have as
few as 0.28M parameters and achieve reasonable results. It can reach an
accuracy of 94.72% when training from scratch on CIFAR-10, which is comparable
with modern CNN-based approaches, and a significant improvement over previous
Transformer based models. Our simple and compact design democratizes
transformers by making them accessible to those equipped with basic computing
resources and/or dealing with important small datasets. Our code and
pre-trained models will be made publicly available at
https://github.com/SHI-Labs/Compact-Transformers.
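To illustrate the sequence pooling idea described in the abstract, below is a minimal PyTorch sketch of an attention-weighted pooling module that stands in for the class token: a single linear layer scores each output token, the scores are softmaxed over the sequence, and the tokens are averaged with those weights. The module, variable names, and dimensions here are our own illustrative assumptions, not the authors' code; see the linked repository for the reference implementation.

```python
# Minimal sketch of sequence pooling ("SeqPool"): an attention-weighted
# average over the encoder's output tokens replaces the class token.
# Names and dimensions are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SeqPool(nn.Module):
    def __init__(self, embed_dim: int):
        super().__init__()
        # One linear layer scores each token in the sequence.
        self.attention = nn.Linear(embed_dim, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, embed_dim) -- transformer encoder output.
        weights = F.softmax(self.attention(x), dim=1)       # (batch, seq_len, 1)
        pooled = torch.matmul(weights.transpose(1, 2), x)   # (batch, 1, embed_dim)
        return pooled.squeeze(1)                             # (batch, embed_dim)

# Usage: the pooled features feed a plain linear classifier head.
pool = SeqPool(embed_dim=256)
tokens = torch.randn(8, 64, 256)            # e.g. 8 images, 64 tokens each
logits = nn.Linear(256, 10)(pool(tokens))   # 10-way head, CIFAR-10-style
```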
Related papers
- Emergent Agentic Transformer from Chain of Hindsight Experience [96.56164427726203]
We show, for the first time, that a simple transformer-based model performs competitively with both temporal-difference and imitation-learning-based approaches.
arXiv Detail & Related papers (2023-05-26T00:43:02Z)
- Masked autoencoders are effective solution to transformer data-hungry [0.0]
Vision Transformers (ViTs) outperform convolutional neural networks (CNNs) on several vision tasks thanks to their global modeling capabilities.
However, ViT lacks the inductive bias inherent to convolution, and therefore requires a large amount of training data.
Masked autoencoders (MAE) can make the transformer focus more on the image itself.
arXiv Detail & Related papers (2022-12-12T03:15:19Z)
- Explicitly Increasing Input Information Density for Vision Transformers on Small Datasets [26.257612622358614]
Vision Transformers have attracted a lot of attention recently, following the successful application of the Vision Transformer (ViT) to vision tasks.
This paper proposes to explicitly increase the input information density in the frequency domain.
Experiments demonstrate the effectiveness of the proposed approach on five small-scale datasets.
arXiv Detail & Related papers (2022-10-25T20:24:53Z)
- How to Train Vision Transformer on Small-scale Datasets? [4.56717163175988]
In contrast to convolutional neural networks, Vision Transformer lacks inherent inductive biases.
We show that self-supervised inductive biases can be learned directly from small-scale datasets.
This allows these models to be trained without large-scale pre-training, changes to the model architecture, or modified loss functions.
arXiv Detail & Related papers (2022-10-13T17:59:19Z)
- Towards Data-Efficient Detection Transformers [77.43470797296906]
We show that most detection transformers suffer significant performance drops on small datasets.
We empirically analyze the factors that affect data efficiency, through a step-by-step transition from a data-efficient RCNN variant to the representative DETR.
We introduce a simple yet effective label augmentation method to provide richer supervision and improve data efficiency.
arXiv Detail & Related papers (2022-03-17T17:56:34Z)
- Investigating Transfer Learning Capabilities of Vision Transformers and CNNs by Fine-Tuning a Single Trainable Block [0.0]
Transformer-based architectures are surpassing the state of the art set by CNN architectures in accuracy, but are computationally very expensive to train from scratch.
We study their transfer learning capabilities and compare them with CNNs to understand which architecture is better suited to real-world problems with small data.
We find that transformer-based architectures not only achieve higher accuracy than CNNs, but some do so with roughly four times fewer parameters.
arXiv Detail & Related papers (2021-10-11T13:43:03Z)
- CMT: Convolutional Neural Networks Meet Vision Transformers [68.10025999594883]
Vision transformers have been successfully applied to image recognition tasks due to their ability to capture long-range dependencies within an image.
There are still gaps in both performance and computational cost between transformers and existing convolutional neural networks (CNNs).
We propose a new transformer-based hybrid network that takes advantage of transformers to capture long-range dependencies and of CNNs to model local features.
In particular, our CMT-S achieves 83.5% top-1 accuracy on ImageNet, while being 14x and 2x smaller on FLOPs than the existing DeiT and EfficientNet, respectively.
arXiv Detail & Related papers (2021-07-13T17:47:19Z)
- Visformer: The Vision-friendly Transformer [105.52122194322592]
We propose a new architecture named Visformer, abbreviated from 'Vision-friendly Transformer'.
With the same computational complexity, Visformer outperforms both the Transformer-based and convolution-based models in terms of ImageNet classification accuracy.
arXiv Detail & Related papers (2021-04-26T13:13:03Z)
- Token Labeling: Training a 85.4% Top-1 Accuracy Vision Transformer with 56M Parameters on ImageNet [86.95679590801494]
We explore the potential of vision transformers in ImageNet classification by developing a bag of training techniques.
We show that by slightly tuning the structure of vision transformers and introducing token labeling, our models are able to achieve better results than their CNN counterparts.
arXiv Detail & Related papers (2021-04-22T04:43:06Z)
- Tiny Transformers for Environmental Sound Classification at the Edge [0.6193838300896449]
This work presents training techniques for audio models in the field of environmental sound classification at the edge.
Specifically, we design and train Transformers to classify office sounds in audio clips.
Results show that a BERT-based Transformer, trained on Mel spectrograms, can outperform a CNN using 99.85% fewer parameters.
arXiv Detail & Related papers (2021-03-22T20:12:15Z)
- Train Large, Then Compress: Rethinking Model Size for Efficient Training and Inference of Transformers [94.43313684188819]
We study the impact of model size in this setting, focusing on Transformer models for NLP tasks that are limited by compute.
We first show that even though smaller Transformer models execute faster per iteration, wider and deeper models converge in significantly fewer steps.
This leads to an apparent trade-off between the training efficiency of large Transformer models and the inference efficiency of small Transformer models.
arXiv Detail & Related papers (2020-02-26T21:17:13Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.