Limited Data, Unlimited Potential: A Study on ViTs Augmented by Masked
Autoencoders
- URL: http://arxiv.org/abs/2310.20704v2
- Date: Wed, 27 Dec 2023 07:28:57 GMT
- Title: Limited Data, Unlimited Potential: A Study on ViTs Augmented by Masked
Autoencoders
- Authors: Srijan Das, Tanmay Jain, Dominick Reilly, Pranav Balaji, Soumyajit
Karmakar, Shyam Marjit, Xiang Li, Abhijit Das, and Michael S. Ryoo
- Abstract summary: Vision Transformers (ViTs) have become ubiquitous in computer vision.
ViTs lack inductive biases, which can make it difficult to train them with limited data.
We propose a technique that enables ViTs to leverage the unique characteristics of both the self-supervised and primary tasks.
- Score: 32.2455570714414
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Vision Transformers (ViTs) have become ubiquitous in computer vision. Despite
their success, ViTs lack inductive biases, which can make it difficult to train
them with limited data. To address this challenge, prior studies suggest
training ViTs with self-supervised learning (SSL) and fine-tuning sequentially.
However, we observe that jointly optimizing ViTs for the primary task and a
Self-Supervised Auxiliary Task (SSAT) is surprisingly beneficial when the
amount of training data is limited. We explore the appropriate SSL tasks that
can be optimized alongside the primary task, the training schemes for these
tasks, and the data scale at which they can be most effective. Our findings
reveal that SSAT is a powerful technique that enables ViTs to leverage the
unique characteristics of both the self-supervised and primary tasks, achieving
better performance than the typical approach of pre-training ViTs with SSL and then
fine-tuning them sequentially. Our experiments, conducted on 10 datasets, demonstrate
that SSAT significantly improves ViT performance while reducing the carbon footprint
of training. We also
confirm the effectiveness of SSAT in the video domain for deepfake detection,
showcasing its generalizability. Our code is available at
https://github.com/dominickrei/Limited-data-vits.
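The core idea of SSAT, jointly optimizing a primary loss and a self-supervised auxiliary loss on a shared ViT encoder, can be illustrated with a short sketch. The following is a minimal PyTorch example, assuming a masked-patch-reconstruction auxiliary task and an illustrative loss weight lambda_ssl; the names, sizes, and structure (TinyViT, ssat_step) are hypothetical and not taken from the authors' repository.

```python
# Minimal sketch of SSAT-style joint training: primary classification loss plus a
# masked-patch-reconstruction auxiliary loss on a shared ViT encoder.
# All module names, sizes, and the weight lambda_ssl are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyViT(nn.Module):
    """Toy ViT encoder with a classification head and a lightweight reconstruction decoder."""

    def __init__(self, img_size=32, patch=4, dim=192, depth=4, heads=3, num_classes=10):
        super().__init__()
        self.patch = patch
        self.num_patches = (img_size // patch) ** 2
        self.embed = nn.Conv2d(3, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.num_patches, dim))
        block = nn.TransformerEncoderLayer(dim, heads, dim * 4, batch_first=True)
        self.encoder = nn.TransformerEncoder(block, depth)
        self.cls_head = nn.Linear(dim, num_classes)        # primary-task head
        self.mask_token = nn.Parameter(torch.zeros(1, 1, dim))
        self.decoder = nn.Linear(dim, patch * patch * 3)   # per-patch pixel reconstruction

    def forward(self, x, mask_ratio=0.75):
        tok = self.embed(x).flatten(2).transpose(1, 2) + self.pos  # (B, N, dim)
        B, N, D = tok.shape
        # Randomly mask patch tokens and replace them with a learnable mask token.
        mask = torch.rand(B, N, device=x.device) < mask_ratio
        tok_in = torch.where(mask.unsqueeze(-1), self.mask_token.expand(B, N, D), tok)
        feat = self.encoder(tok_in)
        logits = self.cls_head(feat.mean(dim=1))           # primary prediction
        recon = self.decoder(feat)                         # auxiliary reconstruction
        return logits, recon, mask

    def patchify(self, x):
        """Flatten an image into per-patch pixel targets, shape (B, N, patch*patch*3)."""
        p = self.patch
        B, C, H, W = x.shape
        x = x.reshape(B, C, H // p, p, W // p, p)
        return x.permute(0, 2, 4, 3, 5, 1).reshape(B, -1, p * p * C)


def ssat_step(model, images, labels, optimizer, lambda_ssl=1.0):
    """One joint step: both losses flow into the shared encoder in a single backward pass."""
    logits, recon, mask = model(images)
    loss_primary = F.cross_entropy(logits, labels)
    target = model.patchify(images)
    per_patch = ((recon - target) ** 2).mean(dim=-1)       # per-patch MSE
    loss_ssl = (per_patch * mask).sum() / mask.sum().clamp(min=1)  # masked patches only
    loss = loss_primary + lambda_ssl * loss_ssl
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss_primary.item(), loss_ssl.item()


if __name__ == "__main__":
    model = TinyViT()
    opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
    x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
    print(ssat_step(model, x, y, opt))
```

In this sketch the masked tokens are replaced with a learnable mask token (a SimMIM-style simplification rather than MAE's encoder-side token dropping); the essential point is that a single optimization step updates the shared encoder with gradients from both the primary and the self-supervised objective.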
Related papers
- Exploring Self-Supervised Vision Transformers for Deepfake Detection: A Comparative Analysis [38.074487843137064]
This paper investigates the effectiveness of self-supervised pre-trained vision transformers (ViTs) compared to supervised pre-trained ViTs and conventional neural networks (ConvNets) for detecting facial deepfake images and videos.
It examines their potential for improved generalization and explainability, especially with limited training data.
By leveraging SSL ViTs with modest data and partial fine-tuning, we find that they adapt comparably well to deepfake detection and offer explainability via the attention mechanism.
arXiv Detail & Related papers (2024-05-01T07:16:49Z)
- An Experimental Study on Exploring Strong Lightweight Vision Transformers via Masked Image Modeling Pre-Training [51.622652121580394]
Masked image modeling (MIM) pre-training for large-scale vision transformers (ViTs) has enabled promising downstream performance on top of the learned self-supervised ViT features.
In this paper, we question if the extremely simple lightweight ViTs' fine-tuning performance can also benefit from this pre-training paradigm.
Our pre-training with distillation on pure lightweight ViTs with vanilla/hierarchical design (5.7M/6.5M parameters) can achieve 79.4%/78.9% top-1 accuracy on ImageNet-1K.
arXiv Detail & Related papers (2024-04-18T14:14:44Z)
- DeiT-LT Distillation Strikes Back for Vision Transformer Training on Long-Tailed Datasets [30.178427266135756]
Vision Transformer (ViT) has emerged as a prominent architecture for various computer vision tasks.
ViT requires a large amount of data for pre-training.
We introduce DeiT-LT to tackle the problem of training ViTs from scratch on long-tailed datasets.
arXiv Detail & Related papers (2024-04-03T17:58:21Z)
- Exploring Efficient Few-shot Adaptation for Vision Transformers [70.91692521825405]
We propose a novel efficient Transformer Tuning (eTT) method that facilitates finetuning ViTs in the Few-shot Learning tasks.
Key novelties come from the newly presented Attentive Prefix Tuning (APT) and Domain Residual Adapter (DRA).
We conduct extensive experiments to show the efficacy of our model.
arXiv Detail & Related papers (2023-01-06T08:42:05Z)
- Where are my Neighbors? Exploiting Patches Relations in Self-Supervised Vision Transformer [3.158346511479111]
We propose a simple yet effective self-supervised learning (SSL) strategy to train Vision Transformers (ViTs).
We define a set of SSL tasks based on relations of image patches that the model has to solve before or jointly during the downstream training.
Our RelViT model optimizes all the output tokens of the transformer encoder that are related to the image patches, thus exploiting more training signal at each training step.
arXiv Detail & Related papers (2022-06-01T13:25:32Z)
- DeiT III: Revenge of the ViT [56.46810490275699]
A Vision Transformer (ViT) is a simple neural architecture amenable to serve several computer vision tasks.
Recent works show that ViTs benefit from self-supervised pre-training, in particular BERT-like pre-training such as BEiT.
arXiv Detail & Related papers (2022-04-14T17:13:44Z)
- Meta-attention for ViT-backed Continual Learning [35.31816553097367]
Vision transformers (ViTs) are gradually dominating the field of computer vision.
ViTs can suffer from severe performance degradation if straightforwardly applied to CNN-based continual learning.
We propose MEta-ATtention (MEAT) to adapt a pre-trained ViT to new tasks without sacrificing performance on already learned tasks.
arXiv Detail & Related papers (2022-03-22T12:58:39Z)
- Self-Promoted Supervision for Few-Shot Transformer [178.52948452353834]
Self-promoted sUpervisioN (SUN) is a few-shot learning framework for vision transformers (ViTs).
SUN pretrains the ViT on the few-shot learning dataset and then uses it to generate individual location-specific supervision for guiding each patch token.
Experiments show that SUN using ViTs significantly surpasses other few-shot learning frameworks with ViTs and is the first to achieve higher performance than CNN-based state-of-the-art methods.
arXiv Detail & Related papers (2022-03-14T12:53:27Z)
- Auto-scaling Vision Transformers without Training [84.34662535276898]
We propose As-ViT, an auto-scaling framework for Vision Transformers (ViTs) without training.
As-ViT automatically discovers and scales up ViTs in an efficient and principled manner.
As a unified framework, As-ViT achieves strong performance on classification and detection.
arXiv Detail & Related papers (2022-02-24T06:30:55Z)
- Self-slimmed Vision Transformer [52.67243496139175]
Vision transformers (ViTs) have become popular architectures and have outperformed convolutional neural networks (CNNs) on various vision tasks.
We propose a generic self-slimmed learning approach for vanilla ViTs, namely SiT.
Specifically, we first design a novel Token Slimming Module (TSM), which can boost the inference efficiency of ViTs.
arXiv Detail & Related papers (2021-11-24T16:48:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.