Centroid-centered Modeling for Efficient Vision Transformer Pre-training
- URL: http://arxiv.org/abs/2303.04664v1
- Date: Wed, 8 Mar 2023 15:34:57 GMT
- Title: Centroid-centered Modeling for Efficient Vision Transformer Pre-training
- Authors: Xin Yan, Zuchao Li, Lefei Zhang, Bo Du, and Dacheng Tao
- Abstract summary: Masked Image Modeling (MIM) is a new self-supervised vision pre-training paradigm using the Vision Transformer (ViT).
Our proposed approach, CCViT, leverages k-means clustering to obtain centroids for image modeling without supervised training of a tokenizer model.
Experiments show that a ViT-B model pre-trained for only 300 epochs achieves 84.3% top-1 accuracy on ImageNet-1K classification and 51.6% mIoU on ADE20K semantic segmentation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Masked Image Modeling (MIM) is a new self-supervised vision pre-training
paradigm using the Vision Transformer (ViT). Previous works are either pixel-based or
token-based, reconstructing original pixels or predicting discrete visual tokens from
parametric tokenizer models, respectively. Our proposed approach, CCViT, leverages
k-means clustering to obtain centroids for image modeling without supervised training
of a tokenizer model. The centroids serve both as representations of patch pixels and
as index tokens, and have the property of local invariance. The non-parametric
centroid tokenizer takes only seconds to create and is faster at token inference.
Specifically, we adopt patch masking and centroid replacement strategies to construct
corrupted inputs, and two stacked encoder blocks to predict corrupted patch tokens and
reconstruct original patch pixels. Experiments show that a ViT-B model pre-trained for
only 300 epochs achieves 84.3% top-1 accuracy on ImageNet-1K classification and 51.6%
mIoU on ADE20K semantic segmentation. Our approach achieves results competitive with
BEiTv2 without distillation from other models, and outperforms other methods such as
MAE.
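To make the centroid tokenizer concrete, here is a minimal sketch built on scikit-learn's k-means; the patch size, the deliberately small vocabulary size, and the random stand-in images are assumptions of this sketch, not settings from the paper:

```python
import numpy as np
from sklearn.cluster import KMeans

PATCH = 16   # patch side length in pixels (assumption)
VOCAB = 512  # centroid vocabulary size, kept small for the sketch (assumption)

def extract_patches(images):
    """Flatten (N, H, W, C) images into rows of PATCH*PATCH*C pixel values."""
    n, h, w, c = images.shape
    p = images.reshape(n, h // PATCH, PATCH, w // PATCH, PATCH, c)
    return p.transpose(0, 1, 3, 2, 4, 5).reshape(-1, PATCH * PATCH * c)

# "Training" the tokenizer is a single k-means run over sampled patches;
# no network weights and no supervised labels are involved.
images = np.random.rand(64, 224, 224, 3).astype(np.float32)  # stand-in data
tokenizer = KMeans(n_clusters=VOCAB, n_init=1, max_iter=25).fit(extract_patches(images))

# Token inference: a patch's id is its nearest centroid, and the centroid
# vector itself doubles as a pixel-level target for that id.
token_ids = tokenizer.predict(extract_patches(images))        # (num_patches,)
centroid_pixels = tokenizer.cluster_centers_[token_ids]       # (num_patches, 768)
```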
Related papers
- Bridging The Gaps Between Token Pruning and Full Pre-training via Masked Fine-tuning
Dynamic vision transformers accelerate inference by pruning redundant tokens.
Current base models usually adopt full-image training, taking full images as inputs and keeping the whole feature maps through the forward process.
Inspired by MAE, which performs a masking-and-reconstruction self-supervised task, we devise masked fine-tuning to bridge the gaps between pre-trained base models and token-pruning-based dynamic vision transformers.
arXiv Detail & Related papers (2023-10-26T06:03:18Z)
- Denoising Masked AutoEncoders are Certifiable Robust Vision Learners
We propose a new self-supervised method called Denoising Masked AutoEncoders (DMAE).
DMAE corrupts each image by adding Gaussian noise to every pixel value and randomly masking several patches.
A Transformer-based encoder-decoder model is then trained to reconstruct the original image from the corrupted one.
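A hedged sketch of that two-part corruption; the noise scale, mask ratio, and zeroing-out of masked patches are illustrative assumptions, not the paper's settings:

```python
import torch

def dmae_corrupt(images, patch=16, sigma=0.25, mask_ratio=0.5):
    """DMAE-style corruption sketch: Gaussian noise on every pixel plus
    random patch masking. images: (N, C, H, W); sigma and mask_ratio
    are illustrative assumptions."""
    noisy = images + sigma * torch.randn_like(images)            # additive Gaussian noise
    n, c, h, w = images.shape
    keep = torch.rand(n, h // patch, w // patch) > mask_ratio    # True = patch survives
    mask = keep.repeat_interleave(patch, dim=1).repeat_interleave(patch, dim=2)
    return noisy * mask.unsqueeze(1)                             # zero out masked patches
```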
arXiv Detail & Related papers (2022-10-10T12:37:59Z)
- ClusTR: Exploring Efficient Self-attention via Clustering for Vision Transformers
We propose a content-based sparse attention method, as an alternative to dense self-attention.
Specifically, we cluster and then aggregate key and value tokens, as a content-based method of reducing the total token count.
The resulting clustered-token sequence retains the semantic diversity of the original signal, but can be processed at a lower computational cost.
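A rough single-head sketch of the idea; plain k-means over the keys, mean-pooled aggregation, and the cluster count are assumptions of this sketch, not ClusTR's exact procedure:

```python
import torch
import torch.nn.functional as F

def clustered_attention(q, k, v, num_clusters=64, iters=5):
    """Single-head sketch: attend over cluster summaries of (key, value)
    instead of every token. q, k, v: (tokens, dim); settings are illustrative."""
    # Plain k-means over the keys (a stand-in for the paper's clustering step).
    centers = k[torch.randperm(k.size(0))[:num_clusters]].clone()
    for _ in range(iters):
        assign = torch.cdist(k, centers).argmin(dim=1)   # nearest centroid per key
        for c in range(num_clusters):
            members = assign == c
            if members.any():
                centers[c] = k[members].mean(dim=0)
    assign = torch.cdist(k, centers).argmin(dim=1)
    # Aggregate keys and values per cluster, then run dense attention
    # over the much shorter clustered sequence.
    k_c = centers                                        # (clusters, dim)
    v_c = torch.stack([v[assign == c].mean(dim=0) if (assign == c).any() else v.mean(dim=0)
                       for c in range(num_clusters)])    # (clusters, dim)
    attn = F.softmax(q @ k_c.T / q.size(-1) ** 0.5, dim=-1)  # (tokens, clusters)
    return attn @ v_c                                    # (tokens, dim)
```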
arXiv Detail & Related papers (2022-08-28T04:18:27Z)
- BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers
We propose to use a semantic-rich visual tokenizer as the reconstruction target for masked prediction.
We then pretrain vision Transformers by predicting the original visual tokens for the masked image patches.
Experiments on image classification and semantic segmentation show that our approach outperforms all compared MIM methods.
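For a tokenizer-based MIM method like this, the pre-training objective amounts to classifying each masked patch into the tokenizer's vocabulary; a minimal sketch, with tensor names and shapes as assumptions:

```python
import torch.nn.functional as F

def mim_token_loss(logits, target_ids, mask):
    """Cross-entropy over tokenizer ids at masked positions only.
    logits: (B, N, vocab) predictions from the ViT; target_ids: (B, N) long
    ids from the frozen visual tokenizer; mask: (B, N) bool, True if masked."""
    return F.cross_entropy(logits[mask], target_ids[mask])
```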
arXiv Detail & Related papers (2022-08-12T16:48:10Z)
- mc-BEiT: Multi-choice Discretization for Image BERT Pre-training
Image BERT pre-training with masked image modeling (MIM) is a popular approach to self-supervised representation learning.
We introduce an improved BERT-style image pre-training method, namely mc-BEiT, which performs MIM proxy tasks with eased and refined multi-choice training objectives.
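A sketch of how such a multi-choice objective can relax the single hard token id into a distribution over the vocabulary; the temperature and the way soft labels are formed here are assumptions, not mc-BEiT's exact recipe:

```python
import torch.nn.functional as F

def multi_choice_loss(logits, tokenizer_logits, mask, tau=1.0):
    """Soft cross-entropy against a distribution over visual tokens rather
    than a single 'correct' id. tokenizer_logits: (B, N, vocab) scores from
    the tokenizer; tau is an illustrative temperature."""
    soft_targets = F.softmax(tokenizer_logits[mask] / tau, dim=-1)  # (M, vocab)
    log_probs = F.log_softmax(logits[mask], dim=-1)                 # (M, vocab)
    return -(soft_targets * log_probs).sum(dim=-1).mean()
```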
arXiv Detail & Related papers (2022-03-29T09:08:18Z)
- Corrupted Image Modeling for Self-Supervised Visual Pre-Training
We introduce Corrupted Image Modeling (CIM) for self-supervised visual pre-training.
Instead of using artificial mask tokens, CIM uses a small trainable BEiT as an auxiliary generator to corrupt the input image.
An enhancer network then learns to recover the original image from the corrupted input; after pre-training, the enhancer can be used as a high-capacity visual encoder for downstream tasks.
arXiv Detail & Related papers (2022-02-07T17:59:04Z)
- Masked Autoencoders Are Scalable Vision Learners
Masked autoencoders (MAE) are scalable self-supervised learners for computer vision.
Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.
MAE couples two core designs: an asymmetric encoder-decoder architecture whose encoder operates only on the visible patches, and a high masking ratio (e.g., 75%); together they enable training large models efficiently and effectively.
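A compact sketch of MAE-style random masking and the masked-pixel loss; the 75% ratio is the default reported in the MAE paper, while the function and tensor names are placeholders:

```python
import torch

def random_masking(patches, mask_ratio=0.75):
    """Keep a random subset of patch tokens; the encoder sees only these.
    patches: (B, N, D). Returns visible tokens and a 0/1 mask (1 = masked)."""
    b, n, d = patches.shape
    n_keep = int(n * (1 - mask_ratio))
    ids_shuffle = torch.rand(b, n).argsort(dim=1)    # random permutation per image
    ids_keep = ids_shuffle[:, :n_keep]
    visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
    mask = torch.ones(b, n)
    mask.scatter_(1, ids_keep, 0.0)                  # 0 = visible, 1 = masked
    return visible, mask

def mae_loss(pred, target, mask):
    """Mean squared error computed on masked patches only."""
    per_patch = ((pred - target) ** 2).mean(dim=-1)  # (B, N)
    return (per_patch * mask).sum() / mask.sum()
```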
arXiv Detail & Related papers (2021-11-11T18:46:40Z)