MILAN: Masked Image Pretraining on Language Assisted Representation
- URL: http://arxiv.org/abs/2208.06049v2
- Date: Mon, 15 Aug 2022 17:37:56 GMT
- Title: MILAN: Masked Image Pretraining on Language Assisted Representation
- Authors: Zejiang Hou, Fei Sun, Yen-Kuang Chen, Yuan Xie, Sun-Yuan Kung
- Abstract summary: In this work, we propose masked image pretraining on language assisted representation, dubbed MILAN.
Instead of predicting raw pixels or low-level features, our pretraining objective is to reconstruct image features that carry substantial semantic signals.
Experimental results demonstrate that MILAN delivers higher accuracy than previous works.
- Score: 30.24762638226569
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Self-attention based transformer models have been dominating many computer
vision tasks in the past few years. Their superb model quality heavily depends
on excessively large labeled image datasets. In order to reduce the reliance on
large labeled datasets, reconstruction based masked autoencoders are gaining
popularity, which learn high quality transferable representations from
unlabeled images. For the same purpose, recent weakly supervised image
pretraining methods explore language supervision from text captions
accompanying the images. In this work, we propose masked image pretraining on
language assisted representation, dubbed MILAN. Instead of predicting raw
pixels or low-level features, our pretraining objective is to reconstruct the
image features with substantial semantic signals that are obtained using
caption supervision. Moreover, to accommodate our reconstruction target, we
propose a more efficient prompting decoder architecture and a semantic-aware
mask sampling mechanism, which further advance the transfer performance of the
pretrained model. Experimental results demonstrate that MILAN delivers higher
accuracy than previous works. When the masked autoencoder is pretrained and
finetuned on the ImageNet-1K dataset with an input resolution of 224x224, MILAN
achieves a top-1 accuracy of 85.4% with ViT-B/16, surpassing the previous
state of the art by 1%. In the downstream semantic segmentation task, MILAN
achieves 52.7 mIoU using the ViT-B/16 backbone on the ADE20K dataset,
outperforming previous masked pretraining results by 4 points.
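To make the pretraining objective above concrete, the following is a minimal sketch of a MILAN-style feature-reconstruction loss, not the authors' implementation: `student_encoder`, `student_decoder`, and `frozen_caption_teacher` are assumed placeholder modules, and uniform random masking with a generic decoder stands in for the paper's semantic-aware mask sampling and prompting decoder.

```python
# Minimal sketch of a MILAN-style objective (illustrative, not the paper's code):
# instead of regressing raw pixels, the masked autoencoder regresses per-patch
# features produced by a frozen, caption-supervised image encoder (CLIP-like).
# `student_encoder`, `student_decoder`, and `frozen_caption_teacher` are
# placeholder callables with assumed signatures.
import torch
import torch.nn.functional as F


def milan_style_loss(images, student_encoder, student_decoder,
                     frozen_caption_teacher, mask_ratio=0.75):
    """Reconstruct semantic teacher features at masked patch positions."""
    B = images.shape[0]

    with torch.no_grad():
        # Per-patch target features from the frozen caption-supervised teacher,
        # shape (B, N, D). Layer-normalizing feature targets is a common choice;
        # treat it as an assumption here.
        targets = frozen_caption_teacher(images)
        targets = F.layer_norm(targets, targets.shape[-1:])

    N = targets.shape[1]
    num_keep = int(N * (1.0 - mask_ratio))

    # Uniform random per-sample masking (MILAN itself uses a semantic-aware,
    # attention-guided sampler; uniform sampling is a simplification).
    noise = torch.rand(B, N, device=images.device)
    ids_shuffle = noise.argsort(dim=1)
    ids_keep = ids_shuffle[:, :num_keep]
    ids_masked = ids_shuffle[:, num_keep:]

    # Encode only the visible patches, then predict features for every patch
    # position (the paper's prompting decoder treats visible tokens as fixed
    # prompts; a generic decoder call is used here for brevity).
    visible_tokens = student_encoder(images, ids_keep)          # (B, num_keep, D)
    predictions = student_decoder(visible_tokens, ids_keep, N)  # (B, N, D)

    # Following common masked-autoencoder practice, compute the loss only on
    # the masked positions.
    gather = lambda x, ids: torch.gather(
        x, 1, ids.unsqueeze(-1).expand(-1, -1, x.shape[-1]))
    return F.mse_loss(gather(predictions, ids_masked),
                      gather(targets, ids_masked))
```

The only change relative to a pixel-target masked autoencoder is the regression target: per-patch features from a caption-supervised encoder rather than raw pixels, which is what the abstract credits for the accuracy gains.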
Related papers
- Improve Supervised Representation Learning with Masked Image Modeling [30.30649867772395]
We propose a simple yet effective setup that can easily integrate masked image modeling into existing supervised training paradigms.
We show that, with minimal architectural changes and no inference overhead, this setup improves the quality of the learned representations.
arXiv Detail & Related papers (2023-12-01T22:03:25Z) - Mimic before Reconstruct: Enhancing Masked Autoencoders with Feature Mimicking [35.11620617064127]
Masked Autoencoders (MAE) have been popular paradigms for large-scale vision representation pre-training.
We propose MR-MAE, which jointly learns high-level and low-level representations without interference during pre-training.
On ImageNet-1K, the MR-MAE base pre-trained for only 400 epochs achieves 85.8% top-1 accuracy after fine-tuning.
arXiv Detail & Related papers (2023-03-09T18:28:18Z) - FastMIM: Expediting Masked Image Modeling Pre-training for Vision [65.47756720190155]
FastMIM is a framework for pre-training vision backbones with low-resolution input images.
It reconstructs Histograms of Oriented Gradients (HOG) features instead of the original RGB values of the input images.
It can achieve 83.8%/84.1% top-1 accuracy on ImageNet-1K with ViT-B/Swin-B as backbones.
arXiv Detail & Related papers (2022-12-13T14:09:32Z) - Exploring the Coordination of Frequency and Attention in Masked Image Modeling [28.418445136155512]
Masked image modeling (MIM) has dominated self-supervised learning in computer vision.
We propose the Frequency & Attention-driven Masking and Throwing Strategy (FAMT), which can extract semantic patches and reduce the number of training patches.
FAMT can be seamlessly integrated as a plug-and-play module and surpasses previous works.
arXiv Detail & Related papers (2022-11-28T14:38:19Z) - A Unified View of Masked Image Modeling [117.79456335844439]
Masked image modeling has demonstrated great potential to eliminate the label-hungry problem of training large-scale vision Transformers.
We introduce a simple yet effective method, termed MaskDistill, which reconstructs normalized semantic features from teacher models at the masked positions.
Experimental results on image classification and semantic segmentation show that MaskDistill achieves performance comparable or superior to state-of-the-art methods.
arXiv Detail & Related papers (2022-10-19T14:59:18Z) - BEiT v2: Masked Image Modeling with Vector-Quantized Visual Tokenizers [117.79456335844439]
We propose to use a semantic-rich visual tokenizer as the reconstruction target for masked prediction.
We then pretrain vision Transformers by predicting the original visual tokens for the masked image patches.
Experiments on image classification and semantic segmentation show that our approach outperforms all compared MIM methods.
arXiv Detail & Related papers (2022-08-12T16:48:10Z) - mc-BEiT: Multi-choice Discretization for Image BERT Pre-training [52.04866462439979]
Image BERT pre-training with masked image modeling (MIM) is a popular approach to self-supervised representation learning.
We introduce an improved BERT-style image pre-training method, namely mc-BEiT, which performs MIM proxy tasks towards eased and refined multi-choice training objectives.
arXiv Detail & Related papers (2022-03-29T09:08:18Z) - Masked Autoencoders Are Scalable Vision Learners [60.97703494764904]
Masked autoencoders (MAE) are scalable self-supervised learners for computer vision.
Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.
Coupling these two designs enables us to train large models efficiently and effectively; a minimal sketch of this recipe appears after this list.
arXiv Detail & Related papers (2021-11-11T18:46:40Z) - Vector-quantized Image Modeling with Improved VQGAN [93.8443646643864]
We propose a Vector-quantized Image Modeling approach that involves pretraining a Transformer to predict image tokens autoregressively.
We first propose multiple improvements over vanilla VQGAN from architecture to codebook learning, yielding better efficiency and reconstruction fidelity.
When trained on ImageNet at 256x256 resolution, we achieve an Inception Score (IS) of 175.1 and a Fréchet Inception Distance (FID) of 4.17, a dramatic improvement over the vanilla VQGAN.
arXiv Detail & Related papers (2021-10-09T18:36:00Z)
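For contrast with the feature-level target sketched earlier, below is a minimal sketch of the random-masking, pixel-reconstruction recipe summarized in the Masked Autoencoders entry above; `patchify`, `encoder`, and `decoder` are assumed placeholders and the code is illustrative, not the released MAE implementation.

```python
# Minimal sketch of an MAE-style recipe (random patch masking + raw-pixel
# reconstruction), for contrast with MILAN's semantic feature targets.
# `patchify`, `encoder`, and `decoder` are placeholder callables.
import torch
import torch.nn.functional as F


def mae_style_loss(images, patchify, encoder, decoder, mask_ratio=0.75):
    """Mask random patches and regress their raw pixel values."""
    patches = patchify(images)                 # (B, N, patch_dim)
    B, N, _ = patches.shape
    num_keep = int(N * (1.0 - mask_ratio))

    # Per-sample random shuffle; keep only a small visible subset.
    ids_shuffle = torch.rand(B, N, device=images.device).argsort(dim=1)
    ids_keep, ids_masked = ids_shuffle[:, :num_keep], ids_shuffle[:, num_keep:]

    latent = encoder(patches, ids_keep)        # encode visible patches only
    pred = decoder(latent, ids_keep, N)        # predict pixels for all patches

    gather = lambda x, ids: torch.gather(
        x, 1, ids.unsqueeze(-1).expand(-1, -1, x.shape[-1]))
    # Loss is computed only on the masked patches, as in MAE.
    return F.mse_loss(gather(pred, ids_masked), gather(patches, ids_masked))
```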
This list is automatically generated from the titles and abstracts of the papers on this site.