SdAE: Self-distillated Masked Autoencoder
- URL: http://arxiv.org/abs/2208.00449v1
- Date: Sun, 31 Jul 2022 15:07:25 GMT
- Title: SdAE: Self-distillated Masked Autoencoder
- Authors: Yabo Chen, Yuchen Liu, Dongsheng Jiang, Xiaopeng Zhang, Wenrui Dai,
Hongkai Xiong, Qi Tian
- Abstract summary: This paper proposes SdAE, a self-distillated masked autoencoder network.
With only 300 epochs of pre-training, a vanilla ViT-Base model achieves 84.1% fine-tuning accuracy on ImageNet-1k classification.
- Score: 95.3684955370897
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the development of generative self-supervised learning (SSL)
approaches like BeiT and MAE, learning good representations by masking random
patches of the input image and reconstructing the missing information has
attracted growing attention. However, BeiT and PeCo need a "pre-pretraining"
stage to produce discrete codebooks for representing masked patches. MAE does
not require such a codebook process, but using pixels as reconstruction targets
may introduce an optimization gap between pre-training and downstream tasks:
good reconstruction quality does not always translate into high descriptive
capability for the model. Considering the above issues, in this paper we
propose a simple Self-distillated masked AutoEncoder network, namely SdAE. SdAE
consists of a student branch using an encoder-decoder structure to reconstruct
the missing information, and a teacher branch producing latent representations
of masked tokens. We also analyze, from the perspective of the information
bottleneck, how to build good views for the teacher branch to produce latent
representations. We then propose a multi-fold masking strategy that provides
multiple masked views with balanced information to boost performance, while
also reducing computational complexity. Our approach generalizes well: with
only 300 epochs of pre-training, a vanilla ViT-Base model achieves 84.1%
fine-tuning accuracy on ImageNet-1k classification, 48.6 mIoU on ADE20K
segmentation, and 48.9 mAP on COCO detection, surpassing other methods by a
considerable margin. Code is available at https://github.com/AbrahamYabo/SdAE.
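The abstract describes two moving parts: a student encoder-decoder that reconstructs masked content against latent targets produced by a teacher branch, and a multi-fold masking strategy that splits an image's patches into several balanced masked views. The PyTorch-style sketch below is a minimal illustration of those ideas, not the authors' implementation (see the linked repository for that); the module signatures (`student_enc`, `student_dec`, `teacher_enc`), the MSE objective, and the EMA teacher update are simplifying assumptions.

```python
# A minimal sketch of SdAE-style self-distillation with multi-fold masking.
# All names and signatures here are hypothetical simplifications.
import torch
import torch.nn.functional as F

def multi_fold_mask(num_patches: int, num_folds: int, device=None):
    """Shuffle patch indices and split them into disjoint folds, so each
    fold is a masked view holding a balanced share of the image."""
    perm = torch.randperm(num_patches, device=device)
    return perm.chunk(num_folds)

def sdae_step(tokens, student_enc, student_dec, teacher_enc, num_folds=4):
    """One training step on (B, N, D) patch tokens: the student encodes one
    visible fold and predicts, in latent space, the folds it cannot see;
    the teacher supplies the latent targets."""
    folds = multi_fold_mask(tokens.size(1), num_folds, tokens.device)
    visible_idx, masked_folds = folds[0], folds[1:]

    latent = student_enc(tokens[:, visible_idx])    # encoder sees visible fold only
    loss = 0.0
    for idx in masked_folds:
        pred = student_dec(latent, idx)             # predict latents at positions idx
        with torch.no_grad():                       # teacher gives targets, no gradient
            target = teacher_enc(tokens[:, idx])
        loss = loss + F.mse_loss(pred, target)
    return loss / len(masked_folds)

@torch.no_grad()
def ema_update(teacher, student, momentum=0.996):
    """A common self-distillation choice (assumed here): teacher weights
    track the student via an exponential moving average."""
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.mul_(momentum).add_(s, alpha=1.0 - momentum)
```

Encoding the visible fold once and reusing that latent for every masked fold is one way such a scheme could realize the computational saving the abstract mentions.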
Related papers
- Adapting LLaMA Decoder to Vision Transformer [65.47663195233802]
This work examines whether decoder-only Transformers such as LLaMA can be adapted to the computer vision field.
We first "LLaMAfy" a standard ViT step-by-step to align with LLaMA's architecture, and find that directly applying a causal mask to the self-attention brings an attention collapse issue.
We develop a soft mask strategy that gradually introduces a causal mask to the self-attention at the onset of training to facilitate the optimization behavior.
arXiv Detail & Related papers (2024-04-10T06:30:08Z)
- Regress Before Construct: Regress Autoencoder for Point Cloud Self-supervised Learning [18.10704604275133]
Masked Autoencoders (MAE) have demonstrated promising performance in self-supervised learning for 2D and 3D computer vision.
We propose Point Regress AutoEncoder (Point-RAE), a new scheme for regressive autoencoders for point cloud self-supervised learning.
Our approach is efficient during pre-training and generalizes well on various downstream tasks.
arXiv Detail & Related papers (2023-09-25T17:23:33Z)
- CL-MAE: Curriculum-Learned Masked Autoencoders [49.24994655813455]
We propose a curriculum learning approach that updates the masking strategy to continually increase the complexity of the self-supervised reconstruction task.
We train our Curriculum-Learned Masked Autoencoder (CL-MAE) on ImageNet and show that it exhibits superior representation learning capabilities compared to MAE.
arXiv Detail & Related papers (2023-08-31T09:13:30Z)
- Masked Autoencoders are Efficient Class Incremental Learners [64.90846899051164]
Class Incremental Learning (CIL) aims to sequentially learn new classes while avoiding catastrophic forgetting of previous knowledge.
We propose to use Masked Autoencoders (MAEs) as efficient learners for CIL.
arXiv Detail & Related papers (2023-08-24T02:49:30Z)
- MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments [72.6405488990753]
Self-supervised learning can mitigate the heavy data demands of Vision Transformer networks.
We propose a single-stage and standalone method, MOCA, which unifies both desired properties.
We achieve new state-of-the-art results on low-shot settings and strong experimental results in various evaluation protocols.
arXiv Detail & Related papers (2023-07-18T15:46:20Z)
- MAGE: MAsked Generative Encoder to Unify Representation Learning and Image Synthesis [33.46831766206675]
MAsked Generative Encoder (MAGE) is the first framework to unify SOTA image generation and self-supervised representation learning.
Inspired by previous generative models, MAGE uses semantic tokens learned by a vector-quantized GAN at its inputs and outputs.
On ImageNet-1K, a single MAGE ViT-L model obtains 9.10 FID in the task of class-unconditional image generation.
arXiv Detail & Related papers (2022-11-16T18:59:02Z)
- Masked Autoencoders for Point Cloud Self-supervised Learning [27.894216954216716]
We propose a neat scheme of masked autoencoders for point cloud self-supervised learning.
We divide the input point cloud into irregular point patches and randomly mask them at a high ratio.
A standard Transformer based autoencoder, with an asymmetric design and a shifting mask tokens operation, learns high-level latent features from unmasked point patches.
arXiv Detail & Related papers (2022-03-13T09:23:39Z)
- Masked Autoencoders Are Scalable Vision Learners [60.97703494764904]
Masked autoencoders (MAE) are scalable self-supervised learners for computer vision.
Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.
Coupling these two designs, an asymmetric encoder-decoder architecture and a high masking ratio, enables us to train large models efficiently and effectively (a minimal sketch of the masking step follows this list).
arXiv Detail & Related papers (2021-11-11T18:46:40Z)
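Several entries above (Masked Autoencoders Are Scalable Vision Learners, the point-cloud MAE, CL-MAE) share one core recipe: randomly mask a high ratio of patch tokens, encode only the visible ones, and reconstruct the masked targets. Below is a minimal sketch of that random-masking step for a generic (B, N, D) token tensor, following the per-sample shuffle trick common in MAE-style implementations; it is an illustration under those assumptions, not any particular repository's code.

```python
import torch

def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """Keep a random (1 - mask_ratio) subset of tokens per sample.

    tokens: (B, N, D) patch embeddings. Returns the visible tokens, a
    binary mask in original order (1 = masked), and the indices needed
    to restore the original token order."""
    B, N, D = tokens.shape
    num_keep = int(N * (1 - mask_ratio))

    noise = torch.rand(B, N, device=tokens.device)   # per-sample random scores
    ids_shuffle = noise.argsort(dim=1)               # random permutation of positions
    ids_restore = ids_shuffle.argsort(dim=1)         # inverse permutation

    ids_keep = ids_shuffle[:, :num_keep]
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))

    mask = torch.ones(B, N, device=tokens.device)    # 1 = masked, 0 = kept
    mask[:, :num_keep] = 0                           # first num_keep are kept...
    mask = torch.gather(mask, 1, ids_restore)        # ...then unshuffled
    return visible, mask, ids_restore
```

A decoder then appends learnable mask tokens, unshuffles them with `ids_restore`, and regresses the missing pixels; the asymmetric design keeps the encoder cheap because it processes only the kept ~25% of tokens. A curriculum in the CL-MAE spirit might, for instance, schedule `mask_ratio` upward over training rather than keeping it fixed.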
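The soft mask strategy in the LLaMA-decoder entry can be pictured as an attention bias that interpolates between bidirectional and fully causal attention while a coefficient is ramped from 0 to 1 during early training. The sketch below is one plausible reading of that one-sentence summary, not the paper's actual formulation; the function name and the log-scaling are assumptions.

```python
import math
import torch

def soft_causal_bias(seq_len: int, alpha: float) -> torch.Tensor:
    """Additive attention bias interpolating between bidirectional
    (alpha = 0) and fully causal (alpha = 1) self-attention: adding
    log(1 - alpha) to the logits of future positions scales their
    pre-softmax weight by (1 - alpha)."""
    future = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)
    bias = torch.zeros(seq_len, seq_len)
    bias[future] = math.log1p(-alpha) if alpha < 1.0 else -math.inf
    return bias  # e.g. pass as an additive mask in scaled dot-product attention

# Example ramp: fully bidirectional at step 0, fully causal after `warmup` steps:
#   alpha = min(step / warmup, 1.0)
```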