Masked Autoencoders are Efficient Class Incremental Learners
- URL: http://arxiv.org/abs/2308.12510v1
- Date: Thu, 24 Aug 2023 02:49:30 GMT
- Title: Masked Autoencoders are Efficient Class Incremental Learners
- Authors: Jiang-Tian Zhai, Xialei Liu, Andrew D. Bagdanov, Ke Li, Ming-Ming
Cheng
- Abstract summary: Class Incremental Learning (CIL) aims to sequentially learn new classes while avoiding catastrophic forgetting of previous knowledge.
We propose to use Masked Autoencoders (MAEs) as efficient learners for CIL.
- Score: 64.90846899051164
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Class Incremental Learning (CIL) aims to sequentially learn new classes while
avoiding catastrophic forgetting of previous knowledge. We propose to use
Masked Autoencoders (MAEs) as efficient learners for CIL. MAEs were originally
designed to learn useful representations through reconstructive unsupervised
learning, and they can be easily integrated with a supervised loss for
classification. Moreover, MAEs can reliably reconstruct original input images
from randomly selected patches, which we use to store exemplars from past tasks
more efficiently for CIL. We also propose a bilateral MAE framework to learn
from image-level and embedding-level fusion, which produces better-quality
reconstructed images and more stable representations. Our experiments confirm
that our approach performs better than the state-of-the-art on CIFAR-100,
ImageNet-Subset, and ImageNet-Full. The code is available at
https://github.com/scok30/MAE-CIL .
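As a rough illustration of the exemplar-storage idea above (keep only a random subset of patches per past-task image and let the MAE decoder fill in the rest at replay time), the sketch below is a minimal, hypothetical Python/PyTorch example; the 16x16 patch size, the 25% keep ratio, and the reconstruction hook mentioned in the final comment are assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): store only a random subset of
# patches per exemplar; an MAE decoder can reconstruct the rest at replay.
import torch

PATCH = 16          # assumed patch size
KEEP_RATIO = 0.25   # assumed fraction of patches kept per exemplar


def patchify(img: torch.Tensor, p: int = PATCH) -> torch.Tensor:
    """Split a (C, H, W) image into (num_patches, C*p*p) flattened patches."""
    c, h, w = img.shape
    patches = img.unfold(1, p, p).unfold(2, p, p)        # (C, H/p, W/p, p, p)
    return patches.permute(1, 2, 0, 3, 4).reshape(-1, c * p * p)


def compress_exemplar(img: torch.Tensor, keep_ratio: float = KEEP_RATIO):
    """Keep a random subset of patches and their indices as the stored exemplar."""
    patches = patchify(img)
    n_keep = max(1, int(keep_ratio * patches.shape[0]))
    idx = torch.randperm(patches.shape[0])[:n_keep]
    return patches[idx], idx


if __name__ == "__main__":
    image = torch.rand(3, 224, 224)
    kept_patches, kept_idx = compress_exemplar(image)
    full = patchify(image)
    print(f"stored {kept_patches.numel()} of {full.numel()} values "
          f"({kept_patches.numel() / full.numel():.0%})")
    # At replay time, a trained MAE decoder would reconstruct the missing
    # patches from (kept_patches, kept_idx) before rehearsing the exemplar.
```

At a 25% keep ratio this stores roughly a quarter of the pixel values per exemplar, which is the kind of memory saving the abstract refers to.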
Related papers
- CL-MAE: Curriculum-Learned Masked Autoencoders [49.24994655813455]
We propose a curriculum learning approach that updates the masking strategy to continually increase the complexity of the self-supervised reconstruction task.
We train our Curriculum-Learned Masked Autoencoder (CL-MAE) on ImageNet and show that it exhibits superior representation learning capabilities compared to MAE.
arXiv Detail & Related papers (2023-08-31T09:13:30Z)
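CL-MAE updates the masking strategy so that the reconstruction task keeps getting harder; a hypothetical, heavily simplified way to picture such a curriculum is a mask-ratio schedule like the one below (the paper learns the masking itself, so the linear ramp and the 0.5 to 0.9 range are assumptions).

```python
# Hypothetical curriculum sketch: ramp the mask ratio from easy to hard.
# CL-MAE actually learns the masking module; this only illustrates the idea
# of increasing reconstruction difficulty during training.
def curriculum_mask_ratio(epoch: int, total_epochs: int,
                          start: float = 0.5, end: float = 0.9) -> float:
    """Linearly interpolate the fraction of masked patches over training."""
    t = min(max(epoch / max(total_epochs - 1, 1), 0.0), 1.0)
    return start + t * (end - start)


if __name__ == "__main__":
    for e in (0, 50, 99):
        print(e, round(curriculum_mask_ratio(e, 100), 2))
```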
- Improving Masked Autoencoders by Learning Where to Mask [65.89510231743692]
Masked image modeling is a promising self-supervised learning method for visual data.
We present AutoMAE, a framework that uses Gumbel-Softmax to interlink an adversarially-trained mask generator and a mask-guided image modeling process.
In our experiments, AutoMAE is shown to provide effective pretraining models on standard self-supervised benchmarks and downstream tasks.
arXiv Detail & Related papers (2023-03-12T05:28:55Z)
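The Gumbel-Softmax step AutoMAE relies on makes discrete per-patch mask decisions differentiable, so the adversarially trained generator can receive gradients. The snippet below illustrates only that generic sampling step with random placeholder logits; it is not AutoMAE's generator or its adversarial objective.

```python
# Generic illustration of differentiable mask sampling with Gumbel-Softmax.
# The per-patch logits here are random placeholders; in AutoMAE they would
# come from a learned, adversarially trained mask generator.
import torch
import torch.nn.functional as F

num_patches = 196
logits = torch.randn(num_patches, 2, requires_grad=True)   # [mask, keep] scores

# hard=True returns one-hot samples in the forward pass while keeping
# soft gradients (straight-through), so a generator stays trainable.
sample = F.gumbel_softmax(logits, tau=1.0, hard=True, dim=-1)
mask = sample[:, 0]                       # 1.0 where the patch is masked

print(f"masked {int(mask.sum())} of {num_patches} patches")
mask.sum().backward()                     # gradients flow back to the logits
print("grad shape:", logits.grad.shape)
```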
- Masked Contrastive Representation Learning [6.737710830712818]
This work presents Masked Contrastive Representation Learning (MACRL) for self-supervised visual pre-training.
We adopt an asymmetric setting for the siamese network (i.e., an encoder-decoder structure in both branches), where one branch uses a higher mask ratio and stronger data augmentation, while the other adopts weaker data corruptions.
In our experiments, MACRL presents superior results on various vision benchmarks, including CIFAR-10, CIFAR-100, Tiny-ImageNet, and two other ImageNet subsets.
arXiv Detail & Related papers (2022-11-11T05:32:28Z)
- SdAE: Self-distillated Masked Autoencoder [95.3684955370897]
This paper proposes SdAE, a self-distillated masked autoencoder network.
With only 300 epochs pre-training, a vanilla ViT-Base model achieves an 84.1% fine-tuning accuracy on ImageNet-1k classification.
arXiv Detail & Related papers (2022-07-31T15:07:25Z)
- SupMAE: Supervised Masked Autoencoders Are Efficient Vision Learners [20.846232536796578]
Self-supervised Masked Autoencoders (MAE) have attracted unprecedented attention for their impressive representation learning ability.
This paper extends MAE to a fully supervised setting by adding a supervised classification branch.
The proposed Supervised MAE (SupMAE) only exploits a visible subset of image patches for classification, unlike the standard supervised pre-training where all image patches are used.
arXiv Detail & Related papers (2022-05-28T23:05:03Z)
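SupMAE's supervised branch classifies from the visible patch tokens only. The following is a hypothetical, stripped-down sketch of that idea (random tensors, a linear stand-in for the ViT encoder, average pooling and cross-entropy); it is not the SupMAE architecture.

```python
# Hypothetical sketch of SupMAE's idea: a supervised classification branch
# that sees only the encoder tokens of the *visible* (unmasked) patches.
import torch
import torch.nn as nn

embed_dim, num_patches, num_classes, mask_ratio = 256, 196, 100, 0.75

encoder = nn.Linear(768, embed_dim)        # stand-in for the ViT encoder
cls_head = nn.Linear(embed_dim, num_classes)

patch_tokens = torch.rand(8, num_patches, 768)   # batch of flattened patches
labels = torch.randint(0, num_classes, (8,))

# Keep a random 25% of patches: the visible subset used by the MAE branch
# and, in SupMAE, also by the classification branch.
n_keep = int(num_patches * (1 - mask_ratio))
idx = torch.randperm(num_patches)[:n_keep]
visible = patch_tokens[:, idx, :]

feats = encoder(visible).mean(dim=1)       # average pool over visible tokens
loss = nn.functional.cross_entropy(cls_head(feats), labels)
loss.backward()
print("classification loss on visible patches only:", float(loss))
```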
- Adversarial Masking for Self-Supervised Learning [81.25999058340997]
This work proposes ADIOS, a masked image modeling (MIM) framework for self-supervised learning.
It simultaneously learns a masking function and an image encoder using an adversarial objective.
It consistently improves on state-of-the-art self-supervised learning (SSL) methods on a variety of tasks and datasets.
arXiv Detail & Related papers (2022-01-31T10:23:23Z)
- Masked Autoencoders Are Scalable Vision Learners [60.97703494764904]
Masked autoencoders (MAE) are scalable self-supervised learners for computer vision.
Our MAE approach is simple: we mask random patches of the input image and reconstruct the missing pixels.
Coupling an asymmetric encoder-decoder architecture with a high masking ratio enables us to train large models efficiently and effectively.
arXiv Detail & Related papers (2021-11-11T18:46:40Z)
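The MAE recipe summarized above (encode only the visible patches, reconstruct the missing pixels, score the loss on masked positions) can be sketched in a few lines. In the sketch below, linear layers stand in for the ViT encoder and decoder, and the 75% mask ratio follows the paper; everything else is an assumption.

```python
# Toy sketch of the MAE objective: encode visible patches, decode all
# positions, and compute the reconstruction loss only on masked patches.
# Linear layers stand in for the ViT encoder/decoder blocks.
import torch
import torch.nn as nn

num_patches, patch_dim, mask_ratio = 196, 768, 0.75
encoder = nn.Linear(patch_dim, 256)
decoder = nn.Linear(256, patch_dim)
mask_token = nn.Parameter(torch.zeros(1, 1, 256))

patches = torch.rand(4, num_patches, patch_dim)          # flattened image patches

# Randomly split patches into visible and masked sets (75% masked).
perm = torch.randperm(num_patches)
n_visible = int(num_patches * (1 - mask_ratio))
vis_idx, mask_idx = perm[:n_visible], perm[n_visible:]

latent = encoder(patches[:, vis_idx, :])                 # encode visible patches only

# Rebuild the full token sequence: encoded visible tokens + shared mask tokens.
tokens = mask_token.expand(4, num_patches, -1).clone()
tokens[:, vis_idx, :] = latent

recon = decoder(tokens)                                  # predict pixels for every patch
loss = ((recon - patches)[:, mask_idx, :] ** 2).mean()   # loss on masked patches only
loss.backward()
print("masked-patch reconstruction loss:", float(loss))
```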