A Simple Baseline that Questions the Use of Pretrained-Models in
Continual Learning
- URL: http://arxiv.org/abs/2210.04428v2
- Date: Wed, 29 Mar 2023 10:05:04 GMT
- Title: A Simple Baseline that Questions the Use of Pretrained-Models in
Continual Learning
- Authors: Paul Janson, Wenxuan Zhang, Rahaf Aljundi, Mohamed Elhoseiny
- Abstract summary: Some methods design continual learning mechanisms on top of the pre-trained representations and allow only minimal or even no updates of the backbone model during continual learning.
We argue that the pretrained feature extractor itself can be strong enough to achieve a competitive or even better continual learning performance on Split-CIFAR100 and CoRe 50 benchmarks.
This baseline achieved 88.53% on 10-Split-CIFAR-100, surpassing most state-of-the-art continual learning methods that are all using the same pretrained transformer model.
- Score: 30.023047201419825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the success of pretraining techniques in representation learning, a
number of continual learning methods based on pretrained models have been
proposed. Some of these methods design continual learning mechanisms on the
pre-trained representations and allow only minimal or even no updates of the
backbone model during continual learning. In this
paper, we question whether the complexity of these models is needed to achieve
good performance by comparing them to a simple baseline that we designed. We
argue that the pretrained feature extractor itself can be strong enough to
achieve a competitive or even better continual learning performance on
Split-CIFAR100 and CoRe 50 benchmarks. To validate this, we construct a very
simple baseline that 1) uses the frozen pretrained model to extract image
features for every class encountered during the continual learning stage and
computes their corresponding mean features on the training data, and 2)
predicts the class of a test sample as that of its nearest class mean feature;
i.e., a Nearest Mean Classifier (NMC). This
baseline is single-headed, exemplar-free, and can be task-free (by updating the
means continually). This baseline achieved 88.53% on 10-Split-CIFAR-100,
surpassing most state-of-the-art continual learning methods that are all
initialized using the same pretrained transformer model. We hope our baseline
may encourage future progress in designing learning systems that can
continually improve the quality of their learned representations, even when
starting from pretrained weights.
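To make the baseline concrete, the following is a minimal sketch of such a
Nearest Mean Classifier in PyTorch. The class name, the running-mean
bookkeeping, and the use of Euclidean distance are illustrative assumptions
drawn from the abstract, not the authors' released code; the frozen pretrained
backbone (e.g., a ViT feature extractor) is assumed to live outside the class.

```python
import torch


class NearestMeanClassifier:
    """Exemplar-free, single-headed nearest-mean classifier over frozen
    pretrained features, maintained with running per-class sums."""

    def __init__(self):
        self.sums = {}    # class id -> running sum of feature vectors
        self.counts = {}  # class id -> number of training samples seen

    @torch.no_grad()
    def update(self, features: torch.Tensor, labels: torch.Tensor) -> None:
        # Running mean update: only per-class sums and counts are stored, so no
        # exemplars are kept and classes may arrive in any order (task-free).
        for f, y in zip(features, labels.tolist()):
            if y not in self.sums:
                self.sums[y] = torch.zeros_like(f)
                self.counts[y] = 0
            self.sums[y] += f
            self.counts[y] += 1

    @torch.no_grad()
    def predict(self, features: torch.Tensor) -> torch.Tensor:
        # Single shared head: each test sample is assigned the class whose mean
        # feature is nearest in Euclidean distance.
        classes = sorted(self.sums)
        means = torch.stack([self.sums[c] / self.counts[c] for c in classes])
        nearest = torch.cdist(features, means).argmin(dim=1)
        return torch.tensor([classes[i] for i in nearest.tolist()])
```

A continual learning run then reduces to passing each training batch through
the frozen backbone, calling `update` on the resulting features and labels, and
calling `predict` on test features; no backbone parameters are ever trained.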
Related papers
- Transferable Post-training via Inverse Value Learning [83.75002867411263]
We propose modeling changes at the logits level during post-training using a separate neural network (i.e., the value network).
After training this network on a small base model using demonstrations, it can be seamlessly integrated with other pre-trained models during inference.
We demonstrate that the resulting value network has broad transferability across pre-trained models of different parameter sizes (a minimal sketch of this logits-level composition is given after the related-papers list below).
arXiv Detail & Related papers (2024-10-28T13:48:43Z) - Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple Logits Retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z) - Random Representations Outperform Online Continually Learned Representations [68.42776779425978]
We show that existing online continually trained deep networks produce inferior representations compared to simple pre-defined random transforms.
Our method, called RanDumb, significantly outperforms state-of-the-art continually learned representations across all online continual learning benchmarks.
Our study reveals the significant limitations of representation learning, particularly in low-exemplar and online continual learning scenarios.
arXiv Detail & Related papers (2024-02-13T22:07:29Z) - Meta-Learned Attribute Self-Interaction Network for Continual and
Generalized Zero-Shot Learning [46.6282595346048]
Zero-shot learning (ZSL) is a promising approach to generalizing a model to unseen categories during training.
We propose a Meta-learned Attribute self-Interaction Network (MAIN) for continual ZSL.
By pairing attribute self-interaction trained using meta-learning with inverse regularization of the attribute encoder, we are able to outperform state-of-the-art results without leveraging the unseen class attributes.
arXiv Detail & Related papers (2023-12-02T16:23:01Z) - PILOT: A Pre-Trained Model-Based Continual Learning Toolbox [71.63186089279218]
This paper introduces a pre-trained model-based continual learning toolbox known as PILOT.
On the one hand, PILOT implements some state-of-the-art class-incremental learning algorithms based on pre-trained models, such as L2P, DualPrompt, and CODA-Prompt.
On the other hand, PILOT fits typical class-incremental learning algorithms within the context of pre-trained models to evaluate their effectiveness.
arXiv Detail & Related papers (2023-09-13T17:55:11Z) - RanPAC: Random Projections and Pre-trained Models for Continual Learning [59.07316955610658]
Continual learning (CL) aims to learn different tasks (such as classification) in a non-stationary data stream without forgetting old ones.
We propose a concise and effective approach for CL with pre-trained models.
arXiv Detail & Related papers (2023-07-05T12:49:02Z) - Effective training-time stacking for ensembling of deep neural networks [1.2667973028134798]
Snapshot ensembling collects models in the ensemble along a single training path.
Our method improves snapshot ensembling by selecting and weighting ensemble members along the training path.
It relies on training-time likelihoods rather than the validation-sample errors used by standard stacking methods.
arXiv Detail & Related papers (2022-06-27T17:52:53Z) - Simpler is Better: off-the-shelf Continual Learning Through Pretrained
Backbones [0.0]
We propose a baseline (off-the-shelf) for Continual Learning of Computer Vision problems.
We exploit the power of pretrained models to compute a class prototype and fill a memory bank.
We compare our pipeline with common CNN models and show the superiority of Vision Transformers.
arXiv Detail & Related papers (2022-05-03T16:03:46Z) - Jigsaw Clustering for Unsupervised Visual Representation Learning [68.09280490213399]
We propose a new jigsaw clustering pretext task in this paper.
Our method makes use of both intra-image and inter-image information.
It is even comparable to the contrastive learning methods when only half of training batches are used.
arXiv Detail & Related papers (2021-04-01T08:09:26Z) - Deep Ensembles for Low-Data Transfer Learning [21.578470914935938]
We study different ways of creating ensembles from pre-trained models.
We show that the nature of pre-training itself is a performant source of diversity.
We propose a practical algorithm that efficiently identifies a subset of pre-trained models for any downstream dataset.
arXiv Detail & Related papers (2020-10-14T07:59:00Z) - Efficient Learning of Model Weights via Changing Features During
Training [0.0]
We propose a machine learning model, which dynamically changes the features during training.
Our main motivation is to make small updates to the model during training by replacing less descriptive features with new ones from a large pool.
arXiv Detail & Related papers (2020-02-21T12:38:14Z)