Incremental Prototype Prompt-tuning with Pre-trained Representation for
Class Incremental Learning
- URL: http://arxiv.org/abs/2204.03410v2
- Date: Mon, 11 Apr 2022 01:30:00 GMT
- Title: Incremental Prototype Prompt-tuning with Pre-trained Representation for
Class Incremental Learning
- Authors: Jieren Deng and Jianhua Hu and Haojian Zhang and Yunkuan Wang
- Abstract summary: Class incremental learning has attracted much attention, but most existing works still continually fine-tune the representation model.
We adopt the pre-train-and-prompt-tuning paradigm to sequentially learn new visual concepts on top of a fixed, semantically rich pre-trained representation model.
Our method consistently outperforms other state-of-the-art methods by a large margin.
- Score: 4.717066668969749
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Class incremental learning has attracted much attention, but most existing
works still continually fine-tune the representation model, resulting in severe
catastrophic forgetting. Instead of fighting such forgetting with replay or
distillation as most existing methods do, we adopt the pre-train-and-prompt-tuning
paradigm to sequentially learn new visual concepts on top of a fixed, semantically
rich pre-trained representation model via incremental prototype prompt-tuning (IPP),
which substantially reduces catastrophic forgetting. In addition, an example
prototype classification scheme is proposed to compensate for semantic drift, the
problem caused by learning bias across different phases. Extensive experiments on
three incremental learning benchmarks demonstrate that our method consistently
outperforms other state-of-the-art methods by a large margin.
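As a concrete illustration of the pre-train-and-prompt-tuning idea described in the abstract, the following is a minimal PyTorch-style sketch, not the authors' released code: the pre-trained encoder is frozen, a small set of learnable prompt tokens is appended per incremental phase, and classification is done against class prototypes. The class name `PromptedBackbone`, the pooling, and the cosine-similarity prototype classifier are illustrative assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PromptedBackbone(nn.Module):
    """Frozen pre-trained encoder plus learnable prompt tokens (illustrative sketch)."""
    def __init__(self, encoder: nn.Module, embed_dim: int, prompts_per_phase: int = 5):
        super().__init__()
        self.encoder = encoder
        for p in self.encoder.parameters():        # keep the pre-trained representation fixed
            p.requires_grad = False
        self.prompts = nn.ParameterList()          # one learnable prompt set per incremental phase
        self.embed_dim = embed_dim
        self.prompts_per_phase = prompts_per_phase

    def add_phase(self):
        self.prompts.append(nn.Parameter(0.02 * torch.randn(self.prompts_per_phase, self.embed_dim)))

    def forward(self, patch_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N, D) token embeddings; encoder is assumed to map (B, N', D) -> (B, N', D)
        b = patch_tokens.size(0)
        prompt = torch.cat(list(self.prompts), dim=0).unsqueeze(0).expand(b, -1, -1)
        tokens = torch.cat([prompt, patch_tokens], dim=1)
        return self.encoder(tokens).mean(dim=1)    # pooled image embedding

def prototype_logits(features: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Classify by cosine similarity to one prototype per class (illustrative)."""
    f = F.normalize(features, dim=-1)
    p = F.normalize(prototypes, dim=-1)
    return f @ p.t()
```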
Related papers
- Semantically-Shifted Incremental Adapter-Tuning is A Continual ViTransformer [44.10678347943115]
Class-incremental learning (CIL) aims to enable models to continuously learn new classes while overcoming catastrophic forgetting.
In this paper, we revisit different parameter-efficient tuning (PET) methods within the context of continual learning.
We observe that adapter tuning demonstrates superiority over prompt-based methods, even without parameter expansion in each learning session.
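For readers unfamiliar with adapter tuning, a generic bottleneck adapter looks like the sketch below; this is the standard formulation, not necessarily the semantically-shifted variant proposed in that paper. During continual learning only the adapter (and classifier) parameters are trained while the backbone stays frozen.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Generic bottleneck adapter: down-project, non-linearity, up-project, residual connection."""
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)
        self.act = nn.GELU()
        nn.init.zeros_(self.up.weight)   # zero-init so the adapter starts as an identity mapping
        nn.init.zeros_(self.up.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))

# Typical usage in continual learning (illustrative):
# for p in backbone.parameters(): p.requires_grad = False
# for p in adapter.parameters():  p.requires_grad = True
```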
arXiv Detail & Related papers (2024-03-29T05:23:12Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
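The summary does not spell out the retargeting rule, so the following is only a generic sketch of classifier re-training on frozen features with softened (retargeted) targets; the exact LORT objective may differ. The helper names and the uniform redistribution of label mass are assumptions.

```python
import torch
import torch.nn.functional as F

def retargeted_labels(labels: torch.Tensor, num_classes: int, true_prob: float = 0.9) -> torch.Tensor:
    """Spread (1 - true_prob) uniformly over negative classes (illustrative, not the exact LORT rule)."""
    neg = (1.0 - true_prob) / (num_classes - 1)
    target = torch.full((labels.size(0), num_classes), neg, device=labels.device)
    target.scatter_(1, labels.unsqueeze(1), true_prob)
    return target

def retrain_classifier_step(classifier, frozen_features, labels, num_classes, optimizer):
    logits = classifier(frozen_features)          # only the linear head is trainable here
    loss = F.cross_entropy(logits, retargeted_labels(labels, num_classes))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```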
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- FD-Align: Feature Discrimination Alignment for Fine-tuning Pre-Trained Models in Few-Shot Learning [21.693779973263172]
In this paper, we introduce a fine-tuning approach termed Feature Discrimination Alignment (FD-Align).
Our method aims to bolster the model's generalizability by preserving the consistency of spurious features.
Once fine-tuned, the model can seamlessly integrate with existing methods, leading to performance improvements.
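One way to read "preserving the consistency of spurious features" is as a distribution-matching regularizer between the fine-tuned encoder and the frozen pre-trained encoder over a fixed set of class-irrelevant prototype vectors. The sketch below is such a generic regularizer under that assumption, not necessarily the exact FD-Align formulation.

```python
import torch
import torch.nn.functional as F

def spurious_consistency_loss(feat_finetuned: torch.Tensor,
                              feat_pretrained: torch.Tensor,
                              spurious_prototypes: torch.Tensor,
                              tau: float = 0.07) -> torch.Tensor:
    """KL divergence between similarity distributions over a fixed prototype set (illustrative)."""
    def log_sim_dist(feat):
        sims = F.normalize(feat, dim=-1) @ F.normalize(spurious_prototypes, dim=-1).t() / tau
        return F.log_softmax(sims, dim=-1)
    p_old = log_sim_dist(feat_pretrained).exp()    # target distribution from the frozen encoder
    log_p_new = log_sim_dist(feat_finetuned)       # distribution after fine-tuning
    return F.kl_div(log_p_new, p_old, reduction="batchmean")
```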
arXiv Detail & Related papers (2023-10-23T17:12:01Z)
- Prototypical quadruplet for few-shot class incremental learning [24.814045065163135]
We propose a novel method that improves classification robustness by identifying a better embedding space using an improved contrastive loss.
Our approach retains previously acquired knowledge in the embedding space, even when trained with new classes.
We demonstrate the effectiveness of our method by showing that the embedding space remains intact after training on new classes, and that the model outperforms existing state-of-the-art algorithms in accuracy across different sessions.
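Since the summary centers on class prototypes living in a learned embedding space, a minimal nearest-prototype classifier is sketched below; the paper's quadruplet loss itself is not reproduced, and the mean-embedding prototype rule is an assumption.

```python
import torch
import torch.nn.functional as F

def class_prototypes(embeddings: torch.Tensor, labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    """Mean embedding per class; prototypes of old classes can be kept fixed as new classes arrive."""
    protos = torch.zeros(num_classes, embeddings.size(1), device=embeddings.device)
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = embeddings[mask].mean(dim=0)
    return protos

def nearest_prototype_predict(query: torch.Tensor, prototypes: torch.Tensor) -> torch.Tensor:
    """Assign each query to the class whose prototype is most similar (cosine similarity)."""
    sims = F.normalize(query, dim=-1) @ F.normalize(prototypes, dim=-1).t()
    return sims.argmax(dim=-1)
```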
arXiv Detail & Related papers (2022-11-05T17:19:14Z)
- Improving Fine-tuning of Self-supervised Models with Contrastive Initialization [11.595212661616259]
We propose a Contrastive Initialization (COIN) method that breaks the standard fine-tuning pipeline.
Our COIN significantly outperforms existing methods without introducing extra training cost.
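"Breaking the standard fine-tuning pipeline" presumably means inserting an extra contrastive stage before the usual supervised fine-tuning; the two-stage skeleton below is an assumption based only on this summary, and `sup_con_loss` is a placeholder for any class-aware contrastive objective (for example, the one sketched under the fair-representations entry below).

```python
import torch.nn.functional as F

def contrastive_init_stage(encoder, loader, optimizer, sup_con_loss, epochs: int = 5):
    """Stage 1 (assumed): warm up the encoder with a class-aware contrastive loss."""
    for _ in range(epochs):
        for images, labels in loader:
            loss = sup_con_loss(encoder(images), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

def finetune_stage(encoder, classifier, loader, optimizer, epochs: int = 20):
    """Stage 2: standard cross-entropy fine-tuning on the contrastively initialized encoder."""
    for _ in range(epochs):
        for images, labels in loader:
            loss = F.cross_entropy(classifier(encoder(images)), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```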
arXiv Detail & Related papers (2022-07-30T14:45:57Z)
- Contrastive Learning for Fair Representations [50.95604482330149]
Trained classification models can unintentionally lead to biased representations and predictions.
Existing debiasing methods for classification models, such as adversarial training, are often expensive to train and difficult to optimise.
We propose a method for mitigating bias by incorporating contrastive learning, in which instances sharing the same class label are encouraged to have similar representations.
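The stated mechanism, pulling together representations of instances that share a class label, is essentially a supervised contrastive objective. A compact version is sketched below; the temperature and the handling of anchors without positives are assumptions rather than details from the paper.

```python
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(features: torch.Tensor, labels: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    """Instances sharing a class label are positives; everything else in the batch is a negative."""
    z = F.normalize(features, dim=-1)
    sim = z @ z.t() / tau                                        # (B, B) pairwise similarities
    self_mask = torch.eye(len(labels), dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))              # never contrast an instance with itself
    pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~self_mask
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    pos_counts = pos.sum(dim=1)
    per_anchor = -torch.where(pos, log_prob, torch.zeros_like(log_prob)).sum(dim=1) / pos_counts.clamp(min=1)
    return per_anchor[pos_counts > 0].mean()                     # skip anchors with no positives in the batch
```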
arXiv Detail & Related papers (2021-09-22T10:47:51Z)
- Avoiding Inference Heuristics in Few-shot Prompt-based Finetuning [57.4036085386653]
We show that prompt-based models for sentence pair classification tasks still suffer from a common pitfall: adopting inference heuristics based on lexical overlap.
We then show that adding a regularization that preserves pretraining weights is effective in mitigating this destructive tendency of few-shot finetuning.
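The regularizer described, keeping the fine-tuned weights close to the pretrained ones, can be written as a simple L2 penalty toward the pretrained parameters. This is an L2-SP-style sketch under that reading; the paper's exact formulation and weighting may differ.

```python
import torch

def weight_preservation_penalty(model, pretrained_state, strength: float = 0.01) -> torch.Tensor:
    """L2 penalty pulling the current parameters toward their pretrained values."""
    penalty = torch.zeros((), device=next(model.parameters()).device)
    for name, param in model.named_parameters():
        if param.requires_grad and name in pretrained_state:
            penalty = penalty + (param - pretrained_state[name].to(param.device)).pow(2).sum()
    return strength * penalty

# Typical usage during few-shot finetuning (illustrative):
# total_loss = task_loss + weight_preservation_penalty(model, pretrained_state)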
arXiv Detail & Related papers (2021-09-09T10:10:29Z)
- Self-Promoted Prototype Refinement for Few-Shot Class-Incremental Learning [81.10531943939365]
Few-shot class-incremental learning aims to recognize new classes from only a few samples without forgetting the old classes.
We propose a novel incremental prototype learning scheme that adapts the feature representation to various generated incremental episodes.
Experiments on three benchmark datasets demonstrate strong incremental performance, outperforming state-of-the-art methods by margins of 13%, 17%, and 11%, respectively.
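Reading "generated incremental episodes" as pseudo incremental sessions sampled from the base classes, a minimal episodic loop might look like the following. The sampling scheme, prototype rule, and loss are assumptions for illustration, not the paper's exact scheme.

```python
import random
import torch
import torch.nn.functional as F

def sample_pseudo_episode(features_by_class: dict, n_way: int = 5, k_shot: int = 5):
    """Pretend a few base classes are 'new': draw k_shot support examples, use the rest as queries."""
    classes = random.sample(list(features_by_class), n_way)
    support, queries, q_labels = [], [], []
    for i, c in enumerate(classes):
        feats = features_by_class[c]                         # (N_c, D) embeddings of class c
        perm = torch.randperm(feats.size(0))
        support.append(feats[perm[:k_shot]].mean(dim=0))     # prototype of the pseudo-new class
        queries.append(feats[perm[k_shot:]])
        q_labels.append(torch.full((feats.size(0) - k_shot,), i, dtype=torch.long))
    return torch.stack(support), torch.cat(queries), torch.cat(q_labels)

def episode_loss(prototypes, queries, q_labels, tau: float = 0.1):
    """Train the representation so queries match their episode prototypes."""
    logits = F.normalize(queries, dim=-1) @ F.normalize(prototypes, dim=-1).t() / tau
    return F.cross_entropy(logits, q_labels)
```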
arXiv Detail & Related papers (2021-07-19T14:31:33Z)
- Unleashing the Power of Contrastive Self-Supervised Visual Models via Contrast-Regularized Fine-Tuning [94.35586521144117]
We investigate whether applying contrastive learning to fine-tuning would bring further benefits.
We propose Contrast-regularized tuning (Core-tuning), a novel approach for fine-tuning contrastive self-supervised visual models.
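A contrast-regularized fine-tuning step can be sketched as cross-entropy plus a contrastive term computed on the same batch. The weighting and the hard-pair mining used by Core-tuning are not reproduced here; `contrastive_loss` is a placeholder (for example, the supervised contrastive loss sketched earlier).

```python
import torch.nn.functional as F

def contrast_regularized_step(encoder, classifier, images, labels, contrastive_loss,
                              optimizer, weight: float = 1.0):
    """Joint objective (illustrative): cross-entropy + contrastive regularizer on the features."""
    feats = encoder(images)
    loss = F.cross_entropy(classifier(feats), labels) + weight * contrastive_loss(feats, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```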
arXiv Detail & Related papers (2021-02-12T16:31:24Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
We propose a Prototype-centered Attentive Learning (PAL) model composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
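A prototype-centered variant of a contrastive objective treats each class prototype, rather than each query, as the anchor; the sketch below follows that reading of the first component and is not the exact PAL loss (the temperature and averaging are assumptions).

```python
import torch
import torch.nn.functional as F

def prototype_centered_contrastive_loss(prototypes: torch.Tensor,
                                         queries: torch.Tensor,
                                         q_labels: torch.Tensor,
                                         tau: float = 0.1) -> torch.Tensor:
    """Each prototype is the anchor; queries of its class are positives, all other queries are negatives."""
    sims = F.normalize(prototypes, dim=-1) @ F.normalize(queries, dim=-1).t() / tau   # (C, Q)
    log_prob = sims - torch.logsumexp(sims, dim=1, keepdim=True)
    pos = q_labels.unsqueeze(0) == torch.arange(prototypes.size(0), device=q_labels.device).unsqueeze(1)
    per_proto = -torch.where(pos, log_prob, torch.zeros_like(log_prob)).sum(dim=1) / pos.sum(dim=1).clamp(min=1)
    return per_proto[pos.any(dim=1)].mean()    # only prototypes with at least one query contribute
```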
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.