Multimodal Parameter-Efficient Few-Shot Class Incremental Learning
- URL: http://arxiv.org/abs/2303.04751v2
- Date: Mon, 8 Jan 2024 12:28:19 GMT
- Title: Multimodal Parameter-Efficient Few-Shot Class Incremental Learning
- Authors: Marco D'Alessandro, Alberto Alonso, Enrique Calabrés, Mikel Galar
- Abstract summary: Few-Shot Class Incremental Learning (FSCIL) is a challenging continual learning task, where limited training examples are available during several learning sessions.
To succeed in this task, it is necessary to avoid over-fitting new classes caused by biased distributions in the few-shot training sets.
CPE-CLIP significantly improves FSCIL performance compared to state-of-the-art proposals while also drastically reducing the number of learnable parameters and training costs.
- Score: 1.9220716793379256
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Few-Shot Class Incremental Learning (FSCIL) is a challenging continual
learning task, where limited training examples are available during several
learning sessions. To succeed in this task, it is necessary to avoid
over-fitting new classes caused by biased distributions in the few-shot
training sets. The general approach to address this issue involves enhancing
the representational capability of a pre-defined backbone architecture by
adding special modules for backward compatibility with older classes. However,
this approach has not yet solved the dilemma of ensuring high classification
accuracy over time while reducing the gap between the performance obtained on
larger training sets and the smaller ones. In this work, we propose an
alternative approach called Continual Parameter-Efficient CLIP (CPE-CLIP) to
reduce the loss of information between different learning sessions. Instead of
adapting additional modules to address information loss, we leverage the vast
knowledge acquired by CLIP in large-scale pre-training and its effectiveness in
generalizing to new concepts. Our approach is multimodal and
parameter-efficient, relying on learnable prompts for both the language and
vision encoders to enable transfer learning across sessions. We also introduce
prompt regularization to improve performance and prevent forgetting. Our
experimental results demonstrate that CPE-CLIP significantly improves FSCIL
performance compared to state-of-the-art proposals while also drastically
reducing the number of learnable parameters and training costs.
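The abstract describes learnable prompts that are carried across learning sessions, with a prompt regularization term to prevent forgetting. As a toy illustration of that idea (a minimal sketch, not the paper's actual formulation), the snippet below penalizes the squared drift of each prompt vector from its value after the previous session; the vector dimensions, the weight `lam`, and the helper names are all hypothetical.

```python
# Toy sketch of session-wise prompt regularization (hypothetical
# simplification): the learnable prompts are the only trainable
# parameters, and a penalty ties the current session's prompts to
# those learned in the previous session to limit forgetting.

def l2_sq(a, b):
    """Squared L2 distance between two equal-length vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def regularized_loss(task_loss, prompts, prev_prompts, lam=0.1):
    """Task loss plus lam times the summed squared drift of each prompt."""
    penalty = sum(l2_sq(p, q) for p, q in zip(prompts, prev_prompts))
    return task_loss + lam * penalty

# Example: two 4-dim prompt tokens drifting slightly between sessions.
prev = [[0.0, 0.0, 0.0, 0.0], [1.0, 1.0, 1.0, 1.0]]
curr = [[0.1, 0.0, 0.0, 0.0], [1.0, 1.2, 1.0, 1.0]]
loss = regularized_loss(task_loss=0.5, prompts=curr, prev_prompts=prev, lam=0.1)
```

Only the prompts would be updated during training, which is what makes the approach parameter-efficient relative to fine-tuning the full CLIP backbone.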
Related papers
- Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods [69.36397993451742]
This work introduces Context-aware Prompt Tuning (CPT), a method inspired by ICL, PT, and adversarial attacks.
We modify specific context tokens, considering the unique structure of input and output formats.
Inspired by adversarial attacks, we adjust the input based on the labels present in the context, focusing on minimizing, rather than maximizing, the loss.
arXiv Detail & Related papers (2024-10-22T17:45:47Z)
- Temporal-Difference Variational Continual Learning [89.32940051152782]
A crucial capability of Machine Learning models in real-world applications is the ability to continuously learn new tasks.
In Continual Learning settings, models often struggle to balance learning new tasks with retaining previous knowledge.
We propose new learning objectives that integrate the regularization effects of multiple previous posterior estimations.
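The summary above mentions objectives that integrate the regularization effects of multiple previous posterior estimations. A hedged sketch of that shape, assuming 1-D Gaussian posteriors, hypothetical weights, and names of my own choosing (this is not the paper's actual objective):

```python
import math

def kl_gauss(m1, s1, m2, s2):
    """KL divergence KL(N(m1, s1^2) || N(m2, s2^2)) for 1-D Gaussians."""
    return math.log(s2 / s1) + (s1**2 + (m1 - m2)**2) / (2 * s2**2) - 0.5

def multi_posterior_objective(nll, post, past_posts, weights):
    """Task NLL plus weighted KL terms to several earlier posteriors."""
    penalty = sum(w * kl_gauss(*post, *p) for w, p in zip(weights, past_posts))
    return nll + penalty

# Example: the current posterior is pulled toward two earlier estimates,
# with older estimates down-weighted.
loss = multi_posterior_objective(
    nll=1.0,
    post=(0.0, 1.0),
    past_posts=[(0.0, 1.0), (0.5, 1.0)],
    weights=[0.5, 0.25],
)
```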
arXiv Detail & Related papers (2024-10-10T10:58:41Z)
- Denoising Pre-Training and Customized Prompt Learning for Efficient Multi-Behavior Sequential Recommendation [69.60321475454843]
We propose DPCPL, the first pre-training and prompt-tuning paradigm tailored for Multi-Behavior Sequential Recommendation.
In the pre-training stage, we propose a novel Efficient Behavior Miner (EBM) to filter out the noise at multiple time scales.
Subsequently, we propose to tune the pre-trained model in a highly efficient manner with the proposed Customized Prompt Learning (CPL) module.
arXiv Detail & Related papers (2024-08-21T06:48:38Z)
- SLCA++: Unleash the Power of Sequential Fine-tuning for Continual Learning with Pre-training [68.7896349660824]
We present an in-depth analysis of the progressive overfitting problem from the lens of Seq FT.
Considering that the overly fast representation learning and the biased classification layer constitute this particular problem, we introduce the advanced Slow Learner with Alignment (S++) framework.
Our approach involves a Slow Learner to selectively reduce the learning rate of backbone parameters, and an Alignment module to align the disjoint classification layers in a post-hoc fashion.
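The Slow Learner idea in the summary above amounts to giving the backbone a much smaller learning rate than the classification head during sequential fine-tuning. A minimal sketch under that assumption (the parameter values and learning rates are illustrative, not from the paper):

```python
# Plain SGD step; a real implementation would use per-parameter-group
# learning rates in an optimizer, but the effect is the same.
def sgd_step(params, grads, lr):
    return [p - lr * g for p, g in zip(params, grads)]

backbone = [1.0, 2.0]   # pre-trained representation parameters
head = [0.5]            # task-specific classification layer

g_backbone = [1.0, 1.0]
g_head = [1.0]

backbone = sgd_step(backbone, g_backbone, lr=0.001)  # slow: barely drifts
head = sgd_step(head, g_head, lr=0.1)                # fast: adapts to the task
```

Slowing the backbone limits representation drift across tasks, which is the "progressive overfitting" the summary refers to.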
arXiv Detail & Related papers (2024-08-15T17:50:07Z)
- Beyond Prompt Learning: Continual Adapter for Efficient Rehearsal-Free Continual Learning [22.13331870720021]
We propose a beyond-prompt-learning approach to the RFCL task, called Continual Adapter (C-ADA).
C-ADA flexibly extends specific weights in CAL to learn new knowledge for each task and freezes old weights to preserve prior knowledge.
Our approach achieves significantly improved performance and training speed, outperforming the current state-of-the-art (SOTA) method.
arXiv Detail & Related papers (2024-07-14T17:40:40Z)
- SHERL: Synthesizing High Accuracy and Efficient Memory for Resource-Limited Transfer Learning [63.93193829913252]
We propose an innovative METL strategy called SHERL for resource-limited scenarios.
In the early route, intermediate outputs are consolidated via an anti-redundancy operation.
In the late route, utilizing minimal late pre-trained layers could alleviate the peak demand on memory overhead.
arXiv Detail & Related papers (2024-07-10T10:22:35Z)
- Enhanced Few-Shot Class-Incremental Learning via Ensemble Models [34.84881941101568]
Few-shot class-incremental learning aims to continually fit new classes with limited training data.
The main challenges are overfitting the rare new training samples and forgetting old classes.
We propose a new ensemble model framework cooperating with data augmentation to boost generalization.
arXiv Detail & Related papers (2024-01-14T06:07:07Z)
- Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations [22.289830907729705]
Online continual learning (OCL) aims to enable model learning from a non-stationary data stream to continuously acquire new knowledge as well as retain the learnt one.
The main challenge comes from the "catastrophic forgetting" issue -- the inability to retain previously learnt knowledge while learning new tasks.
arXiv Detail & Related papers (2022-11-10T05:29:43Z)
- Efficient Feature Transformations for Discriminative and Generative Continual Learning [98.10425163678082]
We propose a simple task-specific feature map transformation strategy for continual learning.
These provide powerful flexibility for learning new tasks, achieved with minimal parameters added to the base architecture.
We demonstrate the efficacy and efficiency of our method with an extensive set of experiments in discriminative (CIFAR-100 and ImageNet-1K) and generative sequences of tasks.
arXiv Detail & Related papers (2021-03-25T01:48:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information listed and is not responsible for any consequences of its use.