OVOR: OnePrompt with Virtual Outlier Regularization for Rehearsal-Free
Class-Incremental Learning
- URL: http://arxiv.org/abs/2402.04129v1
- Date: Tue, 6 Feb 2024 16:31:11 GMT
- Title: OVOR: OnePrompt with Virtual Outlier Regularization for Rehearsal-Free
Class-Incremental Learning
- Authors: Wei-Cheng Huang, Chun-Fu Chen, Hsiang Hsu
- Abstract summary: We propose a regularization method based on virtual outliers to tighten decision boundaries of the classifier.
A simplified prompt-based method can achieve results comparable to previous state-of-the-art (SOTA) methods equipped with a prompt pool.
- Score: 10.299813904573695
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent works have shown that by using large pre-trained models along with
learnable prompts, rehearsal-free methods for class-incremental learning (CIL)
settings can achieve performance superior to prominent rehearsal-based ones.
Rehearsal-free CIL methods struggle to distinguish classes from different
tasks, as those classes are never trained together. In this work we propose a
regularization method based on virtual outliers to tighten the decision
boundaries of the classifier, so that confusion of classes among different
tasks is mitigated. Recent prompt-based methods often require a pool of
task-specific prompts to prevent overwriting knowledge of previous tasks with
that of the new task, which leads to extra computation in querying and
composing an appropriate prompt from the pool. As we reveal in this paper,
this additional cost can be eliminated without sacrificing accuracy. We
illustrate that a simplified prompt-based method can achieve results
comparable to previous state-of-the-art (SOTA) methods equipped with a prompt
pool, using far fewer learnable parameters and a lower inference cost. Our
regularization method is compatible with different prompt-based methods, and
boosts the accuracy of previous SOTA rehearsal-free CIL methods on the
ImageNet-R and CIFAR-100 benchmarks. Our source code is available at
https://github.com/jpmorganchase/ovor.
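The abstract describes virtual-outlier regularization only at a high level. As
a rough illustration, below is a minimal PyTorch-style sketch of one way such a
regularizer can be realized in feature space: fit a Gaussian to each class's
features, keep low-likelihood draws from it as virtual outliers, and penalize
confident predictions on them. The function names, the sampling scheme, the
entropy penalty, and all hyperparameters are illustrative assumptions, not the
paper's actual implementation.

    import torch
    import torch.nn.functional as F

    def sample_virtual_outliers(feats, labels, num_classes,
                                num_candidates=200, quantile=0.05):
        """Fit a Gaussian to each class's features and keep only the
        lowest-likelihood draws from it as virtual outliers."""
        outliers = []
        for c in range(num_classes):
            fc = feats[labels == c]
            if fc.shape[0] < 2:
                continue
            mean = fc.mean(dim=0)
            cov = torch.cov(fc.T) + 1e-3 * torch.eye(fc.shape[1], device=fc.device)
            dist = torch.distributions.MultivariateNormal(mean, covariance_matrix=cov)
            cand = dist.sample((num_candidates,))
            k = max(1, int(quantile * num_candidates))
            _, idx = torch.topk(dist.log_prob(cand), k, largest=False)  # least likely
            outliers.append(cand[idx])
        return torch.cat(outliers, dim=0)

    def virtual_outlier_loss(classifier, outliers):
        """Penalize confident predictions on virtual outliers by pushing
        their posterior toward uniform (entropy maximization)."""
        probs = classifier(outliers).softmax(dim=-1)
        entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=-1).mean()
        return -entropy  # minimizing this maximizes entropy on the outliers

    # Illustrative combined objective, with lam a regularization weight:
    # loss = F.cross_entropy(classifier(feats), labels) \
    #        + lam * virtual_outlier_loss(classifier, outliers)

Since the outliers lie near class boundaries but in low-density regions,
discouraging confident predictions there tightens each class's decision
boundary, which is the effect the abstract describes.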
Related papers
- Consistent Prompting for Rehearsal-Free Continual Learning [5.166083532861163]
Continual learning empowers models to adapt autonomously to ever-changing environments or data streams without forgetting old knowledge.
Existing prompt-based methods are inconsistent between training and testing, limiting their effectiveness.
We propose a novel prompt-based method, Consistent Prompting (CPrompt), for more aligned training and testing.
arXiv Detail & Related papers (2024-03-13T14:24:09Z)
- Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that does not require prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z)
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP.
arXiv Detail & Related papers (2024-02-06T15:45:27Z)
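Since GDA is a classical closed-form algorithm, a generic sketch of the
baseline described above is easy to give: fit class means and a shared
covariance over precomputed (e.g., frozen CLIP) image features, then classify
with the resulting linear rule. How the paper integrates this with CLIP's
zero-shot classifier is not captured here, and all names are illustrative.

    import numpy as np

    def fit_gda(feats, labels, num_classes, eps=1e-4):
        """Gaussian Discriminant Analysis with a shared covariance matrix
        over precomputed (N, D) features; returns linear weights and biases."""
        n, d = feats.shape
        means = np.stack([feats[labels == c].mean(axis=0)
                          for c in range(num_classes)])
        centered = feats - means[labels]          # subtract each class's mean
        cov = centered.T @ centered / n + eps * np.eye(d)
        precision = np.linalg.inv(cov)
        W = means @ precision                     # (C, D) weights
        priors = np.bincount(labels, minlength=num_classes) / n
        b = np.log(priors + 1e-12) - 0.5 * np.einsum('cd,cd->c', W, means)
        return W, b

    def gda_predict(W, b, feats):
        """Under a shared covariance, class scores are linear in the features."""
        return (feats @ W.T + b).argmax(axis=1)

Because no gradient steps are involved, a baseline of this kind is
training-free in the sense the title suggests: it needs only one pass over the
features to estimate the means and covariance.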
- Understanding prompt engineering may not require rethinking generalization [56.38207873589642]
We show that the discrete nature of prompts, combined with a PAC-Bayes prior given by a language model, results in generalization bounds that are remarkably tight by the standards of the literature.
This work provides a possible justification for the widespread practice of prompt engineering.
arXiv Detail & Related papers (2023-10-06T00:52:48Z)
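The tightness claim above can be made concrete with the standard bound for a
discrete hypothesis, here a prompt p with prior probability P(p) assigned by a
language model. The exact statement in the paper may differ; this is the
generic form the summary alludes to.

    % With probability at least 1 - \delta over an i.i.d. sample of size m,
    % the population risk L of the classifier induced by prompt p satisfies
    \[
      L(p) \;\le\; \widehat{L}(p)
        + \sqrt{\frac{\log \frac{1}{P(p)} + \log \frac{1}{\delta}}{2m}}
    \]
    % A prompt that the language-model prior finds likely (large P(p)) has a
    % small complexity term, which is why the resulting bounds can be tight.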
- When Prompt-based Incremental Learning Does Not Meet Strong Pretraining [36.0889029038102]
In this work, we develop a learnable Adaptive Prompt Generator (APG).
The key is to unify the prompt retrieval and prompt learning processes into a learnable prompt generator.
Our method significantly outperforms advanced methods in exemplar-free incremental learning without (strong) pretraining.
arXiv Detail & Related papers (2023-08-21T03:33:21Z)
- Self-regulating Prompts: Foundational Model Adaptation without Forgetting [112.66832145320434]
We introduce a self-regularization framework for prompting called PromptSRC.
PromptSRC guides the prompts to optimize for both task-specific and task-agnostic general representations.
arXiv Detail & Related papers (2023-07-13T17:59:35Z)
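To make the self-regularization idea above concrete: prompted features and
predictions can be anchored to those of the frozen pretrained model, so that
task-specific prompts do not drift away from the general representation. The
sketch below is in the spirit of the summary, with illustrative names and loss
choices rather than PromptSRC's exact formulation.

    import torch.nn.functional as F

    def self_regularization_loss(prompted_feats, frozen_feats,
                                 prompted_logits, frozen_logits):
        """Anchor prompted representations to the frozen pretrained model:
        an L1 term on features plus a KL term on the predictions."""
        feat_term = F.l1_loss(prompted_feats, frozen_feats)
        logit_term = F.kl_div(F.log_softmax(prompted_logits, dim=-1),
                              F.softmax(frozen_logits, dim=-1),
                              reduction='batchmean')
        return feat_term + logit_term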
- Streaming LifeLong Learning With Any-Time Inference [36.3326483579511]
We propose a novel lifelong learning approach that is streaming (a single input sample arrives at each time step), single-pass, class-incremental, and can be evaluated at any moment.
We additionally propose an implicit regularizer in the form of snap-shot self-distillation, which further reduces forgetting (a sketch follows this entry).
Our empirical evaluations and ablations demonstrate that the proposed method outperforms the prior works by large margins.
arXiv Detail & Related papers (2023-01-27T18:09:19Z)
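The snap-shot self-distillation regularizer mentioned in the entry above
admits a compact sketch: the current model's predictions are pulled toward
those of a frozen earlier copy of itself. The temperature, loss choice, and
snapshot schedule below are illustrative assumptions, not the paper's exact
formulation.

    import copy
    import torch
    import torch.nn.functional as F

    def snapshot_distillation_loss(model, snapshot, x, T=2.0):
        """Distill the current model toward a frozen earlier snapshot of
        itself; the KL term implicitly regularizes against forgetting."""
        with torch.no_grad():
            teacher = snapshot(x)
        student = model(x)
        return F.kl_div(F.log_softmax(student / T, dim=-1),
                        F.softmax(teacher / T, dim=-1),
                        reduction='batchmean') * (T * T)

    # The snapshot can be refreshed periodically along the stream:
    # snapshot = copy.deepcopy(model).eval()
    # for p in snapshot.parameters():
    #     p.requires_grad_(False)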
- TEMPERA: Test-Time Prompting via Reinforcement Learning [57.48657629588436]
We propose Test-time Prompt Editing using Reinforcement learning (TEMPERA).
In contrast to prior prompt generation methods, TEMPERA can efficiently leverage prior knowledge.
Our method achieves an average 5.33x improvement in sample efficiency compared to traditional fine-tuning methods.
arXiv Detail & Related papers (2022-11-21T22:38:20Z)
- Revisiting Deep Local Descriptor for Improved Few-Shot Classification [56.74552164206737]
We show how one can improve the quality of embeddings by leveraging Dense Classification and Attentive Pooling.
We suggest pooling feature maps with attentive pooling instead of the widely used global average pooling (GAP) to prepare embeddings for few-shot classification (a sketch follows this entry).
arXiv Detail & Related papers (2021-03-30T00:48:28Z)
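To make the contrast with global average pooling concrete, below is a minimal
single-head sketch of attentive pooling over a convolutional feature map. The
1x1-convolution scoring head is an illustrative choice, not necessarily the
paper's formulation.

    import torch
    import torch.nn as nn

    class AttentivePooling(nn.Module):
        """Replace global average pooling with a learned, content-dependent
        weighting of spatial locations."""
        def __init__(self, channels):
            super().__init__()
            self.score = nn.Conv2d(channels, 1, kernel_size=1)  # per-location score

        def forward(self, fmap):                    # fmap: (B, C, H, W)
            b, c, h, w = fmap.shape
            attn = self.score(fmap).view(b, 1, h * w).softmax(dim=-1)
            feats = fmap.view(b, c, h * w)
            return (feats * attn).sum(dim=-1)       # (B, C); GAP would be fmap.mean((2, 3))

Whereas GAP weights every spatial location equally, the learned attention
scores let discriminative regions dominate the pooled embedding, which matters
when only a few support examples are available.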
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.