Does Prior Data Matter? Exploring Joint Training in the Context of Few-Shot Class-Incremental Learning
- URL: http://arxiv.org/abs/2503.10003v2
- Date: Mon, 18 Aug 2025 13:19:46 GMT
- Title: Does Prior Data Matter? Exploring Joint Training in the Context of Few-Shot Class-Incremental Learning
- Authors: Shiwon Kim, Dongjun Hwang, Sungwon Woo, Rita Singh
- Abstract summary: Class-incremental learning (CIL) aims to adapt to continuously emerging new classes while preserving knowledge of previously learned ones. Few-shot class-incremental learning (FSCIL) presents a greater challenge that requires the model to learn new classes from only a limited number of samples per class.
- Score: 9.682677147166391
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Class-incremental learning (CIL) aims to adapt to continuously emerging new classes while preserving knowledge of previously learned ones. Few-shot class-incremental learning (FSCIL) presents a greater challenge that requires the model to learn new classes from only a limited number of samples per class. While incremental learning typically assumes restricted access to past data, such data often remains available in many real-world scenarios. This raises a practical question: should one retrain the model on the full dataset (i.e., joint training), or continue updating it solely with new data? In CIL, joint training is considered an ideal benchmark that provides a reference for evaluating the trade-offs between performance and computational cost. However, in FSCIL, joint training becomes less reliable due to severe imbalance between base and incremental classes. This results in the absence of a practical baseline, making it unclear which strategy is preferable for practitioners. To this end, we revisit joint training in the context of FSCIL by incorporating imbalance mitigation techniques, and suggest a new imbalance-aware joint training benchmark for FSCIL. We then conduct extensive comparisons between this benchmark and FSCIL methods to analyze which approach is most suitable when prior data is accessible. Our analysis offers realistic insights and guidance for selecting training strategies in real-world FSCIL scenarios. Code is available at: https://github.com/shiwonkim/Joint_FSCIL
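Concretely, the imbalance-aware joint training described in the abstract can be pictured as retraining on the merged base and few-shot data with a reweighted loss. The sketch below uses class-balanced cross-entropy (weights from the effective number of samples) as one representative mitigation technique; the paper's actual choices may differ, and all names and sizes here are illustrative.

```python
import torch
import torch.nn as nn

def class_balanced_weights(samples_per_class, beta=0.999):
    """Weight each class by the inverse 'effective number' of its samples
    (Cui et al., 2019), so few-shot classes are not drowned out."""
    counts = torch.as_tensor(samples_per_class, dtype=torch.float)
    effective_num = (1.0 - beta ** counts) / (1.0 - beta)
    weights = 1.0 / effective_num
    return weights / weights.sum() * len(counts)  # normalize to mean 1

def joint_train_step(model, optimizer, x, y, weights):
    """One step of joint training over the merged base + incremental data."""
    criterion = nn.CrossEntropyLoss(weight=weights)
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    return loss.item()

# Hypothetical setup: 5 base classes with 500 samples each, 3 incremental
# classes with 5 shots each -- the severe imbalance the paper highlights.
counts = [500] * 5 + [5] * 3
weights = class_balanced_weights(counts)
model = nn.Linear(64, len(counts))  # stand-in for a real backbone
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(32, 64), torch.randint(0, len(counts), (32,))
print(joint_train_step(model, opt, x, y, weights))
```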
Related papers
- SPaRFT: Self-Paced Reinforcement Fine-Tuning for Large Language Models [51.74498855100541]
Large language models (LLMs) have shown strong reasoning capabilities when fine-tuned with reinforcement learning (RL).
We propose SPaRFT, a self-paced learning framework that enables efficient learning based on the capability of the model being trained.
arXiv Detail & Related papers (2025-08-07T03:50:48Z) - CALA: A Class-Aware Logit Adapter for Few-Shot Class-Incremental Learning [9.963958892953876]
We propose a lightweight adapter that learns to rectify biased predictions through a pseudo-incremental learning paradigm.
arXiv Detail & Related papers (2024-12-17T08:21:46Z) - Covariance-based Space Regularization for Few-shot Class Incremental Learning [25.435192867105552]
Few-shot Class Incremental Learning (FSCIL) requires the model to continually learn new classes with limited labeled data.
Due to the limited data in incremental sessions, models are prone to overfitting new classes and suffering catastrophic forgetting of base classes.
Recent advancements resort to prototype-based approaches to constrain the base class distribution and learn discriminative representations of new classes.
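For context, the prototype-based approaches mentioned above typically represent each class by the mean of its feature embeddings and classify queries by nearest prototype. Below is a minimal sketch of that generic building block over frozen features; it illustrates the common mechanism, not this paper's specific regularizer.

```python
import torch
import torch.nn.functional as F

def build_prototypes(features, labels, num_classes):
    """Average the embeddings of each class into one prototype vector."""
    protos = torch.stack([features[labels == c].mean(dim=0)
                          for c in range(num_classes)])
    return F.normalize(protos, dim=1)

def classify(query_features, prototypes):
    """Assign each query to the class with the most cosine-similar prototype."""
    q = F.normalize(query_features, dim=1)
    return (q @ prototypes.t()).argmax(dim=1)

feats = torch.randn(50, 128)            # embeddings from a frozen backbone
labels = torch.arange(10).repeat(5)     # 5 samples per class, 10 classes
protos = build_prototypes(feats, labels, 10)
print(classify(torch.randn(4, 128), protos))
```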
arXiv Detail & Related papers (2024-11-02T08:03:04Z) - CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning [52.63674911541416]
Few-shot class-incremental learning (FSCIL) faces several challenges, such as overfitting and forgetting.
Our primary focus is representation learning on base classes to tackle the unique challenge of FSCIL.
We find that trying to secure the spread of features within a more confined feature space enables the learned representation to strike a better balance between transferability and discriminability.
arXiv Detail & Related papers (2024-10-08T02:23:16Z) - Few-shot Tuning of Foundation Models for Class-incremental Learning [19.165004570789755]
We propose CoACT, a new approach to continually tune foundation models for new classes in few-shot settings.
CoACT shows up to 13.5% improvement in standard FSCIL over the current SOTA on benchmark evaluations.
arXiv Detail & Related papers (2024-05-26T16:41:03Z) - MetaCoCo: A New Few-Shot Classification Benchmark with Spurious Correlation [46.50551811108464]
We present a benchmark with spurious-correlation shifts collected from real-world scenarios.
We also propose a metric based on CLIP, a pre-trained vision-language model.
The experimental results show that the performance of the existing methods degrades significantly in the presence of spurious-correlation shifts.
arXiv Detail & Related papers (2024-04-30T15:45:30Z) - F-OAL: Forward-only Online Analytic Learning with Fast Training and Low Memory Footprint in Class Incremental Learning [28.772554281694166]
Online Class Incremental Learning (OCIL) aims to train models incrementally, where data arrive in mini-batches, and previous data are not accessible.
A major challenge in OCIL is Catastrophic Forgetting, i.e., the loss of previously learned knowledge.
We propose an exemplar-free approach, Forward-only Online Analytic Learning (F-OAL).
Unlike traditional methods, F-OAL does not rely on back-propagation and is forward-only, significantly reducing memory usage and computational time.
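As a rough illustration of the forward-only idea: analytic learning solves the classifier in closed form (regularized least squares on frozen features) and only accumulates two running matrices per incoming batch, with no back-propagation. The sketch below shows that generic recipe; F-OAL's exact formulation may differ.

```python
import torch

class AnalyticClassifier:
    def __init__(self, feat_dim, num_classes, reg=1.0):
        self.A = reg * torch.eye(feat_dim)            # running X^T X + reg*I
        self.B = torch.zeros(feat_dim, num_classes)   # running X^T Y

    def update(self, feats, labels):
        """Forward-only update: accumulate statistics, no gradients needed."""
        Y = torch.nn.functional.one_hot(labels, self.B.shape[1]).float()
        self.A += feats.t() @ feats
        self.B += feats.t() @ Y
        self.W = torch.linalg.solve(self.A, self.B)   # closed-form weights

    def predict(self, feats):
        return (feats @ self.W).argmax(dim=1)

clf = AnalyticClassifier(feat_dim=64, num_classes=10)
for _ in range(3):                                    # mini-batches arriving online
    x, y = torch.randn(32, 64), torch.randint(0, 10, (32,))
    clf.update(x, y)
print(clf.predict(torch.randn(5, 64)))
```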
arXiv Detail & Related papers (2024-03-23T07:39:13Z) - Controllable Relation Disentanglement for Few-Shot Class-Incremental Learning [82.79371269942146]
We propose to tackle Few-Shot Class-Incremental Learning (FSCIL) from a new perspective, i.e., relation disentanglement.
The challenge of disentangling spurious correlations lies in the poor controllability of FSCIL.
We propose a new simple yet effective method, called ConTrollable Relation-disentangled Few-Shot Class-Incremental Learning (CTRL-FSCIL).
arXiv Detail & Related papers (2024-03-17T03:16:59Z) - Rethinking Classifier Re-Training in Long-Tailed Recognition: A Simple
Logits Retargeting Approach [102.0769560460338]
We develop a simple logits retargeting approach (LORT) that requires no prior knowledge of the number of samples per class.
Our method achieves state-of-the-art performance on various imbalanced datasets, including CIFAR100-LT, ImageNet-LT, and iNaturalist 2018.
arXiv Detail & Related papers (2024-03-01T03:27:08Z) - Few-Shot Class-Incremental Learning with Prior Knowledge [94.95569068211195]
We propose Learning with Prior Knowledge (LwPK) to enhance the generalization ability of the pre-trained model.
Experimental results indicate that LwPK effectively enhances the model resilience against catastrophic forgetting.
arXiv Detail & Related papers (2024-02-02T08:05:35Z) - Bias Mitigating Few-Shot Class-Incremental Learning [17.185744533050116]
Few-shot class-incremental learning aims at recognizing novel classes continually with limited novel class samples.
Recent methods somewhat alleviate the accuracy imbalance between base and incremental classes by fine-tuning the feature extractor in the incremental sessions.
We propose a novel method to mitigate model bias of the FSCIL problem during training and inference processes.
arXiv Detail & Related papers (2024-02-01T10:37:41Z) - Enhancing Consistency and Mitigating Bias: A Data Replay Approach for Incremental Learning [93.90047628101155]
Deep learning systems are prone to catastrophic forgetting when learning from a sequence of tasks.
To address this, some methods propose replaying data from previous tasks during new task learning.
However, this is often impractical due to memory constraints and data privacy issues.
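A minimal sketch of the replay mechanism this summary refers to, using reservoir sampling to keep a small uniform buffer of past examples that can be mixed into each new-task batch; sizes and names here are illustrative.

```python
import random
import torch

class ReplayBuffer:
    def __init__(self, capacity):
        self.capacity, self.data, self.seen = capacity, [], 0

    def add(self, x, y):
        """Reservoir sampling: keeps a uniform sample of everything seen."""
        self.seen += 1
        if len(self.data) < self.capacity:
            self.data.append((x, y))
        else:
            j = random.randrange(self.seen)
            if j < self.capacity:
                self.data[j] = (x, y)

    def sample(self, k):
        batch = random.sample(self.data, min(k, len(self.data)))
        xs, ys = zip(*batch)
        return torch.stack(xs), torch.stack(ys)

buf = ReplayBuffer(capacity=200)
for _ in range(1000):                          # stream from earlier tasks
    buf.add(torch.randn(8), torch.randint(0, 5, ()))
x_old, y_old = buf.sample(16)                  # replayed alongside new-task data
print(x_old.shape, y_old.shape)
```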
arXiv Detail & Related papers (2024-01-12T12:51:12Z) - Relaxed Contrastive Learning for Federated Learning [48.96253206661268]
We propose a novel contrastive learning framework to address the challenges of data heterogeneity in federated learning.
Our framework outperforms all existing federated learning approaches by huge margins on the standard benchmarks.
arXiv Detail & Related papers (2024-01-10T04:55:24Z) - Architecture, Dataset and Model-Scale Agnostic Data-free Meta-Learning [117.48444197402858]
We propose ePisode cUrriculum inveRsion (ECI) during data-free meta training and invErsion calibRation following inner loop (ICFIL) during meta testing.
ECI adaptively increases the difficulty level of pseudo episodes according to the real-time feedback of the meta model.
We formulate the optimization process of meta training with ECI as an adversarial form in an end-to-end manner.
arXiv Detail & Related papers (2023-03-20T15:10:41Z) - Margin-Based Few-Shot Class-Incremental Learning with Class-Level Overfitting Mitigation [19.975435754433754]
Few-shot class-incremental learning (FSCIL) is designed to incrementally recognize novel classes with only few training samples.
A well-known modification to the base-class training is to apply a margin to the base-class classification.
We propose a novel margin-based FSCIL method to mitigate the class-level overfitting (CO) problem by providing the pattern learning process with an extra constraint from the margin-based patterns themselves.
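To illustrate the margin modification described above, a CosFace-style additive margin subtracts a fixed amount from the ground-truth cosine logit during base-class training. The paper's own margin-based constraint is more elaborate; this sketch only shows the baseline idea, with illustrative names and sizes.

```python
import torch
import torch.nn.functional as F

def margin_logits(features, weight, labels, margin=0.1, scale=16.0):
    """Cosine logits with the ground-truth logit penalized by `margin`."""
    cos = F.normalize(features, dim=1) @ F.normalize(weight, dim=1).t()
    onehot = F.one_hot(labels, weight.shape[0]).float()
    return scale * (cos - margin * onehot)

feats = torch.randn(32, 128, requires_grad=True)
W = torch.randn(60, 128, requires_grad=True)   # 60 base classes
y = torch.randint(0, 60, (32,))
loss = F.cross_entropy(margin_logits(feats, W, y), y)
loss.backward()
print(loss.item())
```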
arXiv Detail & Related papers (2022-10-10T09:45:53Z) - You Only Need End-to-End Training for Long-Tailed Recognition [8.789819609485225]
Cross-entropy loss tends to produce highly correlated features on imbalanced data.
We propose two novel modules, Block-based Relatively Balanced Batch Sampler (B3RS) and Batch Embedded Training (BET).
Experimental results on the long-tailed classification benchmarks, CIFAR-LT and ImageNet-LT, demonstrate the effectiveness of our method.
arXiv Detail & Related papers (2021-12-11T11:44:09Z) - Few-Shot Incremental Learning with Continually Evolved Classifiers [46.278573301326276]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points.
The difficulty lies in that limited data from new classes not only lead to significant overfitting issues but also exacerbate the notorious catastrophic forgetting problems.
We propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
arXiv Detail & Related papers (2021-04-07T10:54:51Z) - Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
We propose a Prototype-centered Attentive Learning (PAL) model composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
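One plausible form of a prototype-centered contrastive loss anchors on each class prototype and scores its own queries against all others (InfoNCE read from the prototype side). The sketch below illustrates the idea, not PAL's exact objective; all names are hypothetical.

```python
import torch
import torch.nn.functional as F

def prototype_centered_loss(protos, queries, query_labels, tau=0.1):
    """InfoNCE with prototypes as anchors: each prototype should rank its own
    class's queries above queries from other classes."""
    p = F.normalize(protos, dim=1)
    q = F.normalize(queries, dim=1)
    sim = p @ q.t() / tau                     # [num_classes, num_queries]
    log_prob = F.log_softmax(sim, dim=1)      # softmax over queries per prototype
    mask = F.one_hot(query_labels, protos.shape[0]).t().float()
    # average log-likelihood of each prototype's own queries
    per_proto = (log_prob * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1)
    return -per_proto.mean()

protos = torch.randn(5, 64, requires_grad=True)   # one prototype per class
queries = torch.randn(25, 64)
labels = torch.arange(5).repeat(5)                # 5 queries per class
print(prototype_centered_loss(protos, queries, labels).item())
```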
arXiv Detail & Related papers (2021-01-20T11:48:12Z) - Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
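The residual scheme can be sketched in a few lines: each client predicts with the shared server-side model plus a personalized local correction. The architecture and names below are illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

shared = nn.Linear(16, 4)   # trained jointly across clients (server side)
local = nn.Linear(16, 4)    # trained only on this client's data

def predict(x):
    # joint prediction = shared model output + personalized residual
    return shared(x) + local(x)

x = torch.randn(2, 16)
print(predict(x).shape)
```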
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.