Generalized and Incremental Few-Shot Learning by Explicit Learning and
Calibration without Forgetting
- URL: http://arxiv.org/abs/2108.08165v1
- Date: Wed, 18 Aug 2021 14:21:43 GMT
- Title: Generalized and Incremental Few-Shot Learning by Explicit Learning and
Calibration without Forgetting
- Authors: Anna Kukleva, Hilde Kuehne, Bernt Schiele
- Abstract summary: We propose a three-stage framework that allows us to explicitly and effectively address these challenges.
We evaluate the proposed framework on four challenging benchmark datasets for image and video few-shot classification.
- Score: 86.56447683502951
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Both generalized and incremental few-shot learning have to deal with three
major challenges: learning novel classes from only few samples per class,
preventing catastrophic forgetting of base classes, and classifier calibration
across novel and base classes. In this work we propose a three-stage framework
that allows us to explicitly and effectively address these challenges. While the
first phase learns base classes with many samples, the second phase learns a
calibrated classifier for novel classes from few samples while also preventing
catastrophic forgetting. In the final phase, calibration is achieved across all
classes. We evaluate the proposed framework on four challenging benchmark
datasets for image and video few-shot classification and obtain
state-of-the-art results for both generalized and incremental few-shot
learning.
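To make the staged structure concrete, the following is a minimal sketch of such a three-stage pipeline; the class-mean prototype classifier, the Gaussian toy features, and the single-bias calibration are illustrative assumptions, not the paper's actual architecture:
```python
import numpy as np

rng = np.random.default_rng(0)
D, N_BASE, N_NOVEL = 64, 5, 3   # feature dim and class counts (illustrative)

# Stage 1: learn base classes from many samples; here a fixed feature space
# is assumed and each base class is summarized by a class-mean prototype.
base_feats = [rng.normal(c, 1.0, size=(500, D)) for c in range(N_BASE)]
base_protos = np.stack([f.mean(axis=0) for f in base_feats])

# Stage 2: learn novel classes from few samples; base prototypes stay frozen,
# so base knowledge cannot be overwritten (no catastrophic forgetting).
novel_feats = [rng.normal(N_BASE + c, 1.0, size=(5, D)) for c in range(N_NOVEL)]
novel_protos = np.stack([f.mean(axis=0) for f in novel_feats])

# Stage 3: calibrate across ALL classes; here a single scalar bias on the
# novel logits is tuned on a tiny held-out set (a toy stand-in).
def logits(x, bias):
    d_base = -np.linalg.norm(x - base_protos, axis=-1)
    d_novel = -np.linalg.norm(x - novel_protos, axis=-1) + bias
    return np.concatenate([d_base, d_novel])

val = [(rng.normal(0, 1.0, D), 0), (rng.normal(N_BASE, 1.0, D), N_BASE)]

def accuracy(bias):
    return np.mean([np.argmax(logits(x, bias)) == y for x, y in val])

best_bias = max(np.linspace(-2.0, 2.0, 41), key=accuracy)
print("calibration bias:", best_bias)
```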
Related papers
- Covariance-based Space Regularization for Few-shot Class Incremental Learning [25.435192867105552]
Few-shot Class Incremental Learning (FSCIL) requires the model to continually learn new classes with limited labeled data.
Due to the limited data in incremental sessions, models are prone to overfitting the new classes and to catastrophic forgetting of the base classes.
Recent advancements resort to prototype-based approaches to constrain the base class distribution and learn discriminative representations of new classes.
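As a rough illustration of that idea (the loss form below is a generic assumption, not the paper's objective), one could penalize the scatter of features around a class prototype so that the class distribution stays compact:
```python
import numpy as np

def covariance_regularizer(feats, proto):
    """Trace of the empirical covariance of features around a prototype;
    small values mean a compact, well-constrained class distribution."""
    centered = feats - proto                     # (n, d)
    return np.trace(centered.T @ centered) / len(feats)

rng = np.random.default_rng(0)
feats = rng.normal(0.0, 1.0, size=(100, 16))     # toy base-class features
print("reg loss:", covariance_regularizer(feats, feats.mean(axis=0)))
```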
arXiv Detail & Related papers (2024-11-02T08:03:04Z)
- Liberating Seen Classes: Boosting Few-Shot and Zero-Shot Text Classification via Anchor Generation and Classification Reframing [38.84431954053434]
Few-shot and zero-shot text classification aim to recognize samples from novel classes with limited labeled samples or no labeled samples at all.
We propose a simple and effective strategy for few-shot and zero-shot text classification.
arXiv Detail & Related papers (2024-05-06T15:38:32Z)
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445]
We find that existing methods tend to misclassify samples of new classes as base classes, which leads to poor performance on the new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
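One way such a training-free calibration can be sketched is to fuse each new-class prototype with similarity-weighted base prototypes; the fusion rule and hyper-parameters below are illustrative assumptions rather than the paper's exact formula:
```python
import numpy as np

def calibrate_prototype(new_proto, base_protos, alpha=0.5, tau=16.0):
    """Shift a few-shot class prototype toward its most similar base
    prototypes, weighted by a softmax over cosine similarities."""
    n = new_proto / np.linalg.norm(new_proto)
    b = base_protos / np.linalg.norm(base_protos, axis=1, keepdims=True)
    sims = tau * (b @ n)
    w = np.exp(sims - sims.max())
    w /= w.sum()                       # softmax fusion weights
    return alpha * new_proto + (1 - alpha) * (w @ base_protos)

rng = np.random.default_rng(0)
base_protos = rng.normal(size=(10, 32))   # many-shot base class means
new_proto = rng.normal(size=32)           # mean of the few novel samples
print(calibrate_prototype(new_proto, base_protos).shape)   # (32,)
```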
arXiv Detail & Related papers (2023-12-08T18:24:08Z)
- Neural Collapse Terminus: A Unified Solution for Class Incremental Learning and Its Variants [166.916517335816]
In this paper, we offer a unified solution to the misalignment dilemma in the three tasks.
We propose neural collapse terminus that is a fixed structure with the maximal equiangular inter-class separation for the whole label space.
Our method maintains neural collapse optimality in an incremental fashion, regardless of data imbalance or data scarcity.
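A fixed structure with maximal equiangular inter-class separation is typically a simplex equiangular tight frame (ETF); below is a short sketch of constructing one (the ETF reading of the summary, and the dimensions, are our assumptions):
```python
import numpy as np

def simplex_etf(num_classes, dim, seed=0):
    """K classifier vectors in R^dim with maximal equiangular separation:
    every pair has cosine similarity -1/(K-1)."""
    assert dim >= num_classes
    rng = np.random.default_rng(seed)
    # Orthonormal columns U (dim x K), then M = sqrt(K/(K-1)) U (I - 11^T/K).
    u, _ = np.linalg.qr(rng.normal(size=(dim, num_classes)))
    k = num_classes
    m = np.sqrt(k / (k - 1)) * u @ (np.eye(k) - np.ones((k, k)) / k)
    return m   # columns are unit-norm class vectors

W = simplex_etf(num_classes=5, dim=16)
print(np.round(W.T @ W, 3))   # off-diagonals all equal -1/(K-1) = -0.25
```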
arXiv Detail & Related papers (2023-08-03T13:09:59Z)
- S3C: Self-Supervised Stochastic Classifiers for Few-Shot Class-Incremental Learning [22.243176199188238]
Few-shot class-incremental learning (FSCIL) aims to learn progressively about new classes with very few labeled samples, without forgetting the knowledge of already learnt classes.
FSCIL suffers from two major challenges: (i) over-fitting on the new classes due to the limited amount of data, and (ii) catastrophic forgetting of the old classes due to the unavailability of their data in the incremental stages.
arXiv Detail & Related papers (2023-07-05T12:41:46Z)
- Class-Incremental Learning with Strong Pre-trained Models [97.84755144148535]
Class-incremental learning (CIL) has been widely studied under the setting of starting from a small number of classes (base classes).
We explore an understudied real-world setting of CIL that starts with a strong model pre-trained on a large number of base classes.
Our proposed method is robust and generalizes to all analyzed CIL settings.
arXiv Detail & Related papers (2022-04-07T17:58:07Z)
- Generalized Few-Shot Semantic Segmentation: All You Need is Fine-Tuning [35.51193811629467]
Generalized few-shot semantic segmentation was introduced to move beyond only evaluating few-shot segmentation models on novel classes.
While all current approaches are based on meta-learning, they perform poorly and saturate after observing only a few shots.
We propose the first fine-tuning solution and demonstrate that it addresses the saturation problem while achieving state-of-the-art results on two datasets.
arXiv Detail & Related papers (2021-12-21T04:44:57Z)
- Bridging Non Co-occurrence with Unlabeled In-the-wild Data for Incremental Object Detection [56.22467011292147]
Several incremental learning methods have been proposed to mitigate catastrophic forgetting in object detection.
Despite their effectiveness, these methods require co-occurrence of the unlabeled base classes in the training data of the novel classes.
We propose the use of unlabeled in-the-wild data to bridge the non-occurrence caused by the missing base classes during the training of additional novel classes.
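A schematic sketch of that bridging step follows; the detector interface, box-tuple format, and confidence threshold are hypothetical placeholders, not the paper's API:
```python
# Hypothetical interface: `old_detector(img)` returns (x1, y1, x2, y2,
# class_id, score) tuples for the base classes it was trained on.
def pseudo_label(old_detector, wild_images, score_thresh=0.7):
    labeled = []
    for img in wild_images:
        boxes = [b for b in old_detector(img) if b[5] >= score_thresh]
        if boxes:                       # keep only confidently labeled images
            labeled.append((img, boxes))
    return labeled

def build_incremental_trainset(novel_data, old_detector, wild_images):
    # Novel-class images lack base-class boxes; pseudo-labeled wild images
    # restore base/novel co-occurrence in the combined training set.
    return list(novel_data) + pseudo_label(old_detector, wild_images)

# Toy stand-in detector that always reports one confident base-class box.
demo = lambda img: [(0, 0, 10, 10, 0, 0.9)]
print(len(build_incremental_trainset([], demo, ["img_a", "img_b"])))   # 2
```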
arXiv Detail & Related papers (2021-10-28T10:57:25Z)
- Few-shot Action Recognition with Prototype-centered Attentive Learning [88.10852114988829]
The Prototype-centered Attentive Learning (PAL) model is composed of two novel components.
First, a prototype-centered contrastive learning loss is introduced to complement the conventional query-centered learning objective.
Second, PAL integrates an attentive hybrid learning mechanism that can minimize the negative impacts of outliers.
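A rough sketch of what a prototype-centered contrastive term could look like (the paper's exact loss may differ; the temperature and episode sizes are illustrative):
```python
import numpy as np

def prototype_centered_loss(protos, queries, labels, tau=0.1):
    """For each class prototype, take a softmax over ALL queries and pull the
    prototype toward the queries of its own class (the reverse direction of
    the usual query-centered cross-entropy)."""
    sims = protos @ queries.T / tau                      # (C, Q)
    sims -= sims.max(axis=1, keepdims=True)              # numerical stability
    logp = sims - np.log(np.exp(sims).sum(axis=1, keepdims=True))
    per_class = [-logp[c, labels == c].mean() for c in range(len(protos))]
    return float(np.mean(per_class))

rng = np.random.default_rng(0)
queries = rng.normal(size=(12, 8))                       # episode queries
labels = np.repeat(np.arange(3), 4)                      # 3-way episode
protos = np.stack([queries[labels == c].mean(0) for c in range(3)])
print(prototype_centered_loss(protos, queries, labels))
```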
arXiv Detail & Related papers (2021-01-20T11:48:12Z)
- Shot in the Dark: Few-Shot Learning with No Base-Class Labels [32.96824710484196]
We show that off-the-shelf self-supervised learning outperforms transductive few-shot methods by 3.9% for 5-shot accuracy on miniImageNet.
This motivates us to examine more carefully the role of features learned through self-supervision in few-shot learning.
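To make the evaluation protocol concrete, here is a minimal sketch of few-shot classification on frozen features (the nearest-prototype classifier and the toy episode are illustrative assumptions; in the paper's setting the features would come from a self-supervised encoder):
```python
import numpy as np

def few_shot_accuracy(sup_feats, sup_labels, qry_feats, qry_labels):
    """Nearest-prototype classification on frozen (e.g. self-supervised) features."""
    classes = np.unique(sup_labels)
    protos = np.stack([sup_feats[sup_labels == c].mean(0) for c in classes])
    d = np.linalg.norm(qry_feats[:, None] - protos[None], axis=-1)   # (Q, C)
    preds = classes[d.argmin(axis=1)]
    return (preds == qry_labels).mean()

rng = np.random.default_rng(0)
# Toy 5-way 5-shot episode with 3 queries per class.
sup_y, qry_y = np.repeat(np.arange(5), 5), np.repeat(np.arange(5), 3)
sup = rng.normal(sup_y[:, None], 1.0, size=(25, 16))
qry = rng.normal(qry_y[:, None], 1.0, size=(15, 16))
print(few_shot_accuracy(sup, sup_y, qry, qry_y))
```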
arXiv Detail & Related papers (2020-10-06T02:05:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.