Few-Shot Class-Incremental Learning from an Open-Set Perspective
- URL: http://arxiv.org/abs/2208.00147v1
- Date: Sat, 30 Jul 2022 05:42:48 GMT
- Title: Few-Shot Class-Incremental Learning from an Open-Set Perspective
- Authors: Can Peng, Kun Zhao, Tianren Wang, Meng Li and Brian C. Lovell
- Abstract summary: We explore the important task of Few-Shot Class-Incremental Learning (FSCIL) and its extreme data scarcity condition of one-shot.
In ALICE, instead of the commonly used cross-entropy loss, we propose to use the angular penalty loss to obtain well-clustered features.
Experiments on benchmark datasets, including CIFAR100, miniImageNet, and CUB200, demonstrate the improved performance of ALICE.
- Score: 10.898784938875702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The continual appearance of new objects in the visual world poses
considerable challenges for current deep learning methods in real-world
deployments. The challenge of new task learning is often exacerbated by the
scarcity of data for the new categories due to rarity or cost. Here we explore
the important task of Few-Shot Class-Incremental Learning (FSCIL) and its
extreme data scarcity condition of one-shot. An ideal FSCIL model needs to
perform well on all classes, regardless of their presentation order or paucity
of data. It also needs to be robust to open-set real-world conditions and be
easily adapted to the new tasks that always arise in the field. In this paper,
we first reevaluate the current task setting and propose a more comprehensive
and practical setting for the FSCIL task. Then, inspired by the similarity of
the goals for FSCIL and modern face recognition systems, we propose our method
-- Augmented Angular Loss Incremental Classification or ALICE. In ALICE,
instead of the commonly used cross-entropy loss, we propose to use the angular
penalty loss to obtain well-clustered features. As the obtained features not
only need to be compactly clustered but also diverse enough to maintain
generalization for future incremental classes, we further discuss how class
augmentation, data augmentation, and data balancing affect classification
performance. Experiments on benchmark datasets, including CIFAR100,
miniImageNet, and CUB200, demonstrate the improved performance of ALICE over
the state-of-the-art FSCIL methods.
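For concreteness, the angular penalty loss the abstract refers to belongs to the cosine-margin family popularized by face recognition (e.g., CosFace/ArcFace). Below is a minimal PyTorch sketch of a CosFace-style member of that family; the scale and margin values, feature dimension, and class count are illustrative assumptions, not the paper's reported settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AngularPenaltyLoss(nn.Module):
    """CosFace-style additive cosine-margin loss (one member of the
    angular penalty family referenced in the abstract)."""

    def __init__(self, feat_dim, num_classes, scale=16.0, margin=0.1):
        super().__init__()
        # Class "proxies": one weight vector per class, normalised in forward().
        self.weight = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale    # s: sharpens the softmax distribution
        self.margin = margin  # m: additive margin on the target cosine

    def forward(self, features, labels):
        # Cosine similarity between L2-normalised features and proxies.
        cos = F.linear(F.normalize(features), F.normalize(self.weight))
        # Subtract the margin only from the target-class logit.
        onehot = F.one_hot(labels, cos.size(1)).float()
        logits = self.scale * (cos - self.margin * onehot)
        return F.cross_entropy(logits, labels)

# Usage: drop-in replacement for nn.CrossEntropyLoss on raw logits.
loss_fn = AngularPenaltyLoss(feat_dim=512, num_classes=60)
feats = torch.randn(8, 512)           # embeddings from a backbone
labels = torch.randint(0, 60, (8,))
loss = loss_fn(feats, labels)
```

The margin pulls same-class features toward a shared proxy, which is the "well-clustered" property the abstract targets for future incremental sessions.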
Related papers
- Adaptive Masking Enhances Visual Grounding [12.793586888511978]
We propose IMAGE, Interpretative MAsking with Gaussian radiation modEling, to enhance vocabulary grounding in low-shot learning scenarios.
We evaluate the efficacy of our approach on benchmark datasets, including COCO and ODinW, demonstrating its superior performance in zero-shot and few-shot tasks.
arXiv Detail & Related papers (2024-10-04T05:48:02Z)
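The IMAGE summary above names Gaussian modeling of masks but gives little mechanistic detail. Purely as an illustration of the general idea, the sketch below attenuates a backbone feature map with a 2D Gaussian weighting; the centre/sigma choices and all names are assumptions, not the paper's design.

```python
import torch

def gaussian_mask(h, w, center, sigma):
    """2D Gaussian weighting map over an h x w feature grid."""
    ys = torch.arange(h).float().unsqueeze(1)   # (h, 1)
    xs = torch.arange(w).float().unsqueeze(0)   # (1, w)
    cy, cx = center
    d2 = (ys - cy) ** 2 + (xs - cx) ** 2        # squared distance to centre
    return torch.exp(-d2 / (2 * sigma ** 2))    # (h, w), peaks at 1

# Soft-mask a feature map instead of hard-cropping it: distant
# locations are attenuated smoothly rather than zeroed out.
feats = torch.randn(1, 256, 14, 14)             # backbone feature map
mask = gaussian_mask(14, 14, center=(7.0, 7.0), sigma=3.0)
masked = feats * mask                           # broadcast over channels
```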
- TACLE: Task and Class-aware Exemplar-free Semi-supervised Class Incremental Learning [16.734025446561695]
We propose a novel TACLE framework to address the problem of exemplar-free semi-supervised class incremental learning.
In this scenario, at each new task, the model has to learn new classes from both labeled and unlabeled data.
In addition to leveraging the capabilities of pre-trained models, TACLE proposes a novel task-adaptive threshold.
arXiv Detail & Related papers (2024-07-10T20:46:35Z)
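TACLE's task-adaptive threshold decides which unlabeled samples to pseudo-label at each task. As a hedged sketch of that general mechanism, the FixMatch-style rule below keeps only confident predictions and raises the bar in later tasks; the specific schedule is an assumption, not TACLE's formula.

```python
import torch
import torch.nn.functional as F

def pseudo_label(logits, threshold):
    """Keep only the unlabeled samples the model is confident about."""
    probs = F.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)          # per-sample confidence
    keep = conf >= threshold                 # task-adaptive cut-off
    return labels[keep], keep

# One possible task-adaptive rule (illustrative): demand higher
# confidence in later tasks, where confusion with old classes grows.
def task_threshold(task_id, base=0.90, step=0.01, cap=0.97):
    return min(base + step * task_id, cap)

logits = torch.randn(32, 10)                 # model outputs on unlabeled data
labels, keep = pseudo_label(logits, task_threshold(task_id=3))
```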
- Learning Prompt with Distribution-Based Feature Replay for Few-Shot Class-Incremental Learning [56.29097276129473]
We propose a simple yet effective framework, named Learning Prompt with Distribution-based Feature Replay (LP-DiF).
To prevent the learnable prompt from forgetting old knowledge in the new session, we propose a pseudo-feature replay approach.
When progressing to a new session, pseudo-features sampled from the old-class distributions are combined with training images of the current session to optimize the prompt.
arXiv Detail & Related papers (2024-01-03T07:59:17Z)
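LP-DiF's pseudo-feature replay is described concretely enough to sketch: keep per-class feature statistics instead of stored images, then sample pseudo-features for old classes while training on the new session. The diagonal-Gaussian choice and all names below are assumptions for illustration.

```python
import torch

class FeatureReplayBuffer:
    """Per-class Gaussian feature statistics for pseudo-feature replay."""

    def __init__(self):
        self.stats = {}  # class id -> (mean, std) over feature dims

    def register(self, class_id, feats):
        # feats: (n, d) embeddings of one old class from a past session.
        self.stats[class_id] = (feats.mean(0), feats.std(0) + 1e-6)

    def sample(self, class_id, n):
        mean, std = self.stats[class_id]
        eps = torch.randn(n, mean.numel())
        return mean + eps * std      # n pseudo-features, no stored images

buffer = FeatureReplayBuffer()
buffer.register(0, torch.randn(50, 512))   # stats from an old session
old_feats = buffer.sample(0, n=16)         # replayed during the new session
```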
- Learning Objective-Specific Active Learning Strategies with Attentive Neural Processes [72.75421975804132]
Learning Active Learning (LAL) suggests learning the active learning strategy itself, allowing it to adapt to the given setting.
We propose a novel LAL method for classification that exploits symmetry and independence properties of the active learning problem.
Our approach is based on learning from a myopic oracle, which gives our model the ability to adapt to non-standard objectives.
arXiv Detail & Related papers (2023-09-11T14:16:37Z)
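The "myopic oracle" mentioned above can be illustrated generically: greedily score each candidate query by the one-step validation gain of acquiring its label. A toy nearest-centroid version follows (illustrative only; the paper learns a neural process to imitate such an oracle).

```python
import numpy as np

def centroid_accuracy(X_tr, y_tr, X_val, y_val):
    """Accuracy of a nearest-class-centroid classifier."""
    classes = np.unique(y_tr)
    cents = np.stack([X_tr[y_tr == c].mean(0) for c in classes])
    d = ((X_val[:, None, :] - cents[None]) ** 2).sum(-1)
    return (classes[d.argmin(1)] == y_val).mean()

def myopic_oracle(X_tr, y_tr, X_pool, y_pool, X_val, y_val):
    """Greedy one-step lookahead: pick the pool point whose label,
    once added, most improves validation accuracy (oracle sees y_pool)."""
    gains = []
    for i in range(len(X_pool)):
        X_new = np.vstack([X_tr, X_pool[i:i + 1]])
        y_new = np.append(y_tr, y_pool[i])
        gains.append(centroid_accuracy(X_new, y_new, X_val, y_val))
    return int(np.argmax(gains))

rng = np.random.default_rng(0)
X_tr, y_tr = rng.normal(size=(10, 5)), rng.integers(0, 2, 10)
X_pool, y_pool = rng.normal(size=(20, 5)), rng.integers(0, 2, 20)
X_val, y_val = rng.normal(size=(30, 5)), rng.integers(0, 2, 30)
best = myopic_oracle(X_tr, y_tr, X_pool, y_pool, X_val, y_val)
```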
- Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations [22.289830907729705]
Online continual learning (OCL) aims to enable model learning from a non-stationary data stream, so as to continuously acquire new knowledge while retaining previously learnt knowledge.
The main challenge comes from the "catastrophic forgetting" issue: the inability to remember learnt knowledge well while learning new knowledge.
arXiv Detail & Related papers (2022-11-10T05:29:43Z)
- Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
arXiv Detail & Related papers (2022-03-31T13:46:41Z)
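LIMIT's key move, meta-training on synthetic multi-phase incremental tasks sampled from the base session, can be sketched as an episode sampler; the split sizes below are illustrative assumptions, not the paper's settings.

```python
import random

def sample_incremental_episode(data_by_class, n_fake_base=20, n_way=5, k_shot=1):
    """Build one fake incremental task from base-session data:
    a large 'pretend base' split plus an N-way K-shot 'pretend new' split."""
    classes = random.sample(list(data_by_class), n_fake_base + n_way)
    fake_base = {c: data_by_class[c] for c in classes[:n_fake_base]}
    fake_new = {c: random.sample(data_by_class[c], k_shot)
                for c in classes[n_fake_base:]}
    return fake_base, fake_new

# Toy data: 60 base classes, 30 samples each (ids stand in for images).
data_by_class = {c: list(range(30)) for c in range(60)}
fake_base, fake_new = sample_incremental_episode(data_by_class)
# Meta-training repeatedly solves such episodes so the model generalizes
# to the real few-shot sessions it will meet after deployment.
```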
- Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updating them based on the new class data, they suffer from catastrophic forgetting: the model cannot discern old class data clearly from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z)
- Few-Shot Incremental Learning with Continually Evolved Classifiers [46.278573301326276]
Few-shot class-incremental learning (FSCIL) aims to design machine learning algorithms that can continually learn new concepts from a few data points.
The difficulty lies in that limited data from new classes not only lead to significant overfitting issues but also exacerbate the notorious catastrophic forgetting problems.
We propose a Continually Evolved Classifier (CEC) that employs a graph model to propagate context information between classifiers for adaptation.
arXiv Detail & Related papers (2021-04-07T10:54:51Z)
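CEC's graph model that propagates context between classifiers can be gestured at with a single attention-style message-passing step over the stacked classifier weight vectors. The similarity graph and residual update below are assumptions, not CEC's exact architecture.

```python
import torch
import torch.nn.functional as F

def adapt_classifiers(W, temperature=0.1):
    """One round of attention-style message passing over classifier
    weights: each class vector is updated with context from the others."""
    Wn = F.normalize(W, dim=1)                          # (num_classes, d)
    attn = F.softmax(Wn @ Wn.t() / temperature, dim=1)  # class-to-class graph
    return W + attn @ W                                 # residual context update

old_and_new = torch.randn(65, 512)        # e.g. 60 base + 5 novel classifiers
adapted = adapt_classifiers(old_and_new)  # jointly adapted weights
```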
- Few-Shot Class-Incremental Learning [68.75462849428196]
We focus on a challenging but practical few-shot class-incremental learning (FSCIL) problem.
FSCIL requires CNN models to incrementally learn new classes from very few labelled samples, without forgetting the previously learned ones.
We represent the knowledge using a neural gas (NG) network, which can learn and preserve the topology of the feature manifold formed by different classes.
arXiv Detail & Related papers (2020-04-23T03:38:33Z)
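The neural gas (NG) network mentioned above has a standard update rule, sketched here; the hyperparameters are illustrative, and the paper's full method additionally maintains edges between nodes and anchors losses on them during incremental sessions.

```python
import numpy as np

def neural_gas_step(nodes, x, lr=0.1, lam=2.0):
    """One classic neural-gas update: every node moves toward the input,
    with step size decaying exponentially in its distance rank."""
    d = np.linalg.norm(nodes - x, axis=1)
    ranks = np.argsort(np.argsort(d))          # 0 = closest node
    h = np.exp(-ranks / lam)                   # rank-based neighbourhood
    return nodes + lr * h[:, None] * (x - nodes)

rng = np.random.default_rng(0)
nodes = rng.normal(size=(30, 64))              # NG nodes in feature space
for x in rng.normal(size=(200, 64)):           # stream of feature vectors
    nodes = neural_gas_step(nodes, x)
# The converged nodes summarise the topology of the feature manifold,
# which the paper uses to anchor old-class knowledge.
```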
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new-classes and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
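"Learning to reshape gradients" can be sketched as a meta-learned elementwise preconditioner applied before the parameter update; the design below is a generic illustration under assumed names, not the paper's exact formulation.

```python
import torch

class GradientReshaper(torch.nn.Module):
    """Meta-learned elementwise gradient modulation: a sketch of the
    general 'learn to reshape gradients' idea."""

    def __init__(self, num_params):
        super().__init__()
        self.log_scale = torch.nn.Parameter(torch.zeros(num_params))

    def forward(self, flat_grad):
        return flat_grad * self.log_scale.exp()   # learned preconditioning

model = torch.nn.Linear(16, 4)
reshaper = GradientReshaper(sum(p.numel() for p in model.parameters()))

# Inner step: compute task gradients, reshape, then apply manually.
loss = model(torch.randn(8, 16)).pow(2).mean()
grads = torch.autograd.grad(loss, model.parameters())
flat = torch.cat([g.reshape(-1) for g in grads])
reshaped = reshaper(flat)
offset = 0
with torch.no_grad():
    for p in model.parameters():
        n = p.numel()
        p -= 0.01 * reshaped[offset:offset + n].view_as(p)
        offset += n
# In meta-training, reshaper.log_scale would itself be optimized so that
# reshaped updates on one task do not degrade performance on others.
```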
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.