Slightly Shift New Classes to Remember Old Classes for Video Class-Incremental Learning
- URL: http://arxiv.org/abs/2404.00901v1
- Date: Mon, 1 Apr 2024 03:58:51 GMT
- Title: Slightly Shift New Classes to Remember Old Classes for Video Class-Incremental Learning
- Authors: Jian Jiao, Yu Dai, Hefei Mei, Heqian Qiu, Chuanyang Gong, Shiyuan Tang, Xinpeng Hao, Hongliang Li
- Abstract summary: We propose SNRO, which slightly shifts the features of new classes to remember old classes.
ES decimates at a lower sample rate to build memory sets and uses interpolation to align those sparse frames in the future.
EB terminates the training at a small epoch, preventing the model from overstretching into the high-semantic space of the current task.
- Score: 14.199974986278438
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent video class-incremental learning usually excessively pursues the accuracy of the newly seen classes and relies on memory sets to mitigate catastrophic forgetting of the old classes. However, limited storage only allows storing a few representative videos. So we propose SNRO, which slightly shifts the features of new classes to remember old classes. Specifically, SNRO contains Examples Sparse (ES) and Early Break (EB). ES decimates at a lower sample rate to build memory sets and uses interpolation to align those sparse frames in the future. In this way, SNRO stores more examples under the same memory consumption and forces the model to focus on low-semantic features which are harder to forget. EB terminates the training at a small epoch, preventing the model from overstretching into the high-semantic space of the current task. Experiments on UCF101, HMDB51, and UESTC-MMEA-CL datasets show that SNRO performs better than other approaches under the same memory consumption.
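A minimal sketch of the two components as the abstract describes them (not the authors' code; the frame counts, epoch budget, and the choice of linear interpolation are illustrative assumptions): Examples Sparse stores exemplar videos at a lower frame rate and interpolates them back to the training clip length, while Early Break stops training on each incremental task after only a few epochs.

```python
import numpy as np

def sparse_sample(video: np.ndarray, keep: int = 4) -> np.ndarray:
    """Examples Sparse (ES): keep only `keep` evenly spaced frames of an
    exemplar video so more exemplar videos fit in the same memory budget.
    `video` has shape (T, H, W, C)."""
    idx = np.linspace(0, len(video) - 1, keep).round().astype(int)
    return video[idx]

def interpolate_frames(sparse: np.ndarray, target_len: int = 16) -> np.ndarray:
    """Linearly interpolate the stored sparse frames back to `target_len`
    frames so exemplars align with the clip length used during training."""
    pos = np.linspace(0, len(sparse) - 1, target_len)
    lo = np.floor(pos).astype(int)
    hi = np.minimum(lo + 1, len(sparse) - 1)
    w = (pos - lo)[:, None, None, None]
    return (1 - w) * sparse[lo] + w * sparse[hi]

def train_incremental_task(model, loader, step_fn, early_break_epoch: int = 10):
    """Early Break (EB): stop after a small number of epochs instead of
    training to convergence on the current task."""
    for epoch in range(early_break_epoch):
        for batch in loader:
            step_fn(model, batch)

# Usage: a 32-frame video stored as 4 frames, later expanded back to 16.
video = np.random.rand(32, 112, 112, 3)
stored = sparse_sample(video, keep=4)             # what goes into the memory set
clip = interpolate_frames(stored, target_len=16)  # what the model is trained on
```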
Related papers
- PASS++: A Dual Bias Reduction Framework for Non-Exemplar Class-Incremental Learning [49.240408681098906]
Class-incremental learning (CIL) aims to recognize new classes incrementally while maintaining the discriminability of old classes.
Most existing CIL methods are exemplar-based, i.e., storing a part of old data for retraining.
We present a simple and novel dual bias reduction framework that employs self-supervised transformation (SST) in input space and prototype augmentation (protoAug) in deep feature space.
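For readers unfamiliar with prototype augmentation, here is a minimal sketch of the general protoAug idea (a hedged illustration, not the paper's code): one feature prototype is kept per old class and perturbed with Gaussian noise to generate pseudo-features when training on new classes, so no old images need to be stored.

```python
import numpy as np

rng = np.random.default_rng(0)

def class_prototypes(feats: np.ndarray, labels: np.ndarray) -> dict:
    """Keep one mean feature vector (prototype) per class instead of raw exemplars."""
    return {int(c): feats[labels == c].mean(axis=0) for c in np.unique(labels)}

def proto_aug(prototypes: dict, n_per_class: int = 8, sigma: float = 0.1):
    """Generate pseudo-features for old classes by adding Gaussian noise around
    each stored prototype; these are fed to the classifier alongside real
    new-class features to reduce the bias toward new classes."""
    feats, labels = [], []
    for c, proto in prototypes.items():
        feats.append(proto + sigma * rng.standard_normal((n_per_class, proto.size)))
        labels.append(np.full(n_per_class, c))
    return np.concatenate(feats), np.concatenate(labels)

# Usage with toy 64-d features of 5 old classes.
old_feats = rng.standard_normal((100, 64))
old_labels = rng.integers(0, 5, size=100)
pseudo_feats, pseudo_labels = proto_aug(class_prototypes(old_feats, old_labels))
```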
arXiv Detail & Related papers (2024-07-19T05:03:16Z)
- Expandable Subspace Ensemble for Pre-Trained Model-Based Class-Incremental Learning [65.57123249246358]
We propose ExpAndable Subspace Ensemble (EASE) for PTM-based CIL.
We train a distinct lightweight adapter module for each new task, aiming to create task-specific subspaces.
Our prototype complement strategy synthesizes old classes' new features without using any old class instance.
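A hedged sketch of the pattern this summary points at: a frozen pre-trained backbone extended with one lightweight adapter per task, so each task gets its own subspace. The bottleneck design and the concatenation of subspace features are assumptions for illustration, not EASE's exact architecture.

```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Lightweight bottleneck adapter trained for a single incremental task."""
    def __init__(self, dim: int = 768, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)
        self.up = nn.Linear(bottleneck, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(torch.relu(self.down(x)))   # residual adaptation

class SubspaceEnsemble(nn.Module):
    """Frozen pre-trained encoder plus one adapter (subspace) per seen task."""
    def __init__(self, encoder: nn.Module, dim: int = 768):
        super().__init__()
        self.encoder = encoder.eval()
        for p in self.encoder.parameters():
            p.requires_grad_(False)                     # keep the PTM frozen
        self.dim = dim
        self.adapters = nn.ModuleList()

    def add_task(self) -> None:
        self.adapters.append(Adapter(self.dim))         # new task-specific subspace

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)
        return torch.cat([a(z) for a in self.adapters], dim=-1)
```

A classifier over the concatenated subspace features (plus the prototype-complement step mentioned above) would sit on top of this.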
arXiv Detail & Related papers (2024-03-18T17:58:13Z)
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
In CIL, the model tends to catastrophically forget the characteristics of former classes, and its performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- A Memory Transformer Network for Incremental Learning [64.0410375349852]
We study class-incremental learning, a training setup in which new classes of data are observed over time for the model to learn from.
Despite the straightforward problem formulation, the naive application of classification models to class-incremental learning results in the "catastrophic forgetting" of previously seen classes.
One of the most successful existing methods has been the use of a memory of exemplars, which overcomes the issue of catastrophic forgetting by saving a subset of past data into a memory bank and utilizing it to prevent forgetting when training future tasks.
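As a concrete reference point for the exemplar-memory idea (a generic rehearsal sketch, not the Memory Transformer Network itself), the buffer below keeps a fixed budget of past samples and replays them alongside new-task batches; random per-class selection stands in for stronger strategies such as herding.

```python
import random

class ExemplarMemory:
    """Fixed-budget memory of past (sample, label) pairs used for rehearsal."""
    def __init__(self, budget: int = 2000):
        self.budget = budget
        self.store = []                      # list of (sample, label) tuples

    def add_task(self, samples, labels, per_class: int = 20) -> None:
        """After a task, store a few exemplars per new class; if the budget
        overflows, the oldest entries are dropped (real methods usually shrink
        the per-class allowance instead)."""
        by_class = {}
        for x, y in zip(samples, labels):
            by_class.setdefault(y, []).append(x)
        for y, xs in by_class.items():
            self.store += [(x, y) for x in random.sample(xs, min(per_class, len(xs)))]
        self.store = self.store[-self.budget:]

    def replay_batch(self, k: int = 32):
        """Old exemplars to interleave with the current task's mini-batches."""
        return random.sample(self.store, min(k, len(self.store)))
```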
arXiv Detail & Related papers (2022-10-10T08:27:28Z)
- Memory Population in Continual Learning via Outlier Elimination [25.511380924335207]
Catastrophic forgetting, the phenomenon of forgetting previously learned tasks when learning a new one, is a major hurdle in developing continual learning algorithms.
A popular method to alleviate forgetting is to use a memory buffer, which stores a subset of previously learned task examples for use during training on new tasks.
This paper introduces Memory Outlier Elimination (MOE), a method for identifying and eliminating outliers in the memory buffer by choosing samples from label-homogeneous subpopulations.
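A rough sketch of that buffer-cleaning step under simplifying assumptions (distance to the class centroid serves as the outlier score here, which is not necessarily MOE's exact criterion):

```python
import numpy as np

def eliminate_outliers(feats: np.ndarray, labels: np.ndarray,
                       keep_ratio: float = 0.8) -> np.ndarray:
    """For each class in the memory buffer, keep the fraction of samples
    closest to the class centroid and drop the rest as likely outliers."""
    keep = []
    for c in np.unique(labels):
        idx = np.flatnonzero(labels == c)
        dist = np.linalg.norm(feats[idx] - feats[idx].mean(axis=0), axis=1)
        n_keep = max(1, int(keep_ratio * len(idx)))
        keep.extend(idx[np.argsort(dist)[:n_keep]])
    return np.sort(np.asarray(keep))        # indices of buffer entries to retain

# Usage: prune the buffer once per task, then fill the freed slots with
# representative samples from the new task.
rng = np.random.default_rng(1)
kept = eliminate_outliers(rng.standard_normal((60, 16)), np.repeat(np.arange(3), 20))
```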
arXiv Detail & Related papers (2022-07-04T00:09:33Z)
- A Model or 603 Exemplars: Towards Memory-Efficient Class-Incremental Learning [56.450090618578]
Class-Incremental Learning (CIL) aims to train a model that incorporates new classes under a limited memory budget.
We show that when counting the model size into the total budget and comparing methods with aligned memory size, saving models does not consistently work.
We propose a simple yet effective baseline, denoted as MEMO for Memory-efficient Expandable MOdel.
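The aligned-budget comparison is easy to make concrete with a back-of-the-envelope accounting sketch (sizes are illustrative; float32 weights and CIFAR-scale images assumed):

```python
def total_budget_mb(n_params: int, n_exemplars: int,
                    bytes_per_param: int = 4,
                    bytes_per_exemplar: int = 3 * 32 * 32) -> float:
    """Charge both model weights and stored exemplars to one shared budget,
    which is the comparison the summary above argues for."""
    return (n_params * bytes_per_param + n_exemplars * bytes_per_exemplar) / 2**20

# An 11M-parameter backbone costs roughly as much memory as ~14,000 CIFAR
# exemplars, so saving an extra model copy competes directly with saving data.
print(round(total_budget_mb(11_000_000, 0), 1))   # ~42.0 MB, model only
print(round(total_budget_mb(0, 14_000), 1))       # ~41.0 MB, exemplars only
```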
arXiv Detail & Related papers (2022-05-26T08:24:01Z)
- Few-Shot Class-Incremental Learning by Sampling Multi-Phase Tasks [59.12108527904171]
A model should recognize new classes and maintain discriminability over old classes.
The task of recognizing few-shot new classes without forgetting old classes is called few-shot class-incremental learning (FSCIL).
We propose a new paradigm for FSCIL based on meta-learning by LearnIng Multi-phase Incremental Tasks (LIMIT).
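To make the multi-phase-task idea tangible, here is a minimal, hedged sketch of how fake incremental sessions can be sampled from base-class data for meta-training; the split sizes and episode structure are assumptions, not LIMIT's exact protocol.

```python
import random

def sample_fake_incremental_task(base_data: dict, n_old: int = 5,
                                 n_new: int = 3, shots: int = 5):
    """Pretend some base classes are already-learned 'old' classes and hold out
    a few others as few-shot 'new' classes, mimicking a future FSCIL session."""
    classes = random.sample(sorted(base_data), n_old + n_new)
    old_cls, new_cls = classes[:n_old], classes[n_old:]
    support = {c: random.sample(base_data[c], shots) for c in new_cls}   # few-shot data
    query = {c: list(base_data[c]) for c in old_cls + new_cls}           # joint evaluation
    return old_cls, new_cls, support, query

# Usage: during meta-training, repeatedly sample such episodes and optimize the
# model so it stays discriminative on old_cls after adapting to support.
```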
arXiv Detail & Related papers (2022-03-31T13:46:41Z)
- Constrained Few-shot Class-incremental Learning [14.646083882851928]
Continually learning new classes from fresh data without forgetting previous knowledge of old classes is a very challenging research problem.
We propose C-FSCIL, which is architecturally composed of a frozen meta-learned feature extractor, a trainable fixed-size fully connected layer, and a rewritable dynamically growing memory.
C-FSCIL provides three update modes that offer a trade-off between accuracy and compute-memory cost of learning novel classes.
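A simplified sketch of the described architecture under stated assumptions (cosine scoring against class prototypes stands in for the paper's memory and its three update modes):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class CFSCILSketch(nn.Module):
    """Frozen feature extractor + fixed-size trainable projection + a prototype
    memory that grows by one vector per novel class."""
    def __init__(self, extractor: nn.Module, feat_dim: int, proj_dim: int = 512):
        super().__init__()
        self.extractor = extractor.eval()
        for p in self.extractor.parameters():
            p.requires_grad_(False)                    # meta-learned, then frozen
        self.fc = nn.Linear(feat_dim, proj_dim)        # fixed-size, trainable
        self.register_buffer("memory", torch.empty(0, proj_dim))

    @torch.no_grad()
    def add_class(self, support: torch.Tensor) -> None:
        """Write the new class prototype (mean projected support feature) to memory."""
        proto = F.normalize(self.fc(self.extractor(support)).mean(0, keepdim=True), dim=-1)
        self.memory = torch.cat([self.memory, proto], dim=0)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q = F.normalize(self.fc(self.extractor(x)), dim=-1)
        return q @ self.memory.t()                     # cosine score per seen class
```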
arXiv Detail & Related papers (2022-03-30T18:19:36Z)
- ZS-IL: Looking Back on Learned Experiences For Zero-Shot Incremental Learning [9.530976792843495]
We propose an on-call transfer set to provide past experiences whenever a new class arises in the data stream.
ZS-IL demonstrates significantly better performance on the well-known datasets (CIFAR-10, Tiny-ImageNet) in both Task-IL and Class-IL settings.
arXiv Detail & Related papers (2021-03-22T22:43:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.