Taxonomic Class Incremental Learning
- URL: http://arxiv.org/abs/2304.05547v1
- Date: Wed, 12 Apr 2023 00:43:30 GMT
- Title: Taxonomic Class Incremental Learning
- Authors: Yuzhao Chen, Zonghuan Li, Zhiyuan Hu, Nuno Vasconcelos
- Abstract summary: We propose the Taxonomic Class Incremental Learning problem.
We unify existing approaches to CIL and taxonomic learning as parameter inheritance schemes.
Experiments on CIFAR-100 and ImageNet-100 show the effectiveness of the proposed TCIL method.
- Score: 57.08545061888821
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The problem of continual learning has attracted increasing
attention in recent years. However, few works have questioned the commonly
used learning setup, based on a task curriculum of random classes. This
differs significantly from
human continual learning, which is guided by taxonomic curricula. In this work,
we propose the Taxonomic Class Incremental Learning (TCIL) problem. In TCIL,
the task sequence is organized based on a taxonomic class tree. We unify
existing approaches to CIL and taxonomic learning as parameter inheritance
schemes and introduce a new such scheme for TCIL. This enables the
incremental transfer of knowledge from ancestor to descendant classes of a
class
taxonomy through parameter inheritance. Experiments on CIFAR-100 and
ImageNet-100 show the effectiveness of the proposed TCIL method, which
outperforms existing SOTA methods by 2% in terms of final accuracy on CIFAR-100
and 3% on ImageNet-100.
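As a rough illustration of the parameter-inheritance idea in the abstract, the sketch below initializes each new descendant class's weights from its ancestor's. This is a minimal reading, not the authors' exact scheme: the linear per-class heads and the copy-then-finetune policy are assumptions.

```python
# Minimal sketch of ancestor-to-descendant parameter inheritance.
# Assumption: one linear head per class; a descendant starts from a
# copy of its ancestor's weights and is then fine-tuned.
import torch
import torch.nn as nn


class TaxonomicClassifier(nn.Module):
    """Linear per-class heads whose weights can be inherited along a taxonomy."""

    def __init__(self, feat_dim: int):
        super().__init__()
        self.feat_dim = feat_dim
        self.heads = nn.ParameterDict()  # class name -> weight vector

    def add_class(self, name: str, parent: str = None):
        if parent is not None and parent in self.heads:
            # Inherit: the descendant starts from its ancestor's weights.
            init = self.heads[parent].detach().clone()
        else:
            init = torch.randn(self.feat_dim) * 0.01
        self.heads[name] = nn.Parameter(init)

    def forward(self, feats):
        w = torch.stack([self.heads[k] for k in self.heads])  # (C, D)
        return feats @ w.t()                                  # (N, C)


clf = TaxonomicClassifier(feat_dim=512)
clf.add_class("animal")                 # coarse ancestor class, learned first
clf.add_class("dog", parent="animal")   # descendant inherits ancestor weights
logits = clf(torch.randn(4, 512))       # (4, 2) scores for [animal, dog]
```

Here inheritance is plain weight copying at class-registration time; the paper's scheme may share or transfer parameters in a more structured way.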
Related papers
- Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by one-hot labels hampers effective knowledge transfer across tasks.
Specifically, we use PLMs to generate semantic targets for each class, which are frozen and serve as supervision signals.
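A minimal sketch of this supervision scheme as summarized: a pretrained language model encodes each class name once, the embeddings are frozen, and image features are trained to align with them. The text_encoder callable and the cosine-alignment loss are assumptions for illustration.

```python
# Minimal sketch of language-guided supervision with frozen semantic targets.
# Assumptions: text_encoder maps a list of class names to (C, D) embeddings;
# alignment is measured by cosine similarity.
import torch
import torch.nn.functional as F


def make_semantic_targets(class_names, text_encoder):
    """Encode class names once; the resulting targets are frozen (no gradients)."""
    with torch.no_grad():
        targets = text_encoder(class_names)      # (C, D) text embeddings
    return F.normalize(targets, dim=-1)


def language_guided_loss(image_feats, labels, targets):
    """Pull each image feature toward its class's frozen text embedding."""
    feats = F.normalize(image_feats, dim=-1)
    return (1.0 - (feats * targets[labels]).sum(-1)).mean()
```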
arXiv Detail & Related papers (2024-03-24T12:41:58Z)
- Resolving Task Confusion in Dynamic Expansion Architectures for Class Incremental Learning [27.872317837451977]
Task Correlated Incremental Learning (TCIL) is proposed to encourage discriminative and fair feature utilization across tasks.
TCIL performs a multi-level knowledge distillation to propagate knowledge learned from old tasks to the new one.
The results demonstrate that TCIL consistently achieves state-of-the-art accuracy.
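A minimal sketch of a multi-level distillation loss in the spirit of this summary, matching a frozen old model at both the feature and logit levels; the MSE feature term, the temperature, and the loss balance are illustrative assumptions.

```python
# Minimal sketch of multi-level knowledge distillation from a frozen old model.
# Assumption: the new model's first k logits correspond to the old classes.
import torch
import torch.nn.functional as F


def multilevel_kd_loss(new_feats, old_feats, new_logits, old_logits,
                       T: float = 2.0, alpha: float = 0.5):
    # Feature-level distillation: match intermediate representations.
    feat_loss = F.mse_loss(new_feats, old_feats.detach())
    # Logit-level distillation on the old classes only.
    k = old_logits.size(1)
    logit_loss = F.kl_div(
        F.log_softmax(new_logits[:, :k] / T, dim=1),
        F.softmax(old_logits.detach() / T, dim=1),
        reduction="batchmean",
    ) * (T * T)
    return alpha * feat_loss + (1 - alpha) * logit_loss
```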
arXiv Detail & Related papers (2022-12-29T12:26:44Z)
- Rectification-based Knowledge Retention for Continual Learning [49.1447478254131]
Deep learning models suffer from catastrophic forgetting when trained in an incremental learning setting.
We propose a novel approach to task incremental learning, where a model is trained on new tasks that arrive sequentially.
Our approach can be used in both the zero-shot and non-zero-shot task incremental learning settings.
arXiv Detail & Related papers (2021-03-30T18:11:30Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve the performance of embedding networks.
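A minimal sketch of the translation idea attributed to ZSTCI above: a small module maps embeddings from the old feature space into the current one, so stored old-class prototypes remain usable without exemplars. The linear translator and its training signal are assumptions for illustration.

```python
# Minimal sketch of an old-to-new embedding-space translator.
# Assumption: a single linear map is enough to bridge the two spaces.
import torch
import torch.nn as nn


class EmbeddingTranslator(nn.Module):
    """Maps old-space embeddings into the current embedding space."""

    def __init__(self, dim: int):
        super().__init__()
        self.map = nn.Linear(dim, dim)

    def forward(self, old_emb: torch.Tensor) -> torch.Tensor:
        return self.map(old_emb)

# Trained by aligning the two encoders on current-task data, e.g.:
#   loss = F.mse_loss(translator(old_encoder(x)), new_encoder(x))
```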
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Class-incremental Learning with Rectified Feature-Graph Preservation [24.098892115785066]
A central theme of this paper is to learn new classes that arrive in sequential phases over time.
We propose a weighted-Euclidean regularization for old knowledge preservation.
We show how it can work with binary cross-entropy to increase class separation for effective learning of new classes.
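A minimal sketch pairing a weighted-Euclidean feature regularizer with binary cross-entropy classification, following the two sentences above; the per-dimension importance weights and the balance term lam are assumptions.

```python
# Minimal sketch: weighted-Euclidean preservation of old features plus BCE.
# Assumptions: importance is a per-dimension weight vector; targets_onehot
# is a float one-hot (or multi-hot) label matrix.
import torch
import torch.nn.functional as F


def weighted_euclidean_reg(new_feats, old_feats, importance):
    """Penalize drift from the old model's features, weighted per dimension."""
    return (importance * (new_feats - old_feats.detach()) ** 2).sum(-1).mean()


def total_loss(logits, targets_onehot, new_feats, old_feats, importance,
               lam: float = 1.0):
    bce = F.binary_cross_entropy_with_logits(logits, targets_onehot)
    return bce + lam * weighted_euclidean_reg(new_feats, old_feats, importance)
```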
arXiv Detail & Related papers (2020-12-15T07:26:04Z)
- Few-Shot Class-Incremental Learning [68.75462849428196]
We focus on a challenging but practical few-shot class-incremental learning (FSCIL) problem.
FSCIL requires CNN models to incrementally learn new classes from very few labelled samples, without forgetting the previously learned ones.
We represent the knowledge using a neural gas (NG) network, which can learn and preserve the topology of the feature manifold formed by different classes.
arXiv Detail & Related papers (2020-04-23T03:38:33Z)
- iTAML: An Incremental Task-Agnostic Meta-learning Approach [123.10294801296926]
Humans can continuously learn new knowledge as their experience grows.
Previously learned knowledge in deep neural networks can quickly fade when they are trained on a new task.
We introduce a novel meta-learning approach that seeks to maintain an equilibrium between all encountered tasks.
arXiv Detail & Related papers (2020-03-25T21:42:48Z)
- Cognitively-Inspired Model for Incremental Learning Using a Few Examples [11.193504036335503]
Incremental learning attempts to develop a classifier which learns continuously from a stream of data segregated into different classes.
Deep learning approaches suffer from catastrophic forgetting when learning classes incrementally, while most incremental learning approaches require a large amount of training data per class.
We propose a novel approach inspired by the concept learning model of the hippocampus and the neocortex, which represents each image class with centroids.
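A minimal sketch of the centroid idea: each class is summarized by the running mean of its feature vectors, and prediction is nearest-centroid. The single-centroid-per-class simplification and the running-mean update are assumptions; the paper may keep several centroids per class.

```python
# Minimal sketch of an incrementally updated nearest-centroid classifier.
# Assumption: one centroid per class, maintained as a running mean.
import torch


class CentroidClassifier:
    def __init__(self):
        self.centroids = {}  # class id -> (mean vector, sample count)

    def update(self, feats: torch.Tensor, label: int):
        """Fold a batch of features for one class into its running centroid."""
        mean, n = self.centroids.get(label, (torch.zeros(feats.size(1)), 0))
        m = feats.size(0)
        self.centroids[label] = ((mean * n + feats.sum(0)) / (n + m), n + m)

    def predict(self, feats: torch.Tensor) -> torch.Tensor:
        labels = list(self.centroids)
        C = torch.stack([self.centroids[c][0] for c in labels])  # (K, D)
        dists = torch.cdist(feats, C)                            # (N, K)
        return torch.tensor(labels)[dists.argmin(1)]
```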
arXiv Detail & Related papers (2020-02-27T19:52:42Z)
This list is automatically generated from the titles and abstracts of the papers on this site.