Resolving Task Confusion in Dynamic Expansion Architectures for Class
Incremental Learning
- URL: http://arxiv.org/abs/2212.14284v1
- Date: Thu, 29 Dec 2022 12:26:44 GMT
- Title: Resolving Task Confusion in Dynamic Expansion Architectures for Class
Incremental Learning
- Authors: Bingchen Huang, Zhineng Chen, Peng Zhou, Jiayin Chen, Zuxuan Wu
- Abstract summary: Task Correlated Incremental Learning (TCIL) is proposed to encourage discriminative and fair feature utilization across tasks.
TCIL performs a multi-level knowledge distillation to propagate knowledge learned from old tasks to the new one.
The results demonstrate that TCIL consistently achieves state-of-the-art accuracy.
- Score: 27.872317837451977
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The dynamic expansion architecture is becoming popular in class incremental
learning, mainly due to its advantages in alleviating catastrophic forgetting.
However, task confusion is not well addressed within this framework: the
discrepancy between classes of different tasks is not well learned (i.e.,
inter-task confusion, ITC), and priority is still given to the latest
class batch (i.e., old-new confusion, ONC). We empirically validate the side
effects of the two types of confusion. Meanwhile, a novel solution called Task
Correlated Incremental Learning (TCIL) is proposed to encourage discriminative
and fair feature utilization across tasks. TCIL performs a multi-level
knowledge distillation to propagate knowledge learned from old tasks to the new
one. It establishes information flow paths at both the feature and logit levels,
so that learning of the new task remains aware of old classes. In addition, an
attention mechanism and classifier re-scoring are applied to generate fairer
classification scores. We conduct extensive experiments on the CIFAR100 and
ImageNet100 datasets. The results demonstrate that TCIL consistently achieves
state-of-the-art accuracy. It mitigates both ITC and ONC, and shows advantages
in combating catastrophic forgetting even when no rehearsal memory is reserved.
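To make the multi-level distillation idea concrete, below is a minimal sketch of what a feature-level plus logit-level distillation loss could look like in PyTorch. The function names, the choice of MSE for the feature term, temperature-scaled KL divergence for the logit term, and the weighting hyper-parameters are illustrative assumptions, not the authors' actual implementation.

```python
# Minimal sketch (not the authors' code): multi-level knowledge distillation
# combining a feature-level and a logit-level term, as described in the abstract.
import torch
import torch.nn.functional as F

def feature_distillation(old_feat: torch.Tensor, new_feat: torch.Tensor) -> torch.Tensor:
    """Feature-level KD: pull the new extractor's features toward the frozen old ones."""
    return F.mse_loss(new_feat, old_feat.detach())

def logit_distillation(old_logits: torch.Tensor, new_logits: torch.Tensor,
                       T: float = 2.0) -> torch.Tensor:
    """Logit-level KD on the old-class outputs, with temperature-softened targets."""
    n_old = old_logits.size(1)  # only the old-class slice of the new logits is matched
    log_p_new = F.log_softmax(new_logits[:, :n_old] / T, dim=1)
    p_old = F.softmax(old_logits / T, dim=1).detach()
    return F.kl_div(log_p_new, p_old, reduction="batchmean") * (T * T)

def multi_level_kd_loss(new_logits, labels, old_logits, old_feat, new_feat,
                        lambda_feat: float = 1.0, lambda_logit: float = 1.0):
    """Total loss: cross-entropy over all seen classes plus the two KD terms.
    lambda_feat and lambda_logit are assumed trade-off hyper-parameters."""
    ce = F.cross_entropy(new_logits, labels)
    return (ce
            + lambda_feat * feature_distillation(old_feat, new_feat)
            + lambda_logit * logit_distillation(old_logits, new_logits))
```

The attention mechanism and classifier re-scoring mentioned in the abstract would operate on top of these components to balance old- and new-class scores; they are not sketched here.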
Related papers
- Versatile Incremental Learning: Towards Class and Domain-Agnostic Incremental Learning [16.318126586825734]
Incremental Learning (IL) aims to accumulate knowledge from sequential input tasks.
We consider a more challenging and realistic but under-explored IL scenario, named Versatile Incremental Learning (VIL).
We propose a simple yet effective IL framework, named Incremental with Shift cONtrol (ICON).
arXiv Detail & Related papers (2024-09-17T07:44:28Z)
- Joint Input and Output Coordination for Class-Incremental Learning [84.36763449830812]
We propose a joint input and output coordination (JIOC) mechanism to address these issues.
This mechanism assigns different weights to different categories of data according to the gradient of the output score.
It can be incorporated into different incremental learning approaches that use memory storage.
arXiv Detail & Related papers (2024-09-09T13:55:07Z)
- Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by one-hot labels hampers effective knowledge transfer across tasks.
Specifically, we use pre-trained language models (PLMs) to generate semantic targets for each class, which are frozen and serve as supervision signals.
arXiv Detail & Related papers (2024-03-24T12:41:58Z)
- MCF-VC: Mitigate Catastrophic Forgetting in Class-Incremental Learning for Multimodal Video Captioning [10.95493493610559]
We propose a method to Mitigate Catastrophic Forgetting in class-incremental learning for multimodal Video Captioning (MCF-VC).
To better constrain the knowledge characteristics of old and new tasks at the specific feature level, we design a Two-stage Knowledge Distillation (TsKD) scheme.
Our experiments on the public dataset MSR-VTT show that the proposed method significantly resists the forgetting of previous tasks without replaying old samples, and performs well on the new task.
arXiv Detail & Related papers (2024-02-27T16:54:08Z)
- Taxonomic Class Incremental Learning [57.08545061888821]
We propose the Taxonomic Class Incremental Learning problem.
We unify existing approaches to CIL and taxonomic learning as parameter inheritance schemes.
Experiments on CIFAR-100 and ImageNet-100 show the effectiveness of the proposed TCIL method.
arXiv Detail & Related papers (2023-04-12T00:43:30Z)
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
In CIL, the learner tends to catastrophically forget the characteristics of former classes, and its performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Class-incremental Learning with Rectified Feature-Graph Preservation [24.098892115785066]
A central theme of this paper is to learn new classes that arrive in sequential phases over time.
We propose a weighted-Euclidean regularization for old knowledge preservation.
We show how it can work with binary cross-entropy to increase class separation for effective learning of new classes.
arXiv Detail & Related papers (2020-12-15T07:26:04Z)
- iTAML: An Incremental Task-Agnostic Meta-learning Approach [123.10294801296926]
Humans can continuously learn new knowledge as their experience grows.
In contrast, knowledge previously learned by a deep neural network can quickly fade when it is trained on a new task.
We introduce a novel meta-learning approach that seeks to maintain an equilibrium between all encountered tasks.
arXiv Detail & Related papers (2020-03-25T21:42:48Z)