Knowledge Restore and Transfer for Multi-label Class-Incremental Learning
- URL: http://arxiv.org/abs/2302.13334v3
- Date: Mon, 14 Aug 2023 14:35:02 GMT
- Title: Knowledge Restore and Transfer for Multi-label Class-Incremental Learning
- Authors: Songlin Dong, Haoyu Luo, Yuhang He, Xing Wei, Yihong Gong
- Abstract summary: We propose a knowledge restore and transfer (KRT) framework for multi-label class-incremental learning (MLCIL).
KRT includes a dynamic pseudo-label (DPL) module to restore old class knowledge and an incremental cross-attention (ICA) module to save session-specific knowledge and sufficiently transfer old class knowledge to the new model.
Experimental results on MS-COCO and PASCAL VOC datasets demonstrate the effectiveness of our method for improving recognition performance and mitigating forgetting.
- Score: 34.378828633726854
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current class-incremental learning research mainly focuses on single-label
classification tasks, while multi-label class-incremental learning (MLCIL),
which has more practical application scenarios, is rarely studied. Although there have
been many anti-forgetting methods to solve the problem of catastrophic
forgetting in class-incremental learning, these methods have difficulty in
solving the MLCIL problem due to label absence and information dilution. In
this paper, we propose a knowledge restore and transfer (KRT) framework for
MLCIL, which includes a dynamic pseudo-label (DPL) module to restore the old
class knowledge and an incremental cross-attention (ICA) module to save
session-specific knowledge and sufficiently transfer old class knowledge to
the new model. In addition, we propose a token loss to jointly optimize the
incremental cross-attention module. Experimental results on MS-COCO and PASCAL
VOC datasets demonstrate the effectiveness of our method for improving
recognition performance and mitigating forgetting on multi-label
class-incremental learning tasks.
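In MLCIL, images arriving in a new session are typically annotated only for the new classes, so old classes are silently treated as negatives; this is the label-absence problem the abstract refers to. Below is a minimal sketch of the pseudo-labeling idea behind a module like DPL, in PyTorch, assuming a multi-label classifier with sigmoid outputs. The fixed confidence threshold and the old-classes-first column ordering are illustrative assumptions, not the paper's dynamic thresholding scheme.

```python
import torch

@torch.no_grad()
def restore_old_class_labels(old_model, images, new_labels,
                             num_old_classes, threshold=0.7):
    """Merge pseudo-labels for old classes, produced by the frozen
    previous-session model, into the current session's multi-hot targets.

    new_labels: (B, C) multi-hot tensor annotated only for new classes.
    """
    old_model.eval()
    # Sigmoid scores of the previous-session model on the old classes.
    old_scores = torch.sigmoid(old_model(images))[:, :num_old_classes]
    pseudo = (old_scores > threshold).float()  # assumed fixed threshold
    # Keep the current-session annotations and restore the missing
    # old-class positives from the pseudo-labels.
    targets = new_labels.clone()
    targets[:, :num_old_classes] = torch.maximum(
        targets[:, :num_old_classes], pseudo)
    return targets
```

Training on the merged targets with the usual binary cross-entropy loss keeps a supervision signal flowing for the old classes; per the abstract, the ICA module and token loss then handle the knowledge-transfer half of the framework.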
Related papers
- CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning [52.63674911541416]
Few-shot class-incremental learning (FSCIL) faces several challenges, such as overfitting and forgetting.
Our primary focus is representation learning on base classes to tackle the unique challenge of FSCIL.
We find that encouraging features to spread within a more confined feature space enables the learned representation to strike a better balance between transferability and discriminability.
arXiv Detail & Related papers (2024-10-08T02:23:16Z)
- Versatile Incremental Learning: Towards Class and Domain-Agnostic Incremental Learning [16.318126586825734]
Incremental Learning (IL) aims to accumulate knowledge from sequential input tasks.
We consider a more challenging and realistic but under-explored IL scenario, named Versatile Incremental Learning (VIL).
We propose a simple yet effective IL framework, named Incremental with Shift cONtrol (ICON).
arXiv Detail & Related papers (2024-09-17T07:44:28Z)
- Low-Rank Mixture-of-Experts for Continual Medical Image Segmentation [18.984447545932706]
"catastrophic forgetting" problem occurs when model forgets previously learned features when it is extended to new categories or tasks.
We propose a network by introducing the data-specific Mixture of Experts structure to handle the new tasks or categories.
We validate our method on both class-level and task-level continual learning challenges.
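As a rough illustration of that design, the sketch below combines a shared base projection with a router-weighted sum of low-rank experts, in PyTorch; the expert count, rank, and softmax routing are assumptions made for this sketch, not the paper's exact architecture.

```python
import torch.nn as nn
import torch.nn.functional as F

class LowRankMoE(nn.Module):
    """A base projection plus a router-weighted sum of low-rank experts.

    New tasks can add experts while earlier ones stay frozen, which is
    one way to limit catastrophic forgetting.
    """
    def __init__(self, dim, num_experts=4, rank=8):
        super().__init__()
        self.base = nn.Linear(dim, dim)            # shared across tasks
        self.router = nn.Linear(dim, num_experts)  # input-dependent gating
        self.down = nn.ModuleList(nn.Linear(dim, rank, bias=False)
                                  for _ in range(num_experts))
        self.up = nn.ModuleList(nn.Linear(rank, dim, bias=False)
                                for _ in range(num_experts))

    def forward(self, x):                            # x: (B, dim)
        weights = F.softmax(self.router(x), dim=-1)  # (B, num_experts)
        out = self.base(x)
        for e, (down, up) in enumerate(zip(self.down, self.up)):
            out = out + weights[:, e:e + 1] * up(down(x))
        return out
```

Each expert adds only 2 * dim * rank parameters, so extending the model for a new task is cheap compared with duplicating the full layer.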
arXiv Detail & Related papers (2024-06-19T14:19:50Z)
- Class-Incremental Few-Shot Event Detection [68.66116956283575]
This paper proposes a new task, called class-incremental few-shot event detection.
This task faces two problems, i.e., old knowledge forgetting and new class overfitting.
To solve these problems, this paper presents a novel knowledge distillation and prompt learning based method, called Prompt-KD.
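For context, the distillation half of such a method typically keeps the student's predictions on previously learned classes close to those of a frozen teacher (the old model); below is a generic sketch of that loss in PyTorch, where the temperature and the old-class slicing are assumptions for illustration rather than Prompt-KD's actual formulation.

```python
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits,
                      num_old_classes, tau=2.0):
    """KL divergence between temperature-softened teacher and student
    distributions over the previously learned classes."""
    s = F.log_softmax(student_logits[:, :num_old_classes] / tau, dim=-1)
    t = F.softmax(teacher_logits[:, :num_old_classes] / tau, dim=-1)
    # Scale by tau**2 so gradients stay comparable to the hard-label loss.
    return F.kl_div(s, t, reduction="batchmean") * tau * tau
```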
arXiv Detail & Related papers (2024-04-02T09:31:14Z)
- Mitigating Forgetting in Online Continual Learning via Contrasting Semantically Distinct Augmentations [22.289830907729705]
Online continual learning (OCL) aims to enable a model to learn from a non-stationary data stream, continuously acquiring new knowledge while retaining what has already been learnt.
The main challenge comes from the "catastrophic forgetting" issue: the inability to remember learnt knowledge well while learning new knowledge.
arXiv Detail & Related papers (2022-11-10T05:29:43Z)
- A Multi-label Continual Learning Framework to Scale Deep Learning Approaches for Packaging Equipment Monitoring [57.5099555438223]
We study multi-label classification in the continual scenario for the first time.
We propose an efficient approach that has a logarithmic complexity with regard to the number of tasks.
We validate our approach on a real-world multi-label forecasting problem from the packaging industry.
arXiv Detail & Related papers (2022-08-08T15:58:39Z)
- Active Refinement for Multi-Label Learning: A Pseudo-Label Approach [84.52793080276048]
Multi-label learning (MLL) aims to associate a given instance with its relevant labels from a set of concepts.
Previous work on MLL has mainly focused on the setting where the concept set is assumed to be fixed.
Many real-world applications require introducing new concepts into the set to meet new demands.
arXiv Detail & Related papers (2021-09-29T19:17:05Z)
- Incremental Embedding Learning via Zero-Shot Translation [65.94349068508863]
Current state-of-the-art incremental learning methods tackle the catastrophic forgetting problem in traditional classification networks.
We propose a novel class-incremental method for embedding networks, named the zero-shot translation class-incremental method (ZSTCI).
In addition, ZSTCI can easily be combined with existing regularization-based incremental learning methods to further improve the performance of embedding networks.
arXiv Detail & Related papers (2020-12-31T08:21:37Z)
- Self-Supervised Learning Aided Class-Incremental Lifelong Learning [17.151579393716958]
We study the issue of catastrophic forgetting in class-incremental learning (Class-IL).
In the training procedure of Class-IL, since the model has no knowledge of subsequent tasks, it extracts only the features necessary for the tasks learned so far, which are insufficient for joint classification.
We propose to combine self-supervised learning, which can provide effective representations without requiring labels, with Class-IL to partly get around this problem.
arXiv Detail & Related papers (2020-06-10T15:15:27Z)
- Incremental Object Detection via Meta-Learning [77.55310507917012]
We propose a meta-learning approach that learns to reshape model gradients, such that information across incremental tasks is optimally shared.
In comparison to existing meta-learning methods, our approach is task-agnostic, allows incremental addition of new classes, and scales to high-capacity models for object detection.
arXiv Detail & Related papers (2020-03-17T13:40:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.