Joint Input and Output Coordination for Class-Incremental Learning
- URL: http://arxiv.org/abs/2409.05620v1
- Date: Mon, 9 Sep 2024 13:55:07 GMT
- Title: Joint Input and Output Coordination for Class-Incremental Learning
- Authors: Shuai Wang, Yibing Zhan, Yong Luo, Han Hu, Wei Yu, Yonggang Wen, Dacheng Tao
- Abstract summary: We propose a joint input and output coordination (JIOC) mechanism to address these issues.
This mechanism assigns different weights to different categories of data according to the gradient of the output score.
It can be incorporated into different incremental learning approaches that use memory storage.
- Score: 84.36763449830812
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Incremental learning is nontrivial due to severe catastrophic forgetting. Although storing a small amount of data from old tasks during incremental learning is a feasible solution, current strategies still fail to 1) adequately address the class bias problem, 2) alleviate the mutual interference between new and old tasks, and 3) account for class bias within tasks. This motivates us to propose a joint input and output coordination (JIOC) mechanism to address these issues. This mechanism assigns different weights to different categories of data according to the gradient of the output score, and uses knowledge distillation (KD) to reduce the mutual interference between the outputs of old and new tasks. The proposed mechanism is general and flexible, and can be incorporated into different incremental learning approaches that use memory storage. Extensive experiments show that our mechanism can significantly improve their performance.
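The abstract's two ingredients, category-dependent weights derived from the gradient of the output score and KD between old and new task outputs, can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch-style example, not the paper's exact formulation: the function name `jioc_style_loss`, the inverse-gradient-magnitude weighting rule, and the `temperature` / `kd_weight` hyperparameters are assumptions made purely for illustration.

```python
import torch
import torch.nn.functional as F

def jioc_style_loss(logits, targets, old_logits=None, old_class_count=0,
                    temperature=2.0, kd_weight=1.0):
    """Hypothetical sketch: per-class loss weights driven by output-score
    gradients, plus knowledge distillation against the previous model."""
    # For cross-entropy, the gradient w.r.t. the logits is
    # softmax(logits) - one_hot(target); its norm measures how strongly each
    # sample pushes the output scores.
    probs = torch.softmax(logits, dim=1)
    one_hot = F.one_hot(targets, num_classes=logits.size(1)).float()
    grad_norm = (probs - one_hot).norm(dim=1).detach()

    # Aggregate the gradient magnitude per class present in the batch and turn
    # it into a per-class weight (illustrative rule, not the paper's exact one).
    class_weight = torch.ones(logits.size(1), device=logits.device)
    for c in targets.unique():
        mask = targets == c
        class_weight[c] = 1.0 / (grad_norm[mask].mean() + 1e-8)
    sample_weight = class_weight[targets]
    sample_weight = sample_weight / sample_weight.mean()

    ce = F.cross_entropy(logits, targets, reduction="none")
    loss = (sample_weight * ce).mean()

    # Distill the old-task output block from the frozen previous model to
    # reduce interference between old and new task outputs.
    if old_logits is not None and old_class_count > 0:
        kd = F.kl_div(
            F.log_softmax(logits[:, :old_class_count] / temperature, dim=1),
            F.softmax(old_logits[:, :old_class_count] / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2
        loss = loss + kd_weight * kd
    return loss
```

Because such a loss term only reweights and distills model outputs, it can sit on top of any rehearsal-based method that already stores exemplars, which matches the abstract's claim that the mechanism can be incorporated into different memory-based incremental learning approaches.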
Related papers
- Navigating Semantic Drift in Task-Agnostic Class-Incremental Learning [51.177789437682954]
Class-incremental learning (CIL) seeks to enable a model to sequentially learn new classes while retaining knowledge of previously learned ones.
Balancing flexibility and stability remains a significant challenge, particularly when the task ID is unknown.
We propose a novel semantic drift calibration method that incorporates mean shift compensation and covariance calibration.
arXiv Detail & Related papers (2025-02-11T13:57:30Z)
- CSTA: Spatial-Temporal Causal Adaptive Learning for Exemplar-Free Video Class-Incremental Learning [62.69917996026769]
A class-incremental learning task requires learning and preserving both spatial appearance and temporal action involvement.
We propose a framework that equips separate adapters to learn new class patterns, accommodating the incremental information requirements unique to each class.
A causal compensation mechanism is proposed to reduce conflicts between different types of information during increment and memorization.
arXiv Detail & Related papers (2025-01-13T11:34:55Z)
- Make Domain Shift a Catastrophic Forgetting Alleviator in Class-Incremental Learning [9.712093262192733]
We propose a simple yet effective method named DisCo to deal with class-incremental learning tasks.
DisCo can be easily integrated into existing state-of-the-art class-incremental learning methods.
Experimental results show that incorporating our method into various CIL methods achieves substantial performance improvements.
arXiv Detail & Related papers (2024-12-31T03:02:20Z)
- Neural Collapse Terminus: A Unified Solution for Class Incremental Learning and Its Variants [166.916517335816]
In this paper, we offer a unified solution to the misalignment dilemma in the three tasks.
We propose a neural collapse terminus, a fixed structure with maximal equiangular inter-class separation over the whole label space.
Our method holds the neural collapse optimality in an incremental fashion regardless of data imbalance or data scarcity.
arXiv Detail & Related papers (2023-08-03T13:09:59Z)
- Complementary Learning Subnetworks for Parameter-Efficient Class-Incremental Learning [40.13416912075668]
We propose a rehearsal-free CIL approach that learns continually via the synergy between two Complementary Learning Subnetworks.
Our method achieves competitive results against state-of-the-art methods, especially in accuracy gain, memory cost, training efficiency, and task-order robustness.
arXiv Detail & Related papers (2023-06-21T01:43:25Z)
- Dense Network Expansion for Class Incremental Learning [61.00081795200547]
State-of-the-art approaches use a dynamic architecture based on network expansion (NE), in which a task expert is added per task.
A new NE method, dense network expansion (DNE), is proposed to achieve a better trade-off between accuracy and model complexity.
It outperforms the previous SOTA methods by a margin of 4% in terms of accuracy, with similar or even smaller model scale.
arXiv Detail & Related papers (2023-03-22T16:42:26Z)
- Resolving Task Confusion in Dynamic Expansion Architectures for Class Incremental Learning [27.872317837451977]
Task Correlated Incremental Learning (TCIL) is proposed to encourage discriminative and fair feature utilization across tasks.
TCIL performs a multi-level knowledge distillation to propagate knowledge learned from old tasks to the new one.
The results demonstrate that TCIL consistently achieves state-of-the-art accuracy.
arXiv Detail & Related papers (2022-12-29T12:26:44Z)
- Complementary Calibration: Boosting General Continual Learning with Collaborative Distillation and Self-Supervision [47.374412281270594]
General Continual Learning (GCL) aims at learning from non-independent and identically distributed (non-i.i.d.) stream data.
We reveal that relation and feature deviations are crucial causes of catastrophic forgetting.
We propose a Complementary Calibration (CoCa) framework by mining the complementary model's outputs and features.
arXiv Detail & Related papers (2021-09-03T06:35:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.