Decomposed Knowledge Distillation for Class-Incremental Semantic Segmentation
- URL: http://arxiv.org/abs/2210.05941v1
- Date: Wed, 12 Oct 2022 06:15:51 GMT
- Title: Decomposed Knowledge Distillation for Class-Incremental Semantic Segmentation
- Authors: Donghyeon Baek, Youngmin Oh, Sanghoon Lee, Junghyup Lee, Bumsub Ham
- Abstract summary: Class-incremental semantic segmentation (CISS) labels each pixel of an image with a corresponding object/stuff class continually.
It is crucial to learn novel classes incrementally without forgetting previously learned knowledge.
We introduce a CISS framework that alleviates the forgetting problem and facilitates learning novel classes effectively.
- Score: 34.460973847554364
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Class-incremental semantic segmentation (CISS) labels each pixel of an image
with a corresponding object/stuff class continually. To this end, it is crucial
to learn novel classes incrementally without forgetting previously learned
knowledge. Current CISS methods typically use a knowledge distillation (KD)
technique for preserving classifier logits, or freeze a feature extractor, to
avoid the forgetting problem. The strong constraints, however, prevent learning
discriminative features for novel classes. We introduce a CISS framework that
alleviates the forgetting problem and facilitates learning novel classes
effectively. We have found that a logit can be decomposed into two terms: one quantifying
how likely an input belongs to a particular class and the other how likely it does not,
providing a clue to the model's reasoning process. The KD technique, in this context,
preserves only the sum of the two terms (i.e., a class logit), so each term can still drift
individually and the KD does not imitate the reasoning process. To
impose constraints on each term explicitly, we propose a new decomposed
knowledge distillation (DKD) technique, improving the rigidity of a model and
addressing the forgetting problem more effectively. We also introduce a novel
initialization method to train new classifiers for novel classes. In CISS, the
number of negative training samples for novel classes is not sufficient to
discriminate them from old classes. To mitigate this, we propose to transfer knowledge of
negatives to the classifiers successively using an auxiliary classifier,
boosting the performance significantly. Experimental results on standard CISS
benchmarks demonstrate the effectiveness of our framework.
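To make the decomposition concrete, below is a minimal PyTorch sketch, assuming the abstract's two terms are the positive and negative parts of the element-wise weight-feature products of a bias-free 1x1 classifier; the function names, the sign-based split, and the MSE matching are illustrative assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def decomposed_logits(features, weight):
    # features: (B, D, H, W) feature map; weight: (C, D) bias-free 1x1 classifier weights.
    # Split the per-channel weight-feature products by sign (an assumption for illustration):
    # "pos" gathers evidence that a pixel belongs to class c, "neg" that it does not.
    # The usual class logit is recovered as pos - neg.
    prod = torch.einsum('bdhw,cd->bcdhw', features, weight)
    pos = prod.clamp(min=0).sum(dim=2)      # (B, C, H, W)
    neg = (-prod).clamp(min=0).sum(dim=2)   # (B, C, H, W)
    return pos, neg

def dkd_loss(feat_new, w_new, feat_old, w_old, num_old_classes):
    # Vanilla KD matches only pos - neg (the class logit), so the two terms may drift as long
    # as their difference is preserved; DKD constrains each term of the old classes separately.
    pos_new, neg_new = decomposed_logits(feat_new, w_new[:num_old_classes])
    with torch.no_grad():
        pos_old, neg_old = decomposed_logits(feat_old, w_old[:num_old_classes])
    return F.mse_loss(pos_new, pos_old) + F.mse_loss(neg_new, neg_old)
```

The point of the sketch is the extra rigidity: matching only the difference leaves each term free to change, whereas matching both terms preserves more of the old model's reasoning.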
Related papers
- Happy: A Debiased Learning Framework for Continual Generalized Category Discovery [54.54153155039062]
This paper explores the underexplored task of Continual Generalized Category Discovery (C-GCD)
C-GCD aims to incrementally discover new classes from unlabeled data while maintaining the ability to recognize previously learned classes.
We introduce a debiased learning framework, namely Happy, characterized by Hardness-aware prototype sampling and soft entropy regularization.
arXiv Detail & Related papers (2024-10-09T04:18:51Z)
- PASS++: A Dual Bias Reduction Framework for Non-Exemplar Class-Incremental Learning [49.240408681098906]
Class-incremental learning (CIL) aims to recognize new classes incrementally while maintaining the discriminability of old classes.
Most existing CIL methods are exemplar-based, i.e., storing a part of old data for retraining.
We present a simple and novel dual bias reduction framework that employs self-supervised transformation (SST) in input space and prototype augmentation (protoAug) in deep feature space (a minimal protoAug sketch follows this list).
arXiv Detail & Related papers (2024-07-19T05:03:16Z)
- Towards Non-Exemplar Semi-Supervised Class-Incremental Learning [33.560003528712414]
Class-incremental learning aims to gradually recognize new classes while maintaining the discriminability of old ones.
We propose a non-exemplar semi-supervised CIL framework with contrastive learning and semi-supervised incremental prototype classifier (Semi-IPC)
Semi-IPC learns a prototype for each class with unsupervised regularization, enabling the model to incrementally learn from partially labeled new data.
arXiv Detail & Related papers (2024-03-27T06:28:19Z)
- Enhancing Visual Continual Learning with Language-Guided Supervision [76.38481740848434]
Continual learning aims to empower models to learn new tasks without forgetting previously acquired knowledge.
We argue that the scarce semantic information conveyed by one-hot labels hampers effective knowledge transfer across tasks.
Specifically, we use pre-trained language models (PLMs) to generate semantic targets for each class, which are frozen and serve as supervision signals.
arXiv Detail & Related papers (2024-03-24T12:41:58Z)
- A Hard-to-Beat Baseline for Training-free CLIP-based Adaptation [121.0693322732454]
Contrastive Language-Image Pretraining (CLIP) has gained popularity for its remarkable zero-shot capacity.
Recent research has focused on developing efficient fine-tuning methods to enhance CLIP's performance in downstream tasks.
We revisit a classical algorithm, Gaussian Discriminant Analysis (GDA), and apply it to the downstream classification of CLIP (a minimal GDA sketch follows this list).
arXiv Detail & Related papers (2024-02-06T15:45:27Z)
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445]
We find that existing methods tend to misclassify samples of new classes into base classes, which leads to poor performance on the new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
arXiv Detail & Related papers (2023-12-08T18:24:08Z)
- SSUL: Semantic Segmentation with Unknown Label for Exemplar-based Class-Incremental Learning [19.152041362805985]
We consider a class-incremental semantic segmentation (CISS) problem.
We propose a new method, dubbed SSUL-M (Semantic Segmentation with Unknown Label with Memory), by carefully combining several techniques tailored for semantic segmentation.
We show our method achieves significantly better performance than the recent state-of-the-art baselines on the standard benchmark datasets.
arXiv Detail & Related papers (2021-06-22T06:40:26Z)
- ClaRe: Practical Class Incremental Learning By Remembering Previous Class Representations [9.530976792843495]
Class Incremental Learning (CIL) aims to learn new concepts without sacrificing performance and accuracy on old data.
ClaRe is an efficient solution for CIL by remembering the representations of learned classes in each increment.
ClaRe has a better generalization than prior methods thanks to producing diverse instances from the distribution of previously learned classes.
arXiv Detail & Related papers (2021-03-29T10:39:42Z)
- Class-incremental Learning with Rectified Feature-Graph Preservation [24.098892115785066]
A central theme of this paper is to learn new classes that arrive in sequential phases over time.
We propose a weighted-Euclidean regularization for old knowledge preservation.
We show how it can work with binary cross-entropy to increase class separation for effective learning of new classes.
arXiv Detail & Related papers (2020-12-15T07:26:04Z)
- Learning Adaptive Embedding Considering Incremental Class [55.21855842960139]
Class-Incremental Learning (CIL) aims to train a reliable model on streaming data in which unknown classes emerge sequentially.
Different from traditional closed-set learning, CIL has two main challenges: 1) novel class detection, and
2) model update: after the novel classes are detected, the model needs to be updated without re-training on the entire previous data.
arXiv Detail & Related papers (2020-08-31T04:11:24Z)
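For the PASS++ entry above, the sketch below illustrates prototype augmentation (protoAug) as it is commonly used in non-exemplar CIL: old classes are rehearsed by sampling Gaussian noise around stored class-mean prototypes instead of replaying exemplars. The function name, the single scalar radius, and the cross-entropy replay loss are assumptions for illustration, not the paper's exact recipe.

```python
import torch
import torch.nn.functional as F

def protoaug_replay_loss(prototypes, radius, classifier, samples_per_class=4):
    # prototypes: (C_old, D) stored class-mean features of old classes.
    # radius: scalar noise scale (e.g. an average feature standard deviation); a single
    # scalar shared by all classes is an assumption made here for simplicity.
    c_old, dim = prototypes.shape
    noise = torch.randn(c_old, samples_per_class, dim, device=prototypes.device)
    pseudo = prototypes.unsqueeze(1) + radius * noise                 # (C_old, S, D)
    labels = torch.arange(c_old, device=prototypes.device)
    labels = labels.repeat_interleave(samples_per_class)              # (C_old * S,)
    logits = classifier(pseudo.flatten(0, 1))                         # (C_old * S, C_total)
    # Rehearsing old decision boundaries from pseudo-features keeps the classifier from
    # drifting toward new classes without storing any old images.
    return F.cross_entropy(logits, labels)
```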
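For the training-free CLIP baseline above, the sketch below fits Gaussian Discriminant Analysis with a tied covariance on labelled (e.g. few-shot) CLIP image features, yielding a linear classifier without any gradient training. The uniform class prior, the shrinkage term, and all names are assumptions; how the paper combines this head with CLIP's zero-shot weights is not covered by the snippet.

```python
import numpy as np

def gda_head(feats, labels, num_classes, shrinkage=1e-4):
    # feats: (N, D) CLIP image features of labelled downstream samples; labels: (N,).
    # Fit class means and a tied covariance, then turn them into a linear head:
    # score_c(x) = x @ W[c] + b[c], with W[c] = Sigma^-1 mu_c and
    # b[c] = -0.5 * mu_c^T Sigma^-1 mu_c (uniform class prior assumed).
    feats = feats / np.linalg.norm(feats, axis=1, keepdims=True)  # CLIP features are usually L2-normalised
    mus = np.stack([feats[labels == c].mean(axis=0) for c in range(num_classes)])
    centered = feats - mus[labels]
    cov = centered.T @ centered / len(feats) + shrinkage * np.eye(feats.shape[1])
    prec = np.linalg.inv(cov)
    W = mus @ prec
    b = -0.5 * np.einsum('cd,cd->c', W, mus)
    return W, b  # predict: np.argmax(x @ W.T + b, axis=-1)
```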
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.