Category Adaptation Meets Projected Distillation in Generalized Continual Category Discovery
- URL: http://arxiv.org/abs/2308.12112v4
- Date: Thu, 25 Jul 2024 11:49:54 GMT
- Title: Category Adaptation Meets Projected Distillation in Generalized Continual Category Discovery
- Authors: Grzegorz Rypeść, Daniel Marczak, Sebastian Cygert, Tomasz Trzciński, Bartłomiej Twardowski
- Abstract summary: Generalized Continual Category Discovery (GCCD) tackles learning from sequentially arriving, partially labeled datasets.
We introduce a novel technique integrating a learnable projector with feature distillation, thus enhancing model adaptability without sacrificing past knowledge.
We demonstrate that while each component offers modest benefits individually, their combination, dubbed CAMP, significantly improves the balance between learning new information and retaining old knowledge.
- Score: 0.9349784561232036
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generalized Continual Category Discovery (GCCD) tackles learning from sequentially arriving, partially labeled datasets while uncovering new categories. Traditional methods depend on feature distillation to prevent forgetting the old knowledge. However, this strategy restricts the model's ability to adapt and effectively distinguish new categories. To address this, we introduce a novel technique integrating a learnable projector with feature distillation, thus enhancing model adaptability without sacrificing past knowledge. The resulting distribution shift of the previously learned categories is mitigated with an auxiliary category adaptation network. We demonstrate that while each component offers modest benefits individually, their combination, dubbed CAMP (Category Adaptation Meets Projected distillation), significantly improves the balance between learning new information and retaining old knowledge. CAMP exhibits superior performance across several GCCD and Class Incremental Learning scenarios. The code is available at https://github.com/grypesc/CAMP.
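The projector-plus-distillation idea in the abstract can be illustrated with a short PyTorch sketch. This is a minimal reading under stated assumptions (an MLP projector and an MSE distillation loss); it is not the authors' implementation, which lives in the linked repository.

```python
# A minimal sketch of feature distillation through a learnable projector.
# The projector architecture and MSE loss are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectedDistillation(nn.Module):
    def __init__(self, feat_dim: int):
        super().__init__()
        # Learnable projector maps new features back to the old feature
        # space, so the backbone may drift while past knowledge is kept.
        self.projector = nn.Sequential(
            nn.Linear(feat_dim, feat_dim),
            nn.ReLU(),
            nn.Linear(feat_dim, feat_dim),
        )

    def forward(self, new_feats: torch.Tensor, old_feats: torch.Tensor) -> torch.Tensor:
        # Distill the *projected* new features toward the frozen old ones,
        # rather than constraining the new features directly.
        projected = self.projector(new_feats)
        return F.mse_loss(projected, old_feats.detach())
```

Distilling through the projector lets the backbone adapt to new categories while the projector absorbs the mapping back to the old feature space; the category adaptation network mentioned in the abstract would then compensate the resulting shift of the old categories.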
Related papers
- Class incremental learning with probability dampening and cascaded gated classifier [4.285597067389559]
We propose a novel incremental regularisation approach called Margin Dampening and Cascaded Scaling.
The first combines a soft constraint and a knowledge distillation approach to preserve past knowledge while still allowing the model to learn new patterns.
We empirically show that our approach performs well on multiple benchmarks against well-established baselines.
arXiv Detail & Related papers (2024-02-02T09:33:07Z)
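The summary above describes combining a soft constraint with knowledge distillation in one objective. A generic, hedged sketch of such a combination; the temperature and weighting are assumptions, not values from the paper:

```python
# A hedged sketch of a combined objective: cross-entropy on new data plus a
# soft knowledge-distillation constraint toward a frozen old model. The
# temperature and alpha weighting are illustrative assumptions.
import torch.nn.functional as F

def regularised_loss(new_logits, old_logits, targets, temperature=2.0, alpha=0.5):
    # Learn the new patterns with standard cross-entropy.
    ce = F.cross_entropy(new_logits, targets)
    # Soft constraint: keep predictions close to the frozen old model's.
    kd = F.kl_div(
        F.log_softmax(new_logits / temperature, dim=1),
        F.softmax(old_logits.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2
    return (1 - alpha) * ce + alpha * kd
```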
- Few-Shot Class-Incremental Learning via Training-Free Prototype Calibration [67.69532794049445]
We find a tendency for existing methods to misclassify samples of new classes into base classes, which leads to poor performance on the new classes.
We propose a simple yet effective Training-frEE calibratioN (TEEN) strategy to enhance the discriminability of new classes.
arXiv Detail & Related papers (2023-12-08T18:24:08Z)
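Training-free calibration, as the TEEN summary describes it, amounts to pulling each new-class prototype toward semantically similar base-class prototypes. A rough sketch; the fusion weight and softmax temperature below are assumptions:

```python
# A hedged sketch of training-free prototype calibration: each new-class
# prototype is blended with a similarity-weighted mixture of base prototypes.
# `alpha` and `tau` are illustrative assumptions, not the paper's values.
import torch.nn.functional as F

def calibrate_prototypes(new_protos, base_protos, alpha=0.5, tau=16.0):
    # Cosine similarity of every new prototype to every base prototype.
    sim = F.normalize(new_protos, dim=1) @ F.normalize(base_protos, dim=1).T
    # Softmax weights emphasise semantically similar base classes.
    weights = F.softmax(tau * sim, dim=1)
    # Blend the original prototype with the weighted base-prototype mixture.
    return alpha * new_protos + (1 - alpha) * weights @ base_protos
```

Since nothing is trained, the calibration costs a single matrix product at the end of each session.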
- TLCE: Transfer-Learning Based Classifier Ensembles for Few-Shot Class-Incremental Learning [5.753740302289126]
Few-shot class-incremental learning (FSCIL) struggles to incrementally recognize novel classes from few examples.
We propose TLCE, which ensembles multiple pre-trained models to improve separation of novel and old classes.
arXiv Detail & Related papers (2023-12-07T11:16:00Z)
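One plausible reading of "ensembles multiple pre-trained models" is averaging class scores from per-backbone nearest-prototype classifiers; the uniform weighting below is an assumption:

```python
# A hedged sketch of ensembling prototype classifiers over several frozen,
# pre-trained backbones; uniform weighting is an illustrative assumption.
import torch
import torch.nn.functional as F

def ensemble_predict(feats_per_model, protos_per_model):
    # feats_per_model: list of (batch, dim_i) features, one per backbone.
    # protos_per_model: list of (num_classes, dim_i) class prototypes.
    scores = [
        F.normalize(f, dim=1) @ F.normalize(p, dim=1).T
        for f, p in zip(feats_per_model, protos_per_model)
    ]
    # Average cosine scores across backbones, then take the best class.
    return torch.stack(scores).mean(dim=0).argmax(dim=1)
```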
- Dynamic Conceptional Contrastive Learning for Generalized Category Discovery [76.82327473338734]
Generalized category discovery (GCD) aims to automatically cluster partially labeled data.
Unlabeled data contain instances that are not only from known categories of the labeled data but also from novel categories.
One effective way for GCD is applying self-supervised learning to learn discriminative representations for unlabeled data.
We propose a Dynamic Conceptional Contrastive Learning framework, which can effectively improve clustering accuracy.
arXiv Detail & Related papers (2023-03-30T14:04:39Z)
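The "self-supervised learning to learn discriminative representations" in the summary above typically means a contrastive objective. A minimal InfoNCE-style sketch, not DCCL's full conceptional loss; the temperature is an assumption:

```python
# A minimal InfoNCE-style contrastive loss over two augmented views of the
# same batch; positives sit on the diagonal. Temperature is an assumption.
import torch
import torch.nn.functional as F

def info_nce(z1, z2, temperature=0.07):
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature                      # pairwise similarities
    labels = torch.arange(z1.size(0), device=z1.device)   # diagonal positives
    return F.cross_entropy(logits, labels)
```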
- Class-Incremental Learning: A Survey [84.30083092434938]
Class-Incremental Learning (CIL) enables the learner to incorporate the knowledge of new classes incrementally.
In CIL, the model tends to catastrophically forget the characteristics of former classes, and its performance drastically degrades.
We provide a rigorous and unified evaluation of 17 methods in benchmark image classification tasks to find out the characteristics of different algorithms.
arXiv Detail & Related papers (2023-02-07T17:59:05Z)
- RECALL: Rehearsal-free Continual Learning for Object Classification [24.260824374268314]
Convolutional neural networks show remarkable results in classification but struggle to learn new classes on the fly.
We present a novel rehearsal-free approach in which a deep neural network continually learns new, unseen object categories.
Our approach is called RECALL, as the network recalls categories by calculating logits for old categories before training new ones.
arXiv Detail & Related papers (2022-09-29T13:36:28Z)
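The logit-recall step the RECALL summary describes can be sketched as recording the frozen model's old-category logits on incoming data and reusing them as rehearsal-free distillation targets. The function names here are illustrative assumptions:

```python
# A hedged sketch of logit recall: before training on new categories, record
# the current model's old-category logits on the new data, then keep those
# outputs stable during training. Names are illustrative assumptions.
import torch
import torch.nn.functional as F

@torch.no_grad()
def record_old_logits(model, new_loader):
    model.eval()
    return [model(x) for x, _ in new_loader]  # old-category logits per batch

def recall_loss(new_logits, recalled_logits, num_old):
    # Penalise drift of the old-category outputs from the recalled values.
    return F.mse_loss(new_logits[:, :num_old], recalled_logits[:, :num_old])
```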
- Class-incremental Novel Class Discovery [76.35226130521758]
We study the new task of class-incremental Novel Class Discovery (class-iNCD).
We propose a novel approach for class-iNCD which prevents forgetting of past information about the base classes.
Our experiments, conducted on three common benchmarks, demonstrate that our method significantly outperforms state-of-the-art approaches.
arXiv Detail & Related papers (2022-07-18T13:49:27Z)
- Exemplar-free Class Incremental Learning via Discriminative and Comparable One-class Classifiers [12.121885324463388]
We propose a new framework, named Discriminative and Comparable One-class classifiers for Incremental Learning (DisCOIL).
DisCOIL follows the basic principle of POC, but it adopts variational auto-encoders (VAEs) instead of other well-established one-class classifiers (e.g. deep SVDD).
With this advantage, DisCOIL trains a new-class VAE in contrast with the old-class VAEs, which forces the new-class VAE to reconstruct better for new-class samples but worse for the old-class pseudo samples, thus enhancing the…
arXiv Detail & Related papers (2022-01-05T07:16:34Z)
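The reconstruction contrast described in the DisCOIL summary (reconstruct new-class samples well, old-class pseudo samples poorly) can be sketched as a margin between reconstruction errors. The margin value, and the assumption that `vae(x)` returns a reconstruction, are illustrative:

```python
# A hedged sketch of the reconstruction contrast: the new-class VAE should
# reconstruct new samples well and old-class pseudo samples poorly. Assumes
# vae(x) returns the reconstruction; the margin is an illustrative value.
import torch.nn.functional as F

def contrastive_vae_loss(vae, new_x, old_pseudo_x, margin=1.0):
    err_new = F.mse_loss(vae(new_x), new_x)                # should be low
    err_old = F.mse_loss(vae(old_pseudo_x), old_pseudo_x)  # should be high
    # Hinge: require err_old to exceed err_new by at least `margin`.
    return err_new + F.relu(margin + err_new - err_old)
```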
- Long-tail Recognition via Compositional Knowledge Transfer [60.03764547406601]
We introduce a novel strategy for long-tail recognition that addresses the tail classes' few-shot problem.
Our objective is to transfer knowledge acquired from information-rich common classes to semantically similar, and yet data-hungry, rare classes.
Experiments show that our approach can achieve significant performance boosts on rare classes while maintaining robust common class performance.
arXiv Detail & Related papers (2021-12-13T15:48:59Z)
- Adaptive Class Suppression Loss for Long-Tail Object Detection [49.7273558444966]
We devise a novel Adaptive Class Suppression Loss (ACSL) to improve the detection performance of tail categories.
Our ACSL achieves 5.18% and 5.2% improvements with ResNet50-FPN, and sets a new state of the art.
arXiv Detail & Related papers (2021-04-02T05:12:31Z)
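The suppression idea in the ACSL summary can be sketched as a sigmoid loss in which negative classes only contribute when the model actually confuses them with the target, protecting tail categories from constant suppression; the threshold below is an assumption:

```python
# A hedged sketch of adaptive class suppression: negative classes are only
# penalised when predicted confidently (above a threshold). The threshold
# value is an illustrative assumption, not the paper's setting.
import torch
import torch.nn.functional as F

def suppression_loss(logits, targets, threshold=0.7):
    probs = torch.sigmoid(logits)
    onehot = F.one_hot(targets, logits.size(1)).float()
    # Positives always count; negatives only when confidently confused.
    weights = onehot + (1 - onehot) * (probs > threshold).float()
    bce = F.binary_cross_entropy_with_logits(logits, onehot, reduction="none")
    return (weights * bce).sum() / onehot.sum()
```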
This list is automatically generated from the titles and abstracts of the papers in this site.