Exploring the Tradeoff Between Diversity and Discrimination for Continuous Category Discovery
- URL: http://arxiv.org/abs/2508.11173v1
- Date: Fri, 15 Aug 2025 02:51:30 GMT
- Title: Exploring the Tradeoff Between Diversity and Discrimination for Continuous Category Discovery
- Authors: Ruobing Jiang, Yang Liu, Haobing Liu, Yanwei Yu, Chunyang Wang
- Abstract summary: Continuous category discovery (CCD) aims to automatically discover novel categories in continuously arriving unlabeled data. Most CCD methods cannot handle the contradiction between novel class discovery and classification well. We propose Independence-based Diversity and Orthogonality-based Discrimination (IDOD). Our method outperforms the state-of-the-art methods on challenging fine-grained datasets.
- Score: 15.22499403972592
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Continuous category discovery (CCD) aims to automatically discover novel categories in continuously arriving unlabeled data. This is challenging because neither the number of categories nor the labels of the newly arrived data are known, while catastrophic forgetting must also be mitigated. Most CCD methods cannot handle the contradiction between novel class discovery and classification well. They are also prone to accumulating errors in the process of gradually discovering novel classes. Moreover, most of them rely on knowledge distillation and data replay to prevent forgetting, which occupies more storage space. To address these limitations, we propose Independence-based Diversity and Orthogonality-based Discrimination (IDOD). IDOD comprises three modules: independent enrichment of diversity, joint discovery of novelty, and continuous increment by orthogonality. In independent enrichment, the backbone is trained separately with a contrastive loss to prevent it from focusing only on features useful for classification. Joint discovery transforms multi-stage novel class discovery into a single stage, reducing the impact of error accumulation. The continuous increment by orthogonality module generates mutually orthogonal prototypes for classification and prevents forgetting with lower space overhead via representative representation replay. Experimental results show that our method outperforms the state-of-the-art methods on challenging fine-grained datasets.
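The abstract's orthogonality idea can be illustrated with a minimal sketch. This is not the authors' implementation: the QR-based construction, the function names, and cosine-similarity assignment are assumptions chosen to show how mutually orthogonal class prototypes could be generated and used for classification.

```python
import numpy as np

def orthogonal_prototypes(num_classes: int, dim: int, seed: int = 0) -> np.ndarray:
    """Generate `num_classes` mutually orthogonal unit prototypes in `dim`-dimensional
    space via QR decomposition of a random Gaussian matrix (requires num_classes <= dim)."""
    assert num_classes <= dim, "cannot have more orthogonal prototypes than dimensions"
    rng = np.random.default_rng(seed)
    # Reduced QR of a (dim, num_classes) matrix gives orthonormal columns.
    q, _ = np.linalg.qr(rng.standard_normal((dim, num_classes)))
    return q.T  # shape (num_classes, dim); rows are orthonormal prototypes

def classify(features: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Assign each feature vector to the prototype with the highest cosine similarity."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    return (f @ prototypes.T).argmax(axis=1)
```

Because the prototypes are pairwise orthogonal, each class direction is maximally separated from the others, which is the discriminative property the abstract attributes to its prototype module.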
Related papers
- Generate, Refine, and Encode: Leveraging Synthesized Novel Samples for On-the-Fly Fine-Grained Category Discovery [64.83837781610907]
We investigate the online identification of newly arriving stream data that may belong to both known and unknown categories. Existing OCD methods are devoted to fully mining transferable knowledge from only labeled data. We propose a diffusion-based OCD framework, dubbed DiffGRE, which integrates attribute-composition generation, Refinement, and supervised recognition.
arXiv Detail & Related papers (2025-07-05T14:20:49Z) - Generalized Semantic Contrastive Learning via Embedding Side Information for Few-Shot Object Detection [52.490375806093745]
The objective of few-shot object detection (FSOD) is to detect novel objects with few training samples. We introduce side information to alleviate the negative influences derived from the feature space and sample viewpoints. Our model outperforms the previous state-of-the-art methods, significantly improving the ability of FSOD in most shots/splits.
arXiv Detail & Related papers (2025-04-09T17:24:05Z) - Freeze and Cluster: A Simple Baseline for Rehearsal-Free Continual Category Discovery [13.68907640197364]
This paper addresses the problem of Rehearsal-Free Continual Category Discovery (RF-CCD). RF-CCD focuses on continuously identifying novel classes by leveraging knowledge from labeled data. Previous approaches have struggled to effectively integrate advanced techniques from both domains.
arXiv Detail & Related papers (2025-03-12T06:46:32Z) - Class Incremental Fault Diagnosis under Limited Fault Data via Supervised Contrastive Knowledge Distillation [9.560742599396411]
Class-incremental fault diagnosis requires a model to adapt to new fault classes while retaining previous knowledge. We introduce a Supervised Contrastive knowledge distiLlation for class Incremental Fault Diagnosis framework.
arXiv Detail & Related papers (2025-01-16T13:20:29Z) - Learning Invariant Molecular Representation in Latent Discrete Space [52.13724532622099]
We propose a new framework for learning molecular representations that exhibit invariance and robustness against distribution shifts.
Our model achieves stronger generalization against state-of-the-art baselines in the presence of various distribution shifts.
arXiv Detail & Related papers (2023-10-22T04:06:44Z) - Dynamic Conceptional Contrastive Learning for Generalized Category Discovery [76.82327473338734]
Generalized category discovery (GCD) aims to automatically cluster partially labeled data.
Unlabeled data contain instances that are not only from known categories of the labeled data but also from novel categories.
One effective way for GCD is applying self-supervised learning to learn discriminative representations for unlabeled data.
We propose a Dynamic Conceptional Contrastive Learning framework, which can effectively improve clustering accuracy.
arXiv Detail & Related papers (2023-03-30T14:04:39Z) - Few-shot Object Detection with Refined Contrastive Learning [4.520231308678286]
We propose a novel few-shot object detection (FSOD) method with Refined Contrastive Learning (FSRC).
A pre-determination component is introduced to find out the Resemblance Group from novel classes which contains confusable classes.
RCL is pointedly performed on this group of classes in order to increase the inter-class distances among them.
arXiv Detail & Related papers (2022-11-24T09:34:20Z) - Spacing Loss for Discovering Novel Categories [72.52222295216062]
Novel Class Discovery (NCD) is a learning paradigm, where a machine learning model is tasked to semantically group instances from unlabeled data.
We first characterize existing NCD approaches into single-stage and two-stage methods based on whether they require access to labeled and unlabeled data together.
We devise a simple yet powerful loss function that enforces separability in the latent space using cues from multi-dimensional scaling.
arXiv Detail & Related papers (2022-04-22T09:37:11Z) - Weakly-Supervised Cross-Domain Adaptation for Endoscopic Lesions Segmentation [79.58311369297635]
We propose a new weakly-supervised lesions transfer framework, which can explore transferable domain-invariant knowledge across different datasets.
A Wasserstein quantified transferability framework is developed to highlight wide-range transferable contextual dependencies.
A novel self-supervised pseudo label generator is designed to equally provide confident pseudo pixel labels for both hard-to-transfer and easy-to-transfer target samples.
arXiv Detail & Related papers (2020-12-08T02:26:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.