Dynamic Conceptional Contrastive Learning for Generalized Category
Discovery
- URL: http://arxiv.org/abs/2303.17393v1
- Date: Thu, 30 Mar 2023 14:04:39 GMT
- Title: Dynamic Conceptional Contrastive Learning for Generalized Category
Discovery
- Authors: Nan Pu, Zhun Zhong and Nicu Sebe
- Abstract summary: Generalized category discovery (GCD) aims to automatically cluster partially labeled data.
Unlabeled data contain instances that are not only from known categories of the labeled data but also from novel categories.
One effective approach to GCD is applying self-supervised learning to learn discriminative representations for unlabeled data.
We propose a Dynamic Conceptional Contrastive Learning framework, which can effectively improve clustering accuracy.
- Score: 76.82327473338734
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generalized category discovery (GCD) is a recently proposed open-world
problem, which aims to automatically cluster partially labeled data. The main
challenge is that the unlabeled data contain instances that are not only from
known categories of the labeled data but also from novel categories. This leads
traditional novel category discovery (NCD) methods to fail on GCD, because they
assume that unlabeled data come only from novel categories. One effective
approach to GCD is applying self-supervised learning to learn discriminative
representations for unlabeled data. However, this approach largely
ignores underlying relationships between instances of the same concepts (e.g.,
class, super-class, and sub-class), which results in inferior representation
learning. In this paper, we propose a Dynamic Conceptional Contrastive Learning
(DCCL) framework, which can effectively improve clustering accuracy by
alternately estimating underlying visual conceptions and learning conceptional
representation. In addition, we design a dynamic conception generation and
update mechanism, which is able to ensure consistent conception learning and
thus further facilitate the optimization of DCCL. Extensive experiments show
that DCCL achieves new state-of-the-art performances on six generic and
fine-grained visual recognition datasets, especially on fine-grained ones. For
example, our method significantly surpasses the best competitor by 16.2% on the
new classes for the CUB-200 dataset. Code is available at
https://github.com/TPCD/DCCL.
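The alternating scheme the abstract describes (estimate underlying conceptions, learn representations against them, and keep the conceptions consistent via a momentum-style update) can be sketched roughly as follows. This is a toy NumPy illustration under stated assumptions, not the released DCCL implementation; every function name, the k-means conception estimator, and all hyper-parameters here are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def l2_normalize(x, axis=-1):
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def estimate_conceptions(features, k, iters=10):
    """Estimate k conception prototypes with a simple k-means pass
    over L2-normalized features (a stand-in for conception discovery)."""
    protos = features[rng.choice(len(features), k, replace=False)]
    for _ in range(iters):
        sims = features @ protos.T                # cosine similarity on unit vectors
        assign = sims.argmax(axis=1)
        for j in range(k):
            members = features[assign == j]
            if len(members):
                protos[j] = l2_normalize(members.mean(axis=0))
    return protos, assign

def conceptional_contrastive_loss(features, protos, assign, temp=0.1):
    """InfoNCE-style loss pulling each instance toward its own conception
    prototype and away from the other prototypes."""
    logits = features @ protos.T / temp
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(features)), assign].mean()

def momentum_update(protos, features, assign, m=0.9):
    """Keep conceptions consistent across training rounds by blending the
    old prototype with the current cluster mean (moving average)."""
    new = protos.copy()
    for j in range(len(protos)):
        members = features[assign == j]
        if len(members):
            new[j] = l2_normalize(m * protos[j] + (1 - m) * members.mean(axis=0))
    return new

# Toy data: three latent concepts in 16-D, projected to the unit sphere.
feats = l2_normalize(np.concatenate(
    [rng.normal(loc=c, scale=0.3, size=(30, 16)) for c in (-1.0, 0.0, 1.0)]))
protos, assign = estimate_conceptions(feats, k=3)
loss = conceptional_contrastive_loss(feats, protos, assign)
protos = momentum_update(protos, feats, assign)
print(round(float(loss), 4))
```

In the real method, the learned representation would then be refined against the updated prototypes and the estimate/learn steps would alternate; here a single round is shown for clarity.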
Related papers
- Prototypical Hash Encoding for On-the-Fly Fine-Grained Category Discovery [65.16724941038052]
Category-aware Prototype Generation (CPG) and Discriminative Category Encoding (DCE) are proposed.
CPG enables the model to fully capture the intra-category diversity by representing each category with multiple prototypes.
DCE boosts the discrimination ability of hash code with the guidance of the generated category prototypes.
arXiv Detail & Related papers (2024-10-24T23:51:40Z) - Happy: A Debiased Learning Framework for Continual Generalized Category Discovery [54.54153155039062]
This paper explores the underexplored task of Continual Generalized Category Discovery (C-GCD).
C-GCD aims to incrementally discover new classes from unlabeled data while maintaining the ability to recognize previously learned classes.
We introduce a debiased learning framework, namely Happy, characterized by Hardness-aware prototype sampling and soft entropy regularization.
arXiv Detail & Related papers (2024-10-09T04:18:51Z) - Beyond Known Clusters: Probe New Prototypes for Efficient Generalized Class Discovery [23.359450657842686]
Generalized Class Discovery (GCD) aims to dynamically assign labels to unlabelled data, based in part on knowledge learned from labelled data.
We propose an adaptive probing mechanism that introduces learnable potential prototypes to expand cluster prototypes.
Our method surpasses the nearest competitor by a significant margin of 9.7% within the Stanford Cars dataset.
arXiv Detail & Related papers (2024-04-13T12:41:40Z) - Generalized Category Discovery with Clustering Assignment Consistency [56.92546133591019]
Generalized category discovery (GCD) is a recently proposed open-world task.
We propose a co-training-based framework that encourages clustering consistency.
Our method achieves state-of-the-art performance on three generic benchmarks and three fine-grained visual recognition datasets.
arXiv Detail & Related papers (2023-10-30T00:32:47Z) - Parametric Classification for Generalized Category Discovery: A Baseline
Study [70.73212959385387]
Generalized Category Discovery (GCD) aims to discover novel categories in unlabelled datasets using knowledge learned from labelled samples.
We investigate the failure of parametric classifiers, verify the effectiveness of previous design choices when high-quality supervision is available, and identify unreliable pseudo-labels as a key problem.
We propose a simple yet effective parametric classification method that benefits from entropy regularisation, achieves state-of-the-art performance on multiple GCD benchmarks and shows strong robustness to unknown class numbers.
arXiv Detail & Related papers (2022-11-21T18:47:11Z) - XCon: Learning with Experts for Fine-grained Category Discovery [4.787507865427207]
We present a novel method called Expert-Contrastive Learning (XCon) to help the model to mine useful information from the images.
Experiments on fine-grained datasets show a clear improved performance over the previous best methods, indicating the effectiveness of our method.
arXiv Detail & Related papers (2022-08-03T08:03:12Z) - Novel Class Discovery in Semantic Segmentation [104.30729847367104]
We introduce a new setting of Novel Class Discovery in Semantic Segmentation (NCDSS).
It aims at segmenting unlabeled images containing new classes given prior knowledge from a labeled set of disjoint classes.
In NCDSS, we need to distinguish the objects and background, and to handle the existence of multiple classes within an image.
We propose the Entropy-based Uncertainty Modeling and Self-training (EUMS) framework to overcome noisy pseudo-labels.
arXiv Detail & Related papers (2021-12-03T13:31:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.