PartCo: Part-Level Correspondence Priors Enhance Category Discovery
- URL: http://arxiv.org/abs/2509.22769v1
- Date: Fri, 26 Sep 2025 17:59:16 GMT
- Title: PartCo: Part-Level Correspondence Priors Enhance Category Discovery
- Authors: Fernando Julio Cendra, Kai Han
- Abstract summary: Generalized Category Discovery (GCD) aims to identify both known and novel categories within unlabeled data. PartCo is a novel framework that enhances category discovery by incorporating part-level visual feature correspondences. Our experiments on multiple benchmark datasets demonstrate that PartCo significantly improves the performance of current GCD approaches.
- Score: 60.37931246658181
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Generalized Category Discovery (GCD) aims to identify both known and novel categories within unlabeled data by leveraging a set of labeled examples from known categories. Existing GCD methods primarily depend on semantic labels and global image representations, often overlooking the detailed part-level cues that are crucial for distinguishing closely related categories. In this paper, we introduce PartCo, short for Part-Level Correspondence Prior, a novel framework that enhances category discovery by incorporating part-level visual feature correspondences. By leveraging part-level relationships, PartCo captures finer-grained semantic structures, enabling a more nuanced understanding of category relationships. Importantly, PartCo seamlessly integrates with existing GCD methods without requiring significant modifications. Our extensive experiments on multiple benchmark datasets demonstrate that PartCo significantly improves the performance of current GCD approaches, achieving state-of-the-art results by bridging the gap between semantic labels and part-level visual compositions, thereby setting new benchmarks for GCD. Project page: https://visual-ai.github.io/partco
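The abstract does not describe PartCo's exact matching procedure, but the core idea of part-level feature correspondence can be illustrated with a minimal sketch: given part-level features extracted from two images (e.g. pooled ViT patch embeddings), match each part in one image to its most cosine-similar part in the other. The function name, shapes, and matching rule below are illustrative assumptions, not PartCo's actual API.

```python
import numpy as np

def part_correspondences(feats_a, feats_b):
    """Match each part feature in image A to its most similar part in image B
    by cosine similarity. Illustrative sketch only, not PartCo's method.

    feats_a, feats_b: (num_parts, dim) arrays of part-level features,
    e.g. ViT patch embeddings pooled into parts."""
    a = feats_a / np.linalg.norm(feats_a, axis=1, keepdims=True)
    b = feats_b / np.linalg.norm(feats_b, axis=1, keepdims=True)
    sim = a @ b.T                 # (num_parts_a, num_parts_b) cosine matrix
    matches = sim.argmax(axis=1)  # best-matching part in B for each part in A
    scores = sim.max(axis=1)      # similarity of each match
    return matches, scores

rng = np.random.default_rng(0)
fa = rng.normal(size=(4, 8))                   # 4 hypothetical part features
fb = np.vstack([fa[2], fa[0], fa[3], fa[1]])   # same parts, permuted
m, s = part_correspondences(fa, fb)
print(m)  # recovers the permutation: [1 3 0 2]
```

A greedy argmax match suffices for illustration; a real system might instead use a one-to-one assignment (e.g. the Hungarian algorithm) so that no two parts claim the same counterpart.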
Related papers
- Consistent Supervised-Unsupervised Alignment for Generalized Category Discovery [49.67913741459179]
Generalized Category Discovery (GCD) focuses on classifying known categories while simultaneously discovering novel categories from unlabeled data. Previous GCD methods face challenges due to inconsistent optimization objectives and category confusion. We propose the Neural Collapse-inspired Generalized Category Discovery (NC-GCD) framework.
arXiv Detail & Related papers (2025-07-07T07:34:41Z)
- Generalized Category Discovery in Event-Centric Contexts: Latent Pattern Mining with LLMs [34.06878654462158]
We introduce Event-Centric GCD, characterized by long, complex narratives and highly imbalanced class distributions. We propose PaMA, a framework leveraging LLMs to extract and refine event patterns for improved cluster-class alignment. Evaluations on two EC-GCD benchmarks, including a newly constructed Scam Report dataset, demonstrate that PaMA outperforms prior methods with up to 12.58% H-score gains.
arXiv Detail & Related papers (2025-05-29T10:02:04Z)
- Contextuality Helps Representation Learning for Generalized Category Discovery [5.885208652383516]
This paper introduces a novel approach to Generalized Category Discovery (GCD) by leveraging the concept of contextuality.
Our model integrates two levels of contextuality: instance-level, where nearest-neighbor contexts are utilized for contrastive learning, and cluster-level, where contrastive learning is applied across clusters.
Integrating this contextual information improves feature learning and thereby the classification accuracy across all categories.
arXiv Detail & Related papers (2024-07-29T07:30:41Z)
- Textual Knowledge Matters: Cross-Modality Co-Teaching for Generalized Visual Class Discovery [65.16724941038052]
Generalized Category Discovery (GCD) aims to cluster unlabeled data from both known and unknown categories. Current GCD methods rely only on visual cues, which neglect the multi-modality perceptive nature of human cognitive processes in discovering novel visual categories. We propose a two-phase TextGCD framework to accomplish multi-modality GCD by exploiting powerful Visual-Language Models.
arXiv Detail & Related papers (2024-03-12T07:06:50Z)
- CLIP-GCD: Simple Language Guided Generalized Category Discovery [21.778676607030253]
Generalized Category Discovery (GCD) requires a model to both classify known categories and cluster unknown categories in unlabeled data.
Prior methods leveraged self-supervised pre-training combined with supervised fine-tuning on the labeled data, followed by simple clustering methods.
We propose to leverage multi-modal (vision and language) models, in two complementary ways.
arXiv Detail & Related papers (2023-05-17T17:55:33Z)
- Dynamic Conceptional Contrastive Learning for Generalized Category Discovery [76.82327473338734]
Generalized category discovery (GCD) aims to automatically cluster partially labeled data.
Unlabeled data contain instances that are not only from known categories of the labeled data but also from novel categories.
One effective way for GCD is applying self-supervised learning to learn discriminative representations for unlabeled data.
We propose a Dynamic Conceptional Contrastive Learning framework, which can effectively improve clustering accuracy.
arXiv Detail & Related papers (2023-03-30T14:04:39Z)
- Open-world Semantic Segmentation via Contrasting and Clustering Vision-Language Embedding [95.78002228538841]
We propose a new open-world semantic segmentation pipeline that makes the first attempt to learn to segment semantic objects of various open-world categories without any efforts on dense annotations.
Our method can directly segment objects of arbitrary categories, outperforming zero-shot segmentation methods that require data labeling on three benchmark datasets.
arXiv Detail & Related papers (2022-07-18T09:20:04Z)
- Generalized Few-shot Semantic Segmentation [68.69434831359669]
We introduce a new benchmark called Generalized Few-shot Semantic Segmentation (GFS-Seg) to analyze the ability to simultaneously segment novel and base categories.
It is the first study showing that previous representative state-of-the-art approaches fall short in GFS-Seg.
We propose Context-Aware Prototype Learning (CAPL), which significantly improves performance by 1) leveraging co-occurrence prior knowledge from support samples, and 2) dynamically enriching contextual information, conditioned on the content of each query image.
arXiv Detail & Related papers (2020-10-11T10:13:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.