Learning Semi-supervised Gaussian Mixture Models for Generalized Category Discovery
- URL: http://arxiv.org/abs/2305.06144v2
- Date: Thu, 17 Aug 2023 14:12:37 GMT
- Title: Learning Semi-supervised Gaussian Mixture Models for Generalized Category Discovery
- Authors: Bingchen Zhao, Xin Wen, Kai Han
- Abstract summary: We propose an EM-like framework that alternates between representation learning and class number estimation.
We evaluate our framework on both generic image classification datasets and challenging fine-grained object recognition datasets.
- Score: 36.01459228175808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we address the problem of generalized category discovery
(GCD), i.e., given a set of images of which some are labelled and the rest are not,
the task is to automatically cluster the images in the unlabelled data, leveraging
the information from the labelled data, while the unlabelled data contain images
from both the labelled classes and new ones. GCD is similar to semi-supervised
learning (SSL) but is more realistic and challenging, as SSL assumes all the
unlabelled images come from the same classes as the labelled ones. We also do not
assume the number of classes in the unlabelled data is known a priori, which makes
the GCD problem even harder. To tackle GCD without knowing the class number, we
propose an EM-like framework that alternates between representation learning and
class number estimation. We propose a semi-supervised variant of the Gaussian
Mixture Model (GMM) with a stochastic splitting and merging mechanism that
dynamically determines the prototypes by examining cluster compactness and
separability. With these prototypes, we leverage prototypical contrastive learning
for representation learning on the partially labelled data, subject to the
constraints imposed by the labelled data. Our framework alternates between these
two steps until convergence. The cluster assignment for an unlabelled instance can
then be retrieved by identifying its nearest prototype. We comprehensively evaluate
our framework on both generic image classification datasets and challenging
fine-grained object recognition datasets, achieving state-of-the-art performance.
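
The abstract describes the method only at a high level, so below is a minimal, self-contained
sketch in Python of the kind of alternation it refers to: estimating GMM-style prototypes with
a split/merge step driven by cluster compactness and separability, scoring features with a
prototypical contrastive loss, and assigning an unlabelled instance to its nearest prototype.
The function names, thresholds (split_var, merge_dist, temperature), and the simplified
hard-assignment logic are illustrative assumptions rather than the paper's exact procedure;
in the actual framework these steps operate on features from a learned encoder and respect
the labelled-data constraints during representation learning.

```python
import numpy as np


def estimate_prototypes(feats, labels, n_init_clusters=10, rng=None,
                        split_var=1.0, merge_dist=0.5):
    """One prototype-estimation step of a semi-supervised GMM-style model.

    Labelled classes each keep a prototype (their class mean); extra prototypes
    are seeded from unlabelled points, then clusters are split when they are too
    spread out (poor compactness) and merged when two prototypes lie too close
    together (poor separability). Thresholds are illustrative only.
    """
    rng = np.random.default_rng() if rng is None else rng

    # Prototypes for the labelled classes: per-class feature means.
    known = sorted({l for l in labels if l is not None})
    protos = [feats[[i for i, l in enumerate(labels) if l == c]].mean(0) for c in known]

    # Seed candidate prototypes for potential novel classes from unlabelled points.
    unl = [i for i, l in enumerate(labels) if l is None]
    if unl:
        extra = rng.choice(unl, size=min(n_init_clusters, len(unl)), replace=False)
        protos += [feats[i] for i in extra]
    protos = np.stack(protos)

    # Hard E-step: assign every sample to its nearest prototype.
    assign = np.linalg.norm(feats[:, None] - protos[None], axis=-1).argmin(1)

    # Stochastic split: a cluster with high variance (low compactness) may be
    # split into two slightly perturbed prototypes.
    new_protos = []
    for k in range(len(protos)):
        members = feats[assign == k]
        if len(members) == 0:
            continue
        mean = members.mean(0)
        if len(members) > 1 and members.var(0).mean() > split_var and rng.random() < 0.5:
            noise = rng.normal(scale=0.01, size=mean.shape)
            new_protos += [mean + noise, mean - noise]
        else:
            new_protos.append(mean)
    protos = np.stack(new_protos)

    # Merge: prototypes closer than merge_dist (low separability) are averaged.
    merged, used = [], set()
    for i in range(len(protos)):
        if i in used:
            continue
        group = [protos[i]]
        for j in range(i + 1, len(protos)):
            if j not in used and np.linalg.norm(protos[i] - protos[j]) < merge_dist:
                group.append(protos[j])
                used.add(j)
        merged.append(np.mean(group, axis=0))
    return np.stack(merged)


def prototypical_contrastive_loss(feats, protos, temperature=0.1):
    """Value of a prototypical contrastive objective with cosine similarity:
    each feature should be close to its most similar prototype and far from the
    rest (real training would backpropagate this through the encoder)."""
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    p = protos / np.linalg.norm(protos, axis=1, keepdims=True)
    logits = f @ p.T / temperature
    targets = logits.argmax(1)                       # pseudo-label: most similar prototype
    logits = logits - logits.max(1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(1, keepdims=True))
    return -log_prob[np.arange(len(f)), targets].mean()


def assign_cluster(feat, protos):
    """Final cluster assignment for one instance: index of its nearest prototype."""
    return int(np.linalg.norm(protos - feat, axis=1).argmin())


# Toy run: two labelled classes plus unlabelled points from a third, novel class.
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal([0, 0], 0.1, (20, 2)),
                   rng.normal([3, 3], 0.1, (20, 2)),
                   rng.normal([0, 3], 0.1, (20, 2))])
labels = [0] * 20 + [1] * 20 + [None] * 20
protos = estimate_prototypes(feats, labels, n_init_clusters=3, rng=rng)
print(protos.shape, prototypical_contrastive_loss(feats, protos), assign_cluster(feats[-1], protos))
```

On the toy data, the two labelled class means survive the merge step and the seeds placed on
the novel class should collapse into roughly one extra prototype, which is the behaviour the
split/merge mechanism is intended to illustrate here.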
Related papers
- Contrastive Mean-Shift Learning for Generalized Category Discovery [45.19923199324919]
We address the problem of generalized category discovery (GCD).
We revisit the mean-shift algorithm, a powerful technique for mode seeking, and incorporate it into a contrastive learning framework; a generic mean-shift update is sketched after this list.
The proposed method, dubbed Contrastive Mean-Shift (CMS) learning, trains an image encoder to produce representations with better clustering properties.
arXiv Detail & Related papers (2024-04-15T04:31:24Z)
- No Representation Rules Them All in Category Discovery [115.53747187400626]
We tackle the problem of Generalized Category Discovery (GCD).
Given a dataset with labelled and unlabelled images, the task is to cluster all images in the unlabelled subset.
We present a synthetic dataset, named 'Clevr-4', for category discovery.
arXiv Detail & Related papers (2023-11-28T18:59:46Z)
- Generalized Category Discovery with Clustering Assignment Consistency [56.92546133591019]
Generalized category discovery (GCD) is a recently proposed open-world task.
We propose a co-training-based framework that encourages clustering consistency.
Our method achieves state-of-the-art performance on three generic benchmarks and three fine-grained visual recognition datasets.
arXiv Detail & Related papers (2023-10-30T00:32:47Z)
- CiPR: An Efficient Framework with Cross-instance Positive Relations for Generalized Category Discovery [21.380021266251426]
Generalized category discovery (GCD) considers the open-world problem of automatically clustering a partially labelled dataset.
In this paper, we address the GCD problem with an unknown category number for the unlabelled data.
We propose a framework, named CiPR, to bootstrap the representation by exploiting Cross-instance Positive Relations.
arXiv Detail & Related papers (2023-04-14T05:25:52Z)
- AutoNovel: Automatically Discovering and Learning Novel Visual Categories [138.80332861066287]
We present a new approach called AutoNovel to tackle the problem of discovering novel classes in an image collection given labelled examples of other classes.
We evaluate AutoNovel on standard classification benchmarks and substantially outperform current methods for novel category discovery.
arXiv Detail & Related papers (2021-06-29T11:12:16Z)
- Joint Visual and Temporal Consistency for Unsupervised Domain Adaptive Person Re-Identification [64.37745443119942]
This paper jointly enforces visual and temporal consistency by combining a local one-hot classification with a global multi-class classification.
Experimental results on three large-scale ReID datasets demonstrate the superiority of the proposed method in both purely unsupervised and unsupervised domain adaptive ReID tasks.
arXiv Detail & Related papers (2020-07-21T14:31:27Z)
- Unsupervised Person Re-identification via Softened Similarity Learning [122.70472387837542]
Person re-identification (re-ID) is an important topic in computer vision.
This paper studies the unsupervised setting of re-ID, which does not require any labeled information.
Experiments on two image-based and video-based datasets demonstrate state-of-the-art performance.
arXiv Detail & Related papers (2020-04-07T17:16:41Z)
- Automatically Discovering and Learning New Visual Categories with Ranking Statistics [145.89790963544314]
We tackle the problem of discovering novel classes in an image collection given labelled examples of other classes.
We learn a general-purpose clustering model and use the latter to identify the new classes in the unlabelled data.
We evaluate our approach on standard classification benchmarks and outperform current methods for novel category discovery by a significant margin.
arXiv Detail & Related papers (2020-02-13T18:53:32Z)
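
The Contrastive Mean-Shift entry above builds on mean-shift mode seeking; for reference,
here is a minimal sketch of one generic Gaussian-kernel mean-shift update. It is not the
CMS method itself, and the bandwidth value is an arbitrary choice made purely for illustration.

```python
import numpy as np


def mean_shift_step(points, bandwidth=0.5):
    """One generic mean-shift update: move every point toward the Gaussian
    kernel-weighted mean of all points, so the set drifts toward density modes."""
    d2 = ((points[:, None] - points[None]) ** 2).sum(-1)   # pairwise squared distances
    w = np.exp(-d2 / (2 * bandwidth ** 2))                 # Gaussian kernel weights
    return (w @ points) / w.sum(1, keepdims=True)          # kernel-weighted means


pts = np.random.default_rng(0).normal(size=(60, 2))
for _ in range(20):
    pts = mean_shift_step(pts)                             # points collapse onto modes
```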