CLUSTSEG: Clustering for Universal Segmentation
- URL: http://arxiv.org/abs/2305.02187v2
- Date: Thu, 18 May 2023 15:34:53 GMT
- Title: CLUSTSEG: Clustering for Universal Segmentation
- Authors: James Liang, Tianfei Zhou, Dongfang Liu, Wenguan Wang
- Abstract summary: CLUSTSEG is a general, transformer-based framework for image segmentation.
It tackles different image segmentation tasks (i.e., superpixel, semantic, instance, and panoptic) through a unified neural clustering scheme.
- Score: 56.58677563046506
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present CLUSTSEG, a general, transformer-based framework that tackles
different image segmentation tasks (i.e., superpixel, semantic, instance, and
panoptic) through a unified neural clustering scheme. Regarding queries as
cluster centers, CLUSTSEG is innovative in two aspects: 1) cluster centers are
initialized in heterogeneous ways so as to pointedly address task-specific
demands (e.g., instance- or category-level distinctiveness), yet without
modifying the architecture; and 2) pixel-cluster assignment, formalized in a
cross-attention fashion, is alternated with cluster center update, yet without
learning additional parameters. These innovations closely link CLUSTSEG to EM
clustering and make it a transparent and powerful framework that yields
superior results across the above segmentation tasks.
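The EM analogy in the abstract can be made concrete with a short sketch: queries act as cluster centers, the E-step computes pixel-to-cluster assignment as cross-attention (a softmax over scaled dot products), and the M-step re-estimates each center as the assignment-weighted mean of pixel features, with no extra learnable parameters inside the loop. The code below is a minimal illustration under these assumptions, not the authors' implementation; all names are hypothetical.
```python
# Hedged sketch of an EM-style clustering loop with queries as cluster centers.
import torch
import torch.nn.functional as F

def em_cross_attention_clustering(pixel_feats, centers, num_iters=3, tau=1.0):
    """pixel_feats: (N, D) pixel embeddings; centers: (K, D) query/center initialization."""
    for _ in range(num_iters):
        # E-step: soft pixel-to-cluster assignment via cross-attention
        # (scaled dot-product similarity followed by a softmax over the K centers).
        logits = pixel_feats @ centers.t() / (tau * centers.shape[-1] ** 0.5)  # (N, K)
        assign = F.softmax(logits, dim=-1)
        # M-step: update each center as the assignment-weighted mean of pixel features.
        centers = (assign.t() @ pixel_feats) / (assign.sum(dim=0).unsqueeze(-1) + 1e-6)
        centers = F.normalize(centers, dim=-1)
    return assign, centers

# Toy usage: 4096 "pixels" with 64-dim embeddings, 8 clusters.
feats = F.normalize(torch.randn(4096, 64), dim=-1)
init_centers = F.normalize(torch.randn(8, 64), dim=-1)  # the paper uses heterogeneous, task-specific init
soft_masks, final_centers = em_cross_attention_clustering(feats, init_centers)
print(soft_masks.shape, final_centers.shape)  # torch.Size([4096, 8]) torch.Size([8, 64])
```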
Related papers
- Improved Face Representation via Joint Label Classification and Supervised Contrastive Clustering [5.874142059884521]
Face clustering tasks can learn hierarchical semantic information from large-scale data.
This paper proposes a joint optimization of label classification and supervised contrastive clustering to introduce cluster knowledge into the traditional face recognition task.
arXiv Detail & Related papers (2023-12-07T03:55:20Z)
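A minimal sketch of the joint objective described in the entry above: a standard classification loss combined with a supervised contrastive term computed over identity/cluster labels. The weighting, temperature, and function names are illustrative assumptions, not details taken from the paper.
```python
# Hedged sketch (not the paper's implementation): jointly optimize label
# classification and a supervised contrastive clustering term so that
# cluster-level knowledge enters the recognition objective.
import torch
import torch.nn.functional as F

def supervised_contrastive_loss(embeddings, labels, temperature=0.1):
    """SupCon-style term: samples sharing a label attract, all others repel."""
    z = F.normalize(embeddings, dim=-1)
    sim = z @ z.t() / temperature                                  # (B, B) similarities
    pos = labels.unsqueeze(0).eq(labels.unsqueeze(1)).float()      # positives share a label
    pos.fill_diagonal_(0.0)                                        # exclude self-pairs
    sim = sim - torch.eye(len(z), device=z.device) * 1e9           # drop self from the softmax
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(log_prob * pos).sum(dim=1).div(pos.sum(dim=1).clamp(min=1)).mean()

def joint_loss(class_logits, embeddings, labels, weight=0.5):
    # Classification term plus the contrastive clustering term (the weight is a guess).
    return F.cross_entropy(class_logits, labels) + weight * supervised_contrastive_loss(embeddings, labels)
```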
- Generalized Category Discovery with Clustering Assignment Consistency [56.92546133591019]
Generalized category discovery (GCD) is a recently proposed open-world task.
We propose a co-training-based framework that encourages clustering consistency.
Our method achieves state-of-the-art performance on three generic benchmarks and three fine-grained visual recognition datasets.
arXiv Detail & Related papers (2023-10-30T00:32:47Z)
- ClusterFormer: Clustering As A Universal Visual Learner [80.79669078819562]
CLUSTERFORMER is a universal vision model based on the CLUSTERing paradigm with TransFORMER.
It is capable of tackling heterogeneous vision tasks with varying levels of clustering granularity.
Given its efficacy, we hope our work can catalyze a paradigm shift in universal models in computer vision.
arXiv Detail & Related papers (2023-09-22T22:12:30Z)
- Deep Multi-View Subspace Clustering with Anchor Graph [11.291831842959926]
We propose a novel deep multi-view subspace clustering method with anchor graph (DMCAG).
DMCAG learns the embedded features for each view independently, which are used to obtain the subspace representations.
Our method achieves superior clustering performance over other state-of-the-art methods.
arXiv Detail & Related papers (2023-05-11T16:17:43Z) - DeepCut: Unsupervised Segmentation using Graph Neural Networks
Clustering [6.447863458841379]
This study introduces a lightweight Graph Neural Network (GNN) to replace classical clustering methods.
Unlike existing methods, our GNN takes both the pair-wise affinities between local image features and the raw features as input.
We demonstrate how classical clustering objectives can be formulated as self-supervised loss functions for training an image segmentation GNN.
arXiv Detail & Related papers (2022-12-12T12:31:46Z)
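The idea of turning a classical clustering objective into a self-supervised loss for a segmentation GNN, as the entry above describes, can be illustrated with a normalized-cut relaxation in the spirit of MinCutPool/DMoN. The sketch below is an illustrative stand-in under that assumption, not DeepCut's exact loss or architecture.
```python
# Hedged sketch: a normalized-cut-style relaxation used as a self-supervised loss
# on the soft cluster assignments S produced by a GNN over pairwise affinities W.
import torch

def ncut_style_loss(S, W):
    """S: (N, K) soft assignments (rows sum to 1); W: (N, N) pairwise affinities."""
    d = W.sum(dim=1)                                  # node degrees
    cut = torch.einsum('nk,nm,mk->k', S, W, S)        # within-cluster affinity per cluster
    assoc = torch.einsum('nk,n->k', S, d)             # cluster volume per cluster
    ncut = -(cut / (assoc + 1e-9)).sum()              # maximize normalized association
    # Orthogonality/balance regularizer to discourage collapse into a single cluster.
    SS = S.t() @ S
    ortho = torch.norm(SS / SS.norm() - torch.eye(S.shape[1], device=S.device) / S.shape[1] ** 0.5)
    return ncut + ortho
```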
- Self-supervised Contrastive Attributed Graph Clustering [110.52694943592974]
We propose a novel attributed graph clustering network, namely Self-supervised Contrastive Attributed Graph Clustering (SCAGC).
In SCAGC, a self-supervised contrastive loss is designed for node representation learning by leveraging inaccurate clustering labels.
For out-of-sample (OOS) nodes, SCAGC can directly calculate their clustering labels.
arXiv Detail & Related papers (2021-10-15T03:25:28Z)
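A hedged sketch of the two points in the SCAGC entry above: a contrastive loss over two augmented graph views that treats nodes sharing the same (possibly inaccurate) clustering label as positives, and direct label assignment for out-of-sample nodes via their nearest cluster centroid. The function names and the centroid-based OOS rule are assumptions for illustration, not the paper's released code.
```python
# Hedged sketch, not SCAGC's implementation.
import torch
import torch.nn.functional as F

def cluster_contrastive_loss(z1, z2, cluster_ids, temperature=0.5):
    """z1, z2: (N, D) node embeddings from two graph views; cluster_ids: (N,) pseudo-labels."""
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    sim = z1 @ z2.t() / temperature                                        # cross-view similarities
    pos = cluster_ids.unsqueeze(0).eq(cluster_ids.unsqueeze(1)).float()    # same-cluster pairs as positives
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -(log_prob * pos).sum(dim=1).div(pos.sum(dim=1).clamp(min=1)).mean()

def assign_out_of_sample(z_new, centroids):
    """Directly compute clustering labels for OOS nodes via the nearest cluster centroid."""
    z_new = F.normalize(z_new, dim=-1)
    return (z_new @ F.normalize(centroids, dim=-1).t()).argmax(dim=-1)
```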
- You Never Cluster Alone [150.94921340034688]
We extend the mainstream contrastive learning paradigm to a cluster-level scheme, where all the data assigned to the same cluster contribute to a unified representation.
We define a set of categorical variables as clustering assignment confidence, which links the instance-level learning track with the cluster-level one.
By reparametrizing the assignment variables, TCC is trained end-to-end, requiring no alternating steps.
arXiv Detail & Related papers (2021-06-03T14:59:59Z)
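A minimal sketch of the reparametrization step mentioned in the entry above: relaxing the categorical assignment variables with Gumbel-softmax so that cluster-level representations can be built inside a single end-to-end training graph, without alternating steps. This assumes the standard Gumbel-softmax trick and hypothetical function names; the paper's exact formulation may differ.
```python
# Hedged sketch of differentiable, reparametrized cluster assignments.
import torch
import torch.nn.functional as F

def sample_soft_assignment(assignment_logits, tau=0.5):
    """assignment_logits: (B, K) confidence over K clusters; returns differentiable samples."""
    # Gumbel-softmax draws near-one-hot samples that remain fully differentiable,
    # so gradients flow through the assignment instead of requiring an alternating step.
    return F.gumbel_softmax(assignment_logits, tau=tau, hard=False)

def cluster_level_representation(features, soft_assign):
    """Aggregate all samples assigned to the same cluster into one unified representation."""
    weights = soft_assign / (soft_assign.sum(dim=0, keepdim=True) + 1e-9)  # (B, K)
    return weights.t() @ features                                           # (K, D)
```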
- Learning to Cluster Faces via Confidence and Connectivity Estimation [136.5291151775236]
We propose a fully learnable clustering framework without requiring a large number of overlapped subgraphs.
Our method significantly improves clustering accuracy, and thus the performance of recognition models trained on top, while being an order of magnitude more efficient than existing supervised methods.
arXiv Detail & Related papers (2020-04-01T13:39:37Z)
- A Classification-Based Approach to Semi-Supervised Clustering with Pairwise Constraints [5.639904484784126]
We introduce a network framework for semi-supervised clustering (SSC) with pairwise constraints.
In contrast to existing approaches, we decompose SSC into two simpler classification tasks/stages.
The proposed approach, S3C2, is motivated by the observation that binary classification is usually easier than multi-class clustering.
arXiv Detail & Related papers (2020-01-18T20:13:07Z)
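A hedged sketch of the kind of stage the S3C2 entry above suggests: a binary classifier trained on the given must-link/cannot-link constraints to predict whether a pair of samples belongs to the same cluster. The architecture and names are illustrative assumptions, not the paper's design.
```python
# Hedged sketch: pairwise "same cluster?" binary classification from constraints.
import torch
import torch.nn as nn

class PairwiseSameClusterClassifier(nn.Module):
    def __init__(self, feat_dim, hidden_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, 1),
        )

    def forward(self, x_i, x_j):
        # Symmetrize the pair so (i, j) and (j, i) receive the same score.
        logit = self.net(torch.cat([x_i, x_j], dim=-1)) + self.net(torch.cat([x_j, x_i], dim=-1))
        return logit.squeeze(-1)

def constraint_loss(model, x_i, x_j, same_cluster):
    # Must-link pairs carry label 1, cannot-link pairs label 0.
    return nn.functional.binary_cross_entropy_with_logits(model(x_i, x_j), same_cluster.float())
```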
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.