Towards Generalized Multi-stage Clustering: Multi-view Self-distillation
- URL: http://arxiv.org/abs/2310.18890v2
- Date: Sat, 16 Dec 2023 15:24:18 GMT
- Title: Towards Generalized Multi-stage Clustering: Multi-view Self-distillation
- Authors: Jiatai Wang, Zhiwei Xu, Xin Wang, Tao Li
- Abstract summary: Existing multi-stage clustering methods independently learn the salient features from multiple views and then perform the clustering task.
This paper proposes a novel multi-stage deep MVC framework where multi-view self-distillation (DistilMVC) is introduced to distill dark knowledge of label distribution.
- Score: 10.368796552760571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Existing multi-stage clustering methods independently learn the salient
features from multiple views and then perform the clustering task.
Particularly, multi-view clustering (MVC) has attracted a lot of attention in
multi-view or multi-modal scenarios. MVC aims at exploring common semantics and
pseudo-labels from multiple views and clustering in a self-supervised manner.
However, limited by noisy data and inadequate feature learning, such a
clustering paradigm generates overconfident pseudo-labels that misguide the
model into producing inaccurate predictions. It is therefore desirable to have
a method that can correct this pseudo-label misguidance in multi-stage
clustering and avoid bias accumulation. To alleviate the effect of overconfident
pseudo-labels and improve the generalization ability of the model, this paper
proposes a novel multi-stage deep MVC framework where multi-view
self-distillation (DistilMVC) is introduced to distill dark knowledge of label
distribution. Specifically, in the feature subspace at different hierarchies,
we explore the common semantics of multiple views through contrastive learning
and obtain pseudo-labels by maximizing the mutual information between views.
Additionally, a teacher network distills the pseudo-labels into dark
knowledge that supervises the student network, improving its predictive
capability and enhancing its robustness. Extensive experiments on
real-world multi-view datasets show that our method has better clustering
performance than state-of-the-art methods.
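To make the method description above concrete, here is a minimal, hypothetical PyTorch sketch of the two ingredients the abstract names: an InfoNCE-style cross-view contrastive loss standing in for the mutual-information objective that yields pseudo-labels, and a KL-based distillation loss in which the temperature-softened label distribution of an EMA teacher (the "dark knowledge") supervises the student. This is not the authors' DistilMVC code; the module names, loss weights, temperatures, and the EMA update rule are all assumptions.
```python
# Hypothetical sketch, not the authors' implementation:
# (1) cross-view contrastive loss pulling together the two views of each sample,
# (2) an EMA teacher whose softened cluster distribution supervises the student.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F


class ViewEncoder(nn.Module):
    """Per-view encoder producing a normalized feature and cluster logits."""
    def __init__(self, in_dim: int, feat_dim: int = 128, n_clusters: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, 256), nn.ReLU(),
                                      nn.Linear(256, feat_dim))
        self.cluster_head = nn.Linear(feat_dim, n_clusters)

    def forward(self, x):
        z = F.normalize(self.backbone(x), dim=1)   # unit-norm feature
        return z, self.cluster_head(z)             # feature, cluster logits


def cross_view_contrastive_loss(z1, z2, tau: float = 0.5):
    """InfoNCE-style loss: the same sample in two views is the positive pair."""
    logits = z1 @ z2.t() / tau                     # (N, N) cosine similarities
    targets = torch.arange(z1.size(0), device=z1.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


def distillation_loss(student_logits, teacher_logits, T: float = 4.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    p_teacher = F.softmax(teacher_logits / T, dim=1)
    log_p_student = F.log_softmax(student_logits / T, dim=1)
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T


@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, m: float = 0.99):
    """Teacher weights track the student as an exponential moving average."""
    for pt, ps in zip(teacher.parameters(), student.parameters()):
        pt.mul_(m).add_(ps, alpha=1 - m)


def train_step(student, teacher, optimizer, x_v1, x_v2, alpha: float = 1.0):
    """One illustrative step on a two-view batch (x_v1, x_v2)."""
    z1, logits1 = student(x_v1)
    z2, logits2 = student(x_v2)
    with torch.no_grad():                          # teacher is not back-propagated
        _, t_logits1 = teacher(x_v1)
        _, t_logits2 = teacher(x_v2)
    loss = (cross_view_contrastive_loss(z1, z2)
            + alpha * (distillation_loss(logits1, t_logits1)
                       + distillation_loss(logits2, t_logits2)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    ema_update(teacher, student)
    return loss.item()


if __name__ == "__main__":
    student = ViewEncoder(in_dim=64)
    teacher = copy.deepcopy(student)
    for p in teacher.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(student.parameters(), lr=1e-3)
    x_v1, x_v2 = torch.randn(32, 64), torch.randn(32, 64)
    print(train_step(student, teacher, opt, x_v1, x_v2))
```
In the paper's setting the contrastive objective is applied in feature subspaces at different hierarchies and the pseudo-labels come from maximizing mutual information between views; the sketch collapses this to a single feature level for brevity.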
Related papers
- CDIMC-net: Cognitive Deep Incomplete Multi-view Clustering Network [53.72046586512026]
We propose a novel incomplete multi-view clustering network, called Cognitive Deep Incomplete Multi-view Clustering Network (CDIMC-net).
It captures the high-level features and local structure of each view by incorporating view-specific deep encoders and a graph embedding strategy into a single framework.
Inspired by human cognition, i.e., learning from easy to hard, it introduces a self-paced strategy to select the most confident samples for model training.
arXiv Detail & Related papers (2024-03-28T15:45:03Z) - Self Supervised Correlation-based Permutations for Multi-View Clustering [7.972599673048582]
We propose an end-to-end deep learning-based MVC framework for general data.
Our approach involves learning meaningful fused data representations with a novel permutation-based canonical correlation objective.
We demonstrate the effectiveness of our model using ten MVC benchmark datasets.
arXiv Detail & Related papers (2024-02-26T08:08:30Z) - Incomplete Contrastive Multi-View Clustering with High-Confidence
Guiding [7.305817202715752]
We propose a novel Incomplete Contrastive Multi-View Clustering method with high-confidence guiding (ICMVC).
First, we propose a multi-view consistency relation transfer scheme combined with a graph convolutional network to tackle the missing-values problem.
Second, instance-level attention fusion and high-confidence guiding are proposed to exploit complementary information.
arXiv Detail & Related papers (2023-12-14T07:28:41Z) - DealMVC: Dual Contrastive Calibration for Multi-view Clustering [78.54355167448614]
We propose a novel Dual contrastive calibration network for Multi-View Clustering (DealMVC).
We first design a fusion mechanism to obtain a global cross-view feature. Then, a global contrastive calibration loss is proposed by aligning the view-feature similarity graph with the high-confidence pseudo-label graph (a generic sketch of this graph-alignment idea follows the list below).
During training, the fused cross-view feature is jointly optimized at both local and global levels.
arXiv Detail & Related papers (2023-08-17T14:14:28Z) - Multi-View Class Incremental Learning [57.14644913531313]
Multi-view learning (MVL) has gained great success in integrating information from multiple perspectives of a dataset to improve downstream task performance.
This paper investigates a novel paradigm called multi-view class incremental learning (MVCIL), where a single model incrementally classifies new classes from a continual stream of views.
arXiv Detail & Related papers (2023-06-16T08:13:41Z) - Reliable Representations Learning for Incomplete Multi-View Partial Multi-Label Classification [78.15629210659516]
In this paper, we propose an incomplete multi-view partial multi-label classification network named RANK.
We move beyond the fixed view-level weights inherent in existing methods and propose a quality-aware sub-network that dynamically assigns quality scores to each view of each sample.
Our model is not only able to handle complete multi-view multi-label datasets, but also works on datasets with missing instances and labels.
arXiv Detail & Related papers (2023-03-30T03:09:25Z) - DICNet: Deep Instance-Level Contrastive Network for Double Incomplete
Multi-View Multi-Label Classification [20.892833511657166]
Multi-view multi-label data in the real world is commonly incomplete due to the uncertain factors of data collection and manual annotation.
We propose a deep instance-level contrastive network, namely DICNet, to deal with the double incomplete multi-view multi-label classification problem.
Our DICNet is adept at capturing consistent discriminative representations of multi-view multi-label data while avoiding the negative effects of missing views and missing labels.
arXiv Detail & Related papers (2023-03-15T04:24:01Z) - Self-supervised Discriminative Feature Learning for Multi-view
Clustering [12.725701189049403]
We propose self-supervised discriminative feature learning for multi-view clustering (SDMVC).
Concretely, deep autoencoders are applied to learn embedded features for each view independently.
Experiments on various types of multi-view datasets show that SDMVC achieves state-of-the-art performance.
arXiv Detail & Related papers (2021-03-28T07:18:39Z) - Unsupervised Person Re-Identification with Multi-Label Learning Guided
Self-Paced Clustering [48.31017226618255]
Unsupervised person re-identification (Re-ID) has drawn increasing research attention recently.
In this paper, we address unsupervised person Re-ID with a conceptually novel yet simple framework, termed Multi-label Learning guided self-paced Clustering (MLC).
MLC mainly learns discriminative features with three crucial modules, namely a multi-scale network, a multi-label learning module, and a self-paced clustering module.
arXiv Detail & Related papers (2021-03-08T07:30:13Z) - Unsupervised Multi-view Clustering by Squeezing Hybrid Knowledge from
Cross View and Each View [68.88732535086338]
This paper proposes a new multi-view clustering method, low-rank subspace multi-view clustering based on adaptive graph regularization.
Experimental results for five widely used multi-view benchmarks show that our proposed algorithm surpasses other state-of-the-art methods by a clear margin.
arXiv Detail & Related papers (2020-08-23T08:25:06Z)
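As referenced in the DealMVC entry above, the following is a generic, hypothetical sketch of calibrating a view-feature similarity graph against a high-confidence pseudo-label graph. It is not the authors' implementation; the confidence threshold, the sigmoid similarity, and the binary cross-entropy form of the alignment loss are all assumptions.
```python
# Hypothetical sketch of graph-alignment calibration between fused features and
# high-confidence pseudo-labels; thresholds and loss form are assumptions.
import torch
import torch.nn.functional as F


def pseudo_label_graph(logits: torch.Tensor, conf_threshold: float = 0.9):
    """Binary graph: edge (i, j) = 1 iff both samples are confidently assigned
    to the same pseudo-cluster, 0 otherwise."""
    probs = F.softmax(logits, dim=1)
    conf, labels = probs.max(dim=1)
    same_label = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    confident = (conf.unsqueeze(0) >= conf_threshold) & (conf.unsqueeze(1) >= conf_threshold)
    return same_label * confident.float()


def calibration_loss(features: torch.Tensor, logits: torch.Tensor, tau: float = 0.5):
    """Pull the cosine-similarity graph of fused features toward the pseudo-label graph."""
    z = F.normalize(features, dim=1)
    sim = torch.sigmoid(z @ z.t() / tau)          # soft feature-similarity graph in (0, 1)
    target = pseudo_label_graph(logits)           # binary high-confidence label graph
    return F.binary_cross_entropy(sim, target)


if __name__ == "__main__":
    feats = torch.randn(16, 128, requires_grad=True)
    logits = torch.randn(16, 10)
    loss = calibration_loss(feats, logits)
    loss.backward()
    print(loss.item())
```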