Decoupled Contrastive Multi-View Clustering with High-Order Random Walks
- URL: http://arxiv.org/abs/2308.11164v2
- Date: Thu, 18 Jan 2024 13:01:03 GMT
- Title: Decoupled Contrastive Multi-View Clustering with High-Order Random Walks
- Authors: Yiding Lu, Yijie Lin, Mouxing Yang, Dezhong Peng, Peng Hu, Xi Peng
- Abstract summary: We propose a novel robust method, dubbed decoupled contrastive multi-view clustering with high-order random walks (DIVIDE).
In brief, DIVIDE leverages random walks to progressively identify data pairs in a global rather than local manner.
As a result, DIVIDE can identify in-neighborhood negatives and out-of-neighborhood positives.
- Score: 25.03805821839733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, some robust contrastive multi-view clustering (MvC) methods have
been proposed, which construct data pairs from neighborhoods to alleviate the
false negative issue, i.e., some intra-cluster samples are wrongly treated as
negative pairs. Although these methods achieve promising performance, the false negative issue remains far from resolved, and a false positive issue emerges because all in- and out-of-neighborhood samples are simply treated as positive and negative, respectively. To address these issues,
we propose a novel robust method, dubbed decoupled contrastive multi-view
clustering with high-order random walks (DIVIDE). In brief, DIVIDE leverages
random walks to progressively identify data pairs in a global rather than local manner. As a result, DIVIDE can identify in-neighborhood negatives and
out-of-neighborhood positives. Moreover, DIVIDE embraces a novel MvC
architecture to perform inter- and intra-view contrastive learning in different
embedding spaces, thus boosting clustering performance and gaining robustness against missing views. To verify the efficacy of DIVIDE, we carry out extensive experiments on four benchmark datasets, comparing it with nine state-of-the-art MvC methods in both complete and incomplete MvC settings.
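As a rough illustration of the core mechanism, the sketch below propagates affinities through powers of a kNN transition matrix, so a pair can be relabeled positive or negative from a global view of the graph rather than a 1-hop neighborhood. The kNN construction, walk order t, and threshold are illustrative assumptions, not the paper's exact formulation.
```python
# Minimal NumPy sketch of high-order random-walk affinities (assumptions noted above).
import numpy as np

def walk_affinity(z: np.ndarray, k: int = 5, t: int = 3) -> np.ndarray:
    """Return a high-order affinity matrix from embeddings z of shape (n, d)."""
    n = z.shape[0]
    # Cosine similarity, then keep each sample's k nearest neighbors.
    z_norm = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z_norm @ z_norm.T
    adj = np.zeros((n, n))
    nn = np.argsort(-sim, axis=1)[:, 1:k + 1]   # skip self at position 0
    rows = np.repeat(np.arange(n), k)
    adj[rows, nn.ravel()] = 1.0
    adj = np.maximum(adj, adj.T)                # symmetrize the kNN graph
    p = adj / adj.sum(axis=1, keepdims=True)    # row-stochastic transition matrix
    return np.linalg.matrix_power(p, t)         # probability of a t-step walk i -> j

# Pairs with high t-step probability can be treated as positives even if they
# fall outside the 1-hop neighborhood, and vice versa for negatives.
z = np.random.randn(100, 16)
aff = walk_affinity(z)
positives = aff > np.quantile(aff, 0.95)        # illustrative threshold
```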
Related papers
- Task-oriented Embedding Counts: Heuristic Clustering-driven Feature Fine-tuning for Whole Slide Image Classification [1.292108130501585]
We propose a clustering-driven feature fine-tuning method (HC-FT) to enhance the performance of multiple instance learning.
The proposed method is evaluated on both CAMELYON16 and BRACS datasets, achieving an AUC of 97.13% and 85.85%, respectively.
arXiv Detail & Related papers (2024-06-02T08:53:45Z)
- Deep Contrastive Multi-view Clustering under Semantic Feature Guidance [8.055452424643562]
We propose a multi-view clustering framework named Deep Contrastive Multi-view Clustering under Semantic feature guidance (DCMCS).
By minimizing an instance-level contrastive loss weighted by semantic similarity, DCMCS adaptively weakens contrastive learning between false negative pairs.
Experimental results on several public datasets demonstrate that the proposed framework outperforms state-of-the-art methods.
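A hedged PyTorch sketch of the weighting idea: the instance-level InfoNCE loss is re-weighted so that negatives with high semantic similarity (likely false negatives) contribute less. The `1 - sem_sim` weighting is an assumption for illustration, not the paper's formula.
```python
import torch
import torch.nn.functional as F

def weighted_info_nce(z1, z2, sem_sim, tau: float = 0.5):
    """z1, z2: (n, d) embeddings of two views; sem_sim: (n, n) in [0, 1]."""
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.T / tau                   # (n, n) cross-view similarities
    weights = 1.0 - sem_sim                    # weaken likely false negatives (assumed form)
    weights.fill_diagonal_(1.0)                # keep the true positive at full weight
    exp = torch.exp(logits) * weights
    pos = exp.diagonal()
    return -torch.log(pos / exp.sum(dim=1)).mean()

z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
sem_sim = torch.rand(8, 8)                     # e.g. similarity of cluster assignments
loss = weighted_info_nce(z1, z2, sem_sim)
```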
arXiv Detail & Related papers (2024-03-09T02:33:38Z)
- Deep Incomplete Multi-view Clustering with Cross-view Partial Sample and Prototype Alignment [50.82982601256481]
We propose a Cross-view Partial Sample and Prototype Alignment Network (CPSPAN) for Deep Incomplete Multi-view Clustering.
Unlike existing contrastive-based methods, we adopt pair-observed data alignment as 'proxy supervised signals' to guide instance-to-instance correspondence construction.
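A minimal sketch of pair-observed alignment as a proxy supervised signal: only instances observed in both views contribute a loss that pulls their cross-view embeddings together. The boolean masking and MSE objective are illustrative assumptions, not CPSPAN's exact loss.
```python
import torch
import torch.nn.functional as F

def pair_observed_alignment(z1, z2, observed1, observed2):
    """z1, z2: (n, d) view embeddings; observed*: (n,) bool availability masks."""
    both = observed1 & observed2               # pair-observed instances only
    return F.mse_loss(F.normalize(z1[both], dim=1),
                      F.normalize(z2[both], dim=1))

z1, z2 = torch.randn(10, 16), torch.randn(10, 16)
obs1 = torch.rand(10) > 0.2                    # simulate missing views
obs2 = torch.rand(10) > 0.2
loss = pair_observed_alignment(z1, z2, obs1, obs2)
```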
arXiv Detail & Related papers (2023-03-28T02:31:57Z)
- Cluster-guided Contrastive Graph Clustering Network [53.16233290797777]
We propose a Cluster-guided Contrastive deep Graph Clustering network (CCGC).
We construct two views of the graph by designing special Siamese encoders whose weights are not shared between the sibling sub-networks.
To construct semantically meaningful negative sample pairs, we regard the centers of different high-confidence clusters as negative samples.
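A hedged sketch of this negative construction: centers of different high-confidence clusters serve as negatives for one another, avoiding false negatives drawn from within a cluster. The confidence filter and loss form are illustrative assumptions.
```python
import torch
import torch.nn.functional as F

def center_contrast(z, probs, conf_thresh: float = 0.9, tau: float = 0.5):
    """z: (n, d) embeddings; probs: (n, k) soft cluster assignments."""
    conf, labels = probs.max(dim=1)
    keep = conf > conf_thresh                   # high-confidence samples only
    centers = [z[keep][labels[keep] == c].mean(dim=0)
               for c in labels[keep].unique()]
    centers = F.normalize(torch.stack(centers), dim=1)
    sim = centers @ centers.T / tau             # center-vs-center similarities
    # Each center is its own positive; all other centers act as negatives.
    return F.cross_entropy(sim, torch.arange(len(centers)))

z = torch.randn(50, 16)
probs = torch.softmax(torch.randn(50, 4), dim=1)
loss = center_contrast(z, probs, conf_thresh=0.3)  # low threshold for this random demo
```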
arXiv Detail & Related papers (2023-01-03T13:42:38Z)
- Exploring Non-Contrastive Representation Learning for Deep Clustering [23.546602131801205]
Non-contrastive representation learning for deep clustering, termed NCC, is based on BYOL, a representative method without negative examples.
NCC forms an embedding space where all clusters are well-separated and within-cluster examples are compact.
Experimental results on several clustering benchmark datasets including ImageNet-1K demonstrate that NCC outperforms the state-of-the-art methods by a significant margin.
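A minimal sketch of the BYOL-style, negative-free objective NCC builds on: an online prediction is regressed onto a stop-gradient target projection, with no negative pairs. Network shapes are illustrative, and the momentum update of the target network is omitted for brevity.
```python
import torch
import torch.nn.functional as F

def byol_loss(online_pred, target_proj):
    """Negative cosine similarity against a stop-gradient target."""
    p = F.normalize(online_pred, dim=1)
    t = F.normalize(target_proj.detach(), dim=1)   # stop-gradient on target branch
    return 2.0 - 2.0 * (p * t).sum(dim=1).mean()

pred = torch.randn(8, 64, requires_grad=True)
target = torch.randn(8, 64)
loss = byol_loss(pred, target)
```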
arXiv Detail & Related papers (2021-11-23T12:21:53Z)
- Contrastive Attraction and Contrastive Repulsion for Representation Learning [131.72147978462348]
Contrastive learning (CL) methods learn data representations in a self-supervised manner, where the encoder contrasts each positive sample over multiple negative samples.
Recent CL methods have achieved promising results when pretrained on large-scale datasets, such as ImageNet.
We propose a doubly CL strategy that separately compares positive and negative samples within their own groups, and then proceeds with a contrast between positive and negative groups.
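A hedged sketch of the doubly contrastive idea: positives are first weighted among themselves (attraction), negatives among themselves (repulsion), and the loss then contrasts the two weighted groups. The softmax weighting below is an illustrative reading, not the paper's exact loss.
```python
import torch
import torch.nn.functional as F

def ca_cr_loss(anchor, positives, negatives, tau: float = 0.5):
    """anchor: (d,); positives: (p, d); negatives: (q, d)."""
    a = F.normalize(anchor, dim=0)
    pos = F.normalize(positives, dim=1)
    neg = F.normalize(negatives, dim=1)
    pos_sim = pos @ a / tau
    neg_sim = neg @ a / tau
    # Within-group weights: far positives and close negatives count more.
    w_pos = torch.softmax(-pos_sim, dim=0)      # pull hard positives harder
    w_neg = torch.softmax(neg_sim, dim=0)       # push hard negatives harder
    attraction = -(w_pos * pos_sim).sum()
    repulsion = (w_neg * neg_sim).sum()
    return attraction + repulsion

loss = ca_cr_loss(torch.randn(32), torch.randn(4, 32), torch.randn(16, 32))
```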
arXiv Detail & Related papers (2021-05-08T17:25:08Z)
- Solving Inefficiency of Self-supervised Representation Learning [87.30876679780532]
Existing contrastive learning methods suffer from very low learning efficiency.
Under-clustering and over-clustering problems are major obstacles to learning efficiency.
We propose a novel self-supervised learning framework using a median triplet loss.
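A minimal sketch of a median triplet loss: rather than the hardest or a random negative, the negative at the median distance is used, a middle ground between too-easy negatives (under-clustering) and too-hard ones (over-clustering). The margin and cosine distance are assumptions.
```python
import torch
import torch.nn.functional as F

def median_triplet_loss(anchor, positive, negatives, margin: float = 0.2):
    """anchor, positive: (d,); negatives: (q, d)."""
    d_pos = 1 - F.cosine_similarity(anchor, positive, dim=0)
    d_negs = 1 - F.cosine_similarity(anchor.unsqueeze(0), negatives, dim=1)
    d_med = d_negs.median()                     # the median-hardness negative
    return F.relu(d_pos - d_med + margin)

loss = median_triplet_loss(torch.randn(32), torch.randn(32), torch.randn(10, 32))
```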
arXiv Detail & Related papers (2021-04-18T07:47:10Z)
- Doubly Contrastive Deep Clustering [135.7001508427597]
We present a novel Doubly Contrastive Deep Clustering (DCDC) framework, which constructs contrastive loss over both sample and class views.
Specifically, for the sample view, we set the class distribution of the original sample and its augmented version as positive sample pairs.
For the class view, we build the positive and negative pairs from the sample distribution of the class.
In this way, two contrastive losses successfully constrain the clustering results of mini-batch samples in both sample and class level.
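A hedged sketch of the two views: the (n, k) soft assignment matrices of a batch and its augmentation give row-wise positive pairs (sample view) and column-wise positive pairs (class view). The shared InfoNCE helper is an illustrative simplification.
```python
import torch
import torch.nn.functional as F

def info_nce(a, b, tau: float = 0.5):
    """Matching rows of `a` and `b` are positive pairs; other rows are negatives."""
    a, b = F.normalize(a, dim=1), F.normalize(b, dim=1)
    logits = a @ b.T / tau
    return F.cross_entropy(logits, torch.arange(a.shape[0]))

p1 = torch.softmax(torch.randn(16, 5), dim=1)   # assignments of the original batch
p2 = torch.softmax(torch.randn(16, 5), dim=1)   # assignments of the augmented batch
sample_loss = info_nce(p1, p2)                  # contrast class distributions of samples
class_loss = info_nce(p1.T, p2.T)               # contrast sample distributions of classes
loss = sample_loss + class_loss
```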
arXiv Detail & Related papers (2021-03-09T15:15:32Z)
- Contrastive Clustering [57.71729650297379]
We propose Contrastive Clustering (CC), which explicitly performs instance- and cluster-level contrastive learning.
In particular, CC achieves an NMI of 0.705 (0.431) on the CIFAR-10 (CIFAR-100) dataset, an improvement of up to 19% (39%) over the best baseline.
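A hedged sketch of the cluster-level head: columns of the soft assignment matrix are contrasted across augmentations, with an entropy term to discourage collapsing all samples into one cluster. The dimensions and entropy weight are illustrative assumptions.
```python
import torch
import torch.nn.functional as F

def cluster_contrast(c1, c2, tau: float = 1.0, ent_weight: float = 1.0):
    """c1, c2: (n, k) soft assignments of a batch and its augmentation."""
    a, b = F.normalize(c1.T, dim=1), F.normalize(c2.T, dim=1)  # (k, n) cluster vectors
    loss = F.cross_entropy(a @ b.T / tau, torch.arange(a.shape[0]))
    p = c1.mean(dim=0)                          # batch-level cluster frequencies
    entropy = -(p * p.clamp_min(1e-8).log()).sum()
    return loss - ent_weight * entropy          # maximize entropy to avoid collapse

c1 = torch.softmax(torch.randn(32, 10), dim=1)
c2 = torch.softmax(torch.randn(32, 10), dim=1)
loss = cluster_contrast(c1, c2)
```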
arXiv Detail & Related papers (2020-09-21T08:54:40Z)