A Clustering-guided Contrastive Fusion for Multi-view Representation
Learning
- URL: http://arxiv.org/abs/2212.13726v4
- Date: Fri, 4 Aug 2023 13:20:43 GMT
- Title: A Clustering-guided Contrastive Fusion for Multi-view Representation
Learning
- Authors: Guanzhou Ke, Guoqing Chao, Xiaoli Wang, Chenyang Xu, Yongqi Zhu, and
Yang Yu
- Abstract summary: We propose a deep fusion network to fuse view-specific representations into the view-common representation.
We also design an asymmetrical contrastive strategy that aligns the view-common representation with each view-specific representation.
In the incomplete-view scenario, our proposed method resists noise interference better than competing methods.
- Score: 7.630965478083513
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The past two decades have seen increasingly rapid advances in the
field of multi-view representation learning, which extracts useful information
from diverse domains to facilitate the development of multi-view applications.
However, the community faces two challenges: i) how to learn robust
representations from large amounts of unlabeled data that withstand noise or
incomplete-view settings, and ii) how to balance view consistency and
complementarity for various downstream tasks. To this end, we utilize a deep
fusion network to fuse view-specific representations into a view-common
representation, extracting high-level semantics to obtain a robust
representation. In addition, we employ a clustering task to guide the fusion
network and prevent it from collapsing to trivial solutions. To balance
consistency and complementarity, we design an asymmetrical contrastive
strategy that aligns the view-common representation with each view-specific
representation. These modules are incorporated into a unified method known as
CLustering-guided cOntrastiVE fusioN (CLOVEN). We quantitatively and
qualitatively evaluate the proposed method on five datasets, demonstrating
that CLOVEN outperforms 11 competitive multi-view learning methods in
clustering and classification. In the incomplete-view scenario, our proposed
method resists noise interference better than competing methods. Furthermore,
the visualization analysis shows that CLOVEN preserves the intrinsic structure
of the view-specific representations while also improving the compactness of
the view-common representation. Our source code will be available soon at
https://github.com/guanzhou-ke/cloven.
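To make the abstract's moving parts concrete, below is a minimal sketch of how the three components could interact: a fusion network producing the view-common representation, an asymmetrical contrastive term aligning it with each view-specific representation, and a clustering head whose regularizer discourages trivial solutions. This is an illustrative reconstruction in PyTorch, not the authors' code; the stop-gradient reading of "asymmetrical" and the entropy-based clustering guidance are assumptions (the official implementation is at the repository linked above).

```python
# Hypothetical sketch of clustering-guided contrastive fusion.
# Module names, shapes, and loss terms are illustrative assumptions;
# the authors' implementation is at https://github.com/guanzhou-ke/cloven.
import torch
import torch.nn as nn
import torch.nn.functional as F


class FusionNet(nn.Module):
    """Fuse view-specific representations into one view-common representation."""

    def __init__(self, num_views: int, dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_views * dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, view_reps):
        # view_reps: list of (batch, dim) tensors, one per view.
        return self.mlp(torch.cat(view_reps, dim=1))


def info_nce(anchor: torch.Tensor, positive: torch.Tensor, tau: float = 0.5):
    """InfoNCE loss: matching rows of `anchor` and `positive` are positives."""
    a = F.normalize(anchor, dim=1)
    p = F.normalize(positive, dim=1)
    logits = a @ p.t() / tau                      # (batch, batch) similarities
    labels = torch.arange(a.size(0), device=a.device)
    return F.cross_entropy(logits, labels)


class CLOVENSketch(nn.Module):
    def __init__(self, num_views: int = 2, dim: int = 128, clusters: int = 10):
        super().__init__()
        self.fusion = FusionNet(num_views, dim)
        self.cluster_head = nn.Linear(dim, clusters)  # clustering guidance

    def forward(self, view_reps):
        common = self.fusion(view_reps)           # view-common representation
        # Asymmetrical alignment (one reading of the abstract): the common
        # representation is pulled toward each view-specific one, but no
        # gradient flows back through the view side.
        align = sum(info_nce(common, v.detach()) for v in view_reps) / len(view_reps)
        # Clustering guidance against trivial solutions: confident per-sample
        # assignments (low conditional entropy) plus balanced cluster usage
        # (high marginal entropy) -- a common anti-collapse regularizer.
        probs = F.softmax(self.cluster_head(common), dim=1)
        confident = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
        marginal = probs.mean(dim=0)
        balanced = (marginal * marginal.clamp_min(1e-8).log()).sum()
        return align + confident + balanced       # toy combined objective


# Example: two views of a 32-sample batch, each already encoded to 128 dims.
model = CLOVENSketch(num_views=2, dim=128, clusters=10)
loss = model([torch.randn(32, 128), torch.randn(32, 128)])
loss.backward()
```

In this toy setup the view-specific encoders are assumed to run upstream; the sketch only covers the fusion-and-loss stage that the abstract describes.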
Related papers
- Rethinking Multi-view Representation Learning via Distilled Disentangling [34.14711778177439]
Multi-view representation learning aims to derive robust representations that are both view-consistent and view-specific from diverse data sources.
This paper presents an in-depth analysis of existing approaches in this domain, highlighting the redundancy between view-consistent and view-specific representations.
We propose an innovative framework for multi-view representation learning, which incorporates a technique we term 'distilled disentangling'.
arXiv Detail & Related papers (2024-03-16T11:21:24Z)
- Incomplete Contrastive Multi-View Clustering with High-Confidence Guiding [7.305817202715752]
We propose a novel Incomplete Contrastive Multi-View Clustering method with high-confidence guiding (ICMVC).
First, we propose a multi-view consistency relation transfer plus a graph convolutional network to tackle the missing-values problem.
Second, instance-level attention fusion and high-confidence guiding are proposed to exploit the complementary information.
arXiv Detail & Related papers (2023-12-14T07:28:41Z)
- DealMVC: Dual Contrastive Calibration for Multi-view Clustering [78.54355167448614]
We propose a novel Dual contrastive calibration network for Multi-View Clustering (DealMVC).
We first design a fusion mechanism to obtain a global cross-view feature. Then, a global contrastive calibration loss is proposed by aligning the view feature similarity graph and the high-confidence pseudo-label graph.
During the training procedure, the interacted cross-view feature is jointly optimized at both local and global levels.
arXiv Detail & Related papers (2023-08-17T14:14:28Z)
- Disentangling Multi-view Representations Beyond Inductive Bias [32.15900989696017]
We propose a novel multi-view representation disentangling method that ensures both interpretability and generalizability of the resulting representations.
Our experiments on four multi-view datasets demonstrate that our proposed method outperforms 12 comparison methods in terms of clustering and classification performance.
arXiv Detail & Related papers (2023-08-03T09:09:28Z)
- Deep Incomplete Multi-view Clustering with Cross-view Partial Sample and Prototype Alignment [50.82982601256481]
We propose a Cross-view Partial Sample and Prototype Alignment Network (CPSPAN) for Deep Incomplete Multi-view Clustering.
Unlike existing contrastive-based methods, we adopt pair-observed data alignment as 'proxy supervised signals' to guide instance-to-instance correspondence construction.
arXiv Detail & Related papers (2023-03-28T02:31:57Z)
- Cross-view Graph Contrastive Representation Learning on Partially Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z)
- MORI-RAN: Multi-view Robust Representation Learning via Hybrid Contrastive Fusion [4.36488705757229]
Multi-view representation learning is essential for many multi-view tasks, such as clustering and classification.
We propose a hybrid contrastive fusion algorithm to extract robust view-common representation from unlabeled data.
Experimental results demonstrate that the proposed method outperforms 12 competitive multi-view methods on four real-world datasets.
arXiv Detail & Related papers (2022-08-26T09:58:37Z)
- Deep Multi-View Semi-Supervised Clustering with Sample Pairwise Constraints [10.226754903113164]
We propose a novel Deep Multi-view Semi-supervised Clustering (DMSC) method, which jointly optimizes three kinds of losses during network fine-tuning.
We demonstrate that our proposed approach performs better than the state-of-the-art multi-view and single-view competitors.
arXiv Detail & Related papers (2022-06-10T08:51:56Z)
- Deep Partial Multi-View Learning [94.39367390062831]
We propose a novel framework termed Cross Partial Multi-View Networks (CPM-Nets).
We first provide a formal definition of completeness and versatility for multi-view representation.
We then theoretically prove the versatility of the learned latent representations.
arXiv Detail & Related papers (2020-11-12T02:29:29Z)
- Agglomerative Neural Networks for Multi-view Clustering [109.55325971050154]
We propose the agglomerative analysis to approximate the optimal consensus view.
We present Agglomerative Neural Network (ANN) based on Constrained Laplacian Rank to cluster multi-view data directly.
Our evaluations against several state-of-the-art multi-view clustering approaches on four popular datasets show the promising view-consensus analysis ability of ANN.
arXiv Detail & Related papers (2020-05-12T05:39:10Z)
- Generative Partial Multi-View Clustering [133.36721417531734]
We propose a generative partial multi-view clustering model, named GP-MVC, to address the incomplete multi-view problem.
First, multi-view encoder networks are trained to learn common low-dimensional representations, followed by a clustering layer to capture the consistent cluster structure across multiple views.
Second, view-specific generative adversarial networks are developed to generate the missing data of one view conditioning on the shared representation given by other views.
arXiv Detail & Related papers (2020-03-29T17:48:27Z)