Cross-view Graph Contrastive Representation Learning on Partially
Aligned Multi-view Data
- URL: http://arxiv.org/abs/2211.04906v1
- Date: Tue, 8 Nov 2022 09:19:32 GMT
- Title: Cross-view Graph Contrastive Representation Learning on Partially
Aligned Multi-view Data
- Authors: Yiming Wang, Dongxia Chang, Zhiqiang Fu, Jie Wen, Yao Zhao
- Abstract summary: Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
- Score: 52.491074276133325
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-view representation learning has developed rapidly over the past
decades and has been applied in many fields. However, most previous works
assume that each view is complete and aligned, which leads to an inevitable
performance deterioration on practical problems such as missing or unaligned
views. To address the challenge of representation learning
on partially aligned multi-view data, we propose a new cross-view graph
contrastive learning framework, which integrates multi-view information to
align data and learn latent representations. Compared with current approaches,
the proposed method has the following merits: (1) our model is an end-to-end
framework that simultaneously performs view-specific representation learning
via view-specific autoencoders and cluster-level data alignment via
cross-view graph contrastive learning; (2) because the cross-view graph
contrastive objective is defined over all views, our model readily extends to
three or more modalities/sources.
Extensive experiments conducted on several real datasets demonstrate the
effectiveness of the proposed method on the clustering and classification
tasks.
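To make these merits concrete, below is a minimal, hypothetical PyTorch sketch of the general recipe the abstract describes: one autoencoder per view plus an InfoNCE-style contrastive loss summed over all view pairs, which is what lets the recipe scale to three or more views. The names, network sizes, temperature, and the instance-level contrastive loss are illustrative assumptions, not the authors' implementation; in particular, the paper aligns data at the cluster level on a cross-view graph and handles partially aligned views, which this sketch omits.
```python
# Hypothetical sketch (not the authors' code): view-specific autoencoders
# plus an InfoNCE-style cross-view contrastive loss. All names, sizes, and
# the temperature are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ViewAutoencoder(nn.Module):
    """One autoencoder per view; reconstruction keeps the latent faithful."""
    def __init__(self, in_dim, latent_dim=64):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, in_dim))

    def forward(self, x):
        z = self.encoder(x)
        return z, self.decoder(z)

def contrastive_loss(z_a, z_b, temperature=0.5):
    """Instance-level InfoNCE between two views: sample i in view A is
    pulled toward sample i in view B and pushed away from all others.
    Assumes row-aligned pairs (the paper instead aligns at the cluster
    level, which this sketch does not reproduce)."""
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature            # (N, N) similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)         # diagonal = positives

def joint_loss(views, models, lam=1.0):
    """Reconstruction per view plus contrastive terms over all view pairs;
    summing over pairs is what extends the recipe to 3+ views."""
    zs, recon = [], 0.0
    for x, model in zip(views, models):
        z, x_hat = model(x)
        zs.append(z)
        recon = recon + F.mse_loss(x_hat, x)
    contrast = sum(contrastive_loss(zs[i], zs[j])
                   for i in range(len(zs))
                   for j in range(len(zs)) if i != j)
    return recon + lam * contrast
```
For example, with two views of 100 samples, `joint_loss([torch.randn(100, 20), torch.randn(100, 30)], [ViewAutoencoder(20), ViewAutoencoder(30)])` yields a single scalar that can be backpropagated end-to-end.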
Related papers
- Discriminative Anchor Learning for Efficient Multi-view Clustering [59.11406089896875]
We propose discriminative anchor learning for multi-view clustering (DALMC).
We learn discriminative view-specific feature representations according to the original dataset.
We build anchors from different views based on these representations, which improves the quality of the shared anchor graph.
arXiv Detail & Related papers (2024-09-25T13:11:17Z)
- Multi-view Fuzzy Representation Learning with Rules based Model [25.997490574254172]
Unsupervised multi-view representation learning has been extensively studied for mining multi-view data.
This paper proposes a new multi-view fuzzy representation learning method based on the interpretable Takagi-Sugeno-Kang fuzzy system (MVRL_FS).
arXiv Detail & Related papers (2023-09-20T17:13:15Z)
- Dual Representation Learning for One-Step Clustering of Multi-View Data [30.131568561100817]
We propose a novel one-step multi-view clustering method by exploiting the dual representation of both the common and specific information of different views.
With this framework, representation learning and clustering partition mutually benefit each other, which effectively improves clustering performance.
arXiv Detail & Related papers (2022-08-30T14:20:26Z)
- Latent Heterogeneous Graph Network for Incomplete Multi-View Learning [57.49776938934186]
We propose a novel Latent Heterogeneous Graph Network (LHGN) for incomplete multi-view learning.
By learning a unified latent representation, a trade-off between consistency and complementarity among different views is implicitly realized.
To avoid inconsistencies between the training and test phases, a transductive learning technique based on graph learning is applied to classification tasks.
arXiv Detail & Related papers (2022-08-29T15:14:21Z)
- A unified framework based on graph consensus term for multi-view learning [5.168659132277719]
We propose a novel multi-view learning framework that unifies most existing graph embedding methods in a single formulation.
Our method explores the graph structure in each view independently to preserve the diversity property of graph embedding methods.
In this way, the diversity and complementary information among different views can be considered simultaneously.
arXiv Detail & Related papers (2021-05-25T09:22:21Z)
- Deep Partial Multi-View Learning [94.39367390062831]
We propose a novel framework termed Cross Partial Multi-View Networks (CPM-Nets).
We first provide a formal definition of completeness and versatility for multi-view representation.
We then theoretically prove the versatility of the learned latent representations.
arXiv Detail & Related papers (2020-11-12T02:29:29Z)
- Multi-view Graph Learning by Joint Modeling of Consistency and Inconsistency [65.76554214664101]
Graph learning has emerged as a promising technique for multi-view clustering with its ability to learn a unified and robust graph from multiple views.
We propose a new multi-view graph learning framework, which for the first time simultaneously models multi-view consistency and multi-view inconsistency in a unified objective function.
Experiments on twelve multi-view datasets have demonstrated the robustness and efficiency of the proposed approach.
arXiv Detail & Related papers (2020-08-24T06:11:29Z)
- Generative Partial Multi-View Clustering [133.36721417531734]
We propose a generative partial multi-view clustering model, named GP-MVC, to address the incomplete multi-view problem.
First, multi-view encoder networks are trained to learn common low-dimensional representations, followed by a clustering layer that captures the consistent cluster structure across views.
Second, view-specific generative adversarial networks are developed to generate the missing data of one view conditioned on the shared representation given by the other views, as sketched below.
arXiv Detail & Related papers (2020-03-29T17:48:27Z)
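Following up on the GP-MVC entry above, here is a minimal, hypothetical sketch of the second stage its summary describes: a view-specific generator that imputes a missing view conditioned on the shared representation, trained against a discriminator. Class names, layer sizes, and the noise input are illustrative assumptions, not the GP-MVC implementation.
```python
# Hypothetical sketch of GP-MVC's missing-view generation stage: a
# generator conditioned on the shared representation imputes a missing
# view; a discriminator judges real vs. generated samples. Names and
# sizes are illustrative assumptions, not the paper's architecture.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    """Maps the shared latent code plus noise to a synthetic view."""
    def __init__(self, latent_dim, noise_dim, view_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim + noise_dim, 256), nn.ReLU(),
            nn.Linear(256, view_dim))

    def forward(self, shared_z, noise):
        return self.net(torch.cat([shared_z, noise], dim=1))

class Discriminator(nn.Module):
    """Scores whether a sample of this view is real or generated."""
    def __init__(self, view_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(view_dim, 256), nn.LeakyReLU(0.2),
            nn.Linear(256, 1))

    def forward(self, x):
        return self.net(x)

# Usage: for a sample whose view v is missing, infer shared_z from the
# observed views (encoders omitted here), then impute:
#   g = ConditionalGenerator(latent_dim=64, noise_dim=16, view_dim=30)
#   fake_v = g(shared_z, torch.randn(shared_z.size(0), 16))
# and train g against the discriminator with a standard GAN objective.
```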
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.