TCGF: A unified tensorized consensus graph framework for multi-view
representation learning
- URL: http://arxiv.org/abs/2309.09987v1
- Date: Thu, 14 Sep 2023 19:29:14 GMT
- Authors: Xiangzhu Meng, Wei Wei, Qiang Liu, Shu Wu, Liang Wang
- Abstract summary: This paper proposes a universal multi-view representation learning framework named the Tensorized Consensus Graph Framework (TCGF).
It first provides a unified framework in which existing multi-view works can exploit the representations of individual views.
It then stacks these representations into a tensor under an alignment basis as a high-order representation, allowing for the smooth propagation of consistency.
- Score: 27.23929515170454
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-view learning techniques have recently gained significant attention in
the machine learning domain for their ability to leverage consistency and
complementary information across multiple views. However, there remains a lack
of sufficient research on generalized multi-view frameworks that unify existing
works into a scalable and robust learning framework, as most current works
focus on specific styles of multi-view models. Additionally, most multi-view
learning works rely heavily on specific-scale scenarios and fail to effectively
comprehend multiple scales holistically. These limitations hinder the effective
fusion of essential information from multiple views, resulting in poor
generalization. To address these limitations, this paper proposes a universal
multi-view representation learning framework named Tensorized Consensus Graph
Framework (TCGF). Specifically, it first provides a unified framework in which
existing multi-view works can exploit the representations of individual views,
aiming to be suitable for arbitrary assumptions and datasets of different
scales. It then stacks these representations into a tensor under an alignment
basis as a high-order representation, allowing for the smooth propagation of
consistency and complementary information across all views. Moreover, TCGF
learns a consensus embedding shared by all views through adaptive
collaboration to uncover the essential structure of the multi-view data,
utilizing a view-consensus grouping effect to regularize the view-consensus
representation.
To further facilitate related research, we provide a specific implementation of
TCGF for large-scale datasets, which can be efficiently solved by applying an
alternating optimization strategy. Experimental results on seven datasets of
different scales indicate the superiority of the proposed TCGF over existing
state-of-the-art multi-view learning methods.
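The abstract's core idea of stacking per-view graphs into a tensor and learning a consensus by adaptively weighting views can be illustrated with a minimal sketch. This is not the paper's actual objective or solver; the Gaussian similarity graph, the inverse-distance weighting rule, and the function names (`view_graph`, `consensus_graph`) are illustrative assumptions standing in for TCGF's alternating optimization.

```python
import numpy as np

def view_graph(X, sigma=1.0):
    # Gaussian similarity graph for one view's feature matrix X (n x d);
    # an illustrative stand-in for the per-view graph construction.
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.exp(-sq / (2 * sigma ** 2))

def consensus_graph(views, n_iter=20):
    # Stack per-view graphs into a third-order tensor G (V x n x n),
    # then alternate two toy updates:
    #   1) consensus S = weighted average of the view graphs
    #   2) view weights proportional to the inverse distance of each
    #      view graph from S (closer views collaborate more strongly)
    G = np.stack([view_graph(X) for X in views])
    V = G.shape[0]
    w = np.full(V, 1.0 / V)                    # uniform initial weights
    for _ in range(n_iter):
        S = np.tensordot(w, G, axes=1)         # weighted consensus graph
        d = np.array([np.linalg.norm(G[v] - S) for v in range(V)])
        w = 1.0 / (d + 1e-12)
        w /= w.sum()                           # keep weights on the simplex
    return S, w

# Two synthetic views of the same 10 samples, with different feature dims.
rng = np.random.default_rng(0)
views = [rng.normal(size=(10, 3)), rng.normal(size=(10, 5))]
S, w = consensus_graph(views)
```

The alternating structure mirrors the abstract's description at a high level: per-view representations are built independently, aligned along the tensor's first mode, and fused under adaptively learned collaboration weights.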
Related papers
- Semi-supervised multi-view concept decomposition [30.699496411869834]
Concept Factorization (CF) has demonstrated superior performance in multi-view clustering tasks.
We propose a novel semi-supervised multi-view concept factorization model, named SMVCF.
We conduct experiments on four diverse datasets to evaluate the performance of SMVCF.
arXiv Detail & Related papers (2023-07-03T10:50:44Z) - Cross-view Graph Contrastive Representation Learning on Partially
Aligned Multi-view Data [52.491074276133325]
Multi-view representation learning has developed rapidly over the past decades and has been applied in many fields.
We propose a new cross-view graph contrastive learning framework, which integrates multi-view information to align data and learn latent representations.
Experiments conducted on several real datasets demonstrate the effectiveness of the proposed method on the clustering and classification tasks.
arXiv Detail & Related papers (2022-11-08T09:19:32Z) - Dual Representation Learning for One-Step Clustering of Multi-View Data [30.131568561100817]
We propose a novel one-step multi-view clustering method by exploiting the dual representation of both the common and specific information of different views.
With this framework, the representation learning and clustering partition mutually benefit each other, which effectively improves the clustering performance.
arXiv Detail & Related papers (2022-08-30T14:20:26Z) - Latent Heterogeneous Graph Network for Incomplete Multi-View Learning [57.49776938934186]
We propose a novel Latent Heterogeneous Graph Network (LHGN) for incomplete multi-view learning.
By learning a unified latent representation, a trade-off between consistency and complementarity among different views is implicitly realized.
To avoid any inconsistencies between training and test phase, a transductive learning technique is applied based on graph learning for classification tasks.
arXiv Detail & Related papers (2022-08-29T15:14:21Z) - A unified framework based on graph consensus term for multi-view
learning [5.168659132277719]
We propose a novel multi-view learning framework, which aims to leverage most existing graph embedding works into a unified formula.
Our method explores the graph structure in each view independently to preserve the diversity property of graph embedding methods.
To this end, the diversity and complementary information among different views could be simultaneously considered.
arXiv Detail & Related papers (2021-05-25T09:22:21Z) - Deep Partial Multi-View Learning [94.39367390062831]
We propose a novel framework termed Cross Partial Multi-View Networks (CPM-Nets)
We first provide a formal definition of completeness and versatility for multi-view representation.
We then theoretically prove the versatility of the learned latent representations.
arXiv Detail & Related papers (2020-11-12T02:29:29Z) - Multi-view Graph Learning by Joint Modeling of Consistency and
Inconsistency [65.76554214664101]
Graph learning has emerged as a promising technique for multi-view clustering with its ability to learn a unified and robust graph from multiple views.
We propose a new multi-view graph learning framework, which for the first time simultaneously models multi-view consistency and multi-view inconsistency in a unified objective function.
Experiments on twelve multi-view datasets have demonstrated the robustness and efficiency of the proposed approach.
arXiv Detail & Related papers (2020-08-24T06:11:29Z) - Embedded Deep Bilinear Interactive Information and Selective Fusion for
Multi-view Learning [70.67092105994598]
We propose a novel multi-view learning framework to make the multi-view classification better aimed at the above-mentioned two aspects.
In particular, we train different deep neural networks to learn various intra-view representations.
Experiments on six publicly available datasets demonstrate the effectiveness of the proposed method.
arXiv Detail & Related papers (2020-07-13T01:13:23Z) - Generative Partial Multi-View Clustering [133.36721417531734]
We propose a generative partial multi-view clustering model, named as GP-MVC, to address the incomplete multi-view problem.
First, multi-view encoder networks are trained to learn common low-dimensional representations, followed by a clustering layer to capture the consistent cluster structure across multiple views.
Second, view-specific generative adversarial networks are developed to generate the missing data of one view conditioning on the shared representation given by other views.
arXiv Detail & Related papers (2020-03-29T17:48:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.