Variational Graph Generator for Multi-View Graph Clustering
- URL: http://arxiv.org/abs/2210.07011v1
- Date: Thu, 13 Oct 2022 13:19:51 GMT
- Title: Variational Graph Generator for Multi-View Graph Clustering
- Authors: Jianpeng Chen, Yawen Ling, Jie Xu, Yazhou Ren, Shudong Huang, Xiaorong
Pu, Lifang He
- Abstract summary: We propose Variational Graph Generator for Multi-View Graph Clustering (VGMGC)
A novel variational graph generator is proposed to infer a reliable variational consensus graph based on an a priori assumption over multiple graphs.
A simple yet effective graph encoder in conjunction with the multi-view clustering objective is presented to learn the desired graph embeddings for clustering.
- Score: 13.721803208437755
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multi-view graph clustering (MGC) methods are increasingly being studied due
to the rise of multi-view data with graph structural information. The
critical point of MGC is to better utilize the view-specific and view-common
information in features and graphs of multiple views. However, existing works
share an inherent limitation: they are unable to concurrently utilize the
consensus graph information across multiple graphs and the view-specific
feature information. To address this issue, we propose Variational Graph
Generator for Multi-View Graph Clustering (VGMGC). Specifically, a novel
variational graph generator is proposed to infer a reliable variational
consensus graph based on an a priori assumption over multiple graphs. Then a
simple yet effective graph encoder in conjunction with the multi-view
clustering objective is presented to learn the desired graph embeddings for
clustering, which embeds the consensus and view-specific graphs together with
features. Finally, theoretical results illustrate the rationality of VGMGC by
analyzing the uncertainty of the inferred consensus graph with the information
bottleneck principle. Extensive experiments demonstrate the superior
performance of our VGMGC over state-of-the-art methods.
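To make the described two-stage design concrete, here is a minimal, hedged PyTorch sketch of a VGMGC-style pipeline: a variational generator that fuses the view graphs into edge probabilities and samples a differentiable consensus graph, followed by a simple propagation-based encoder whose output embeddings would feed a clustering objective. The per-view weighting prior, the relaxed-Bernoulli edge model, and all layer sizes are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): variational consensus-graph generation
# plus a simple graph encoder, as a rough illustration of a VGMGC-style pipeline.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalGraphGenerator(nn.Module):
    """Infers edge probabilities for a consensus graph from multiple view graphs."""
    def __init__(self, num_views):
        super().__init__()
        # Learnable per-view weights stand in for the prior belief over views (assumption).
        self.view_logits = nn.Parameter(torch.zeros(num_views))

    def forward(self, adjs, temperature=0.5):
        # adjs: list of dense (N, N) adjacency matrices with entries in [0, 1].
        w = torch.softmax(self.view_logits, dim=0)
        probs = sum(w_i * a for w_i, a in zip(w, adjs)).clamp(1e-6, 1 - 1e-6)
        if self.training:
            # Relaxed Bernoulli (binary concrete) sampling keeps the graph differentiable.
            logits = torch.log(probs) - torch.log1p(-probs)
            u = torch.rand_like(probs)
            noise = torch.log(u) - torch.log1p(-u)
            consensus = torch.sigmoid((logits + noise) / temperature)
        else:
            consensus = probs
        return consensus, probs

class SimpleGraphEncoder(nn.Module):
    """Propagates features over a (consensus or view-specific) graph."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        adj = adj + torch.eye(adj.size(0), device=adj.device)      # self-loops
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin((adj / deg) @ x))                   # row-normalised propagation

# Toy usage with two random symmetric views.
N, D, H = 50, 16, 8
adjs = [((a + a.t()) > 0).float() for a in ((torch.rand(N, N) > 0.9).float() for _ in range(2))]
x = torch.randn(N, D)
gen, enc = VariationalGraphGenerator(num_views=2), SimpleGraphEncoder(D, H)
consensus, edge_probs = gen(adjs)
z = enc(x, consensus)          # embeddings that a clustering objective would consume
print(z.shape)
```

In the actual method the generator and encoder are trained jointly against the multi-view clustering objective; this sketch only shows a plausible forward pass.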
Related papers
- Dual-Optimized Adaptive Graph Reconstruction for Multi-View Graph Clustering [19.419832637206138]
We propose a novel multi-view graph clustering method based on dual-optimized adaptive graph reconstruction, named DOAGC.
It mainly aims to reconstruct the graph structure adapted to traditional GNNs to deal with heterophilous graph issues while maintaining the advantages of traditional GNNs.
arXiv Detail & Related papers (2024-10-30T12:50:21Z)
- InstructG2I: Synthesizing Images from Multimodal Attributed Graphs [50.852150521561676]
We propose a graph context-conditioned diffusion model called InstructG2I.
InstructG2I first exploits the graph structure and multimodal information to conduct informative neighbor sampling.
A Graph-QFormer encoder adaptively encodes the graph nodes into an auxiliary set of graph prompts to guide the denoising process.
arXiv Detail & Related papers (2024-10-09T17:56:15Z)
- Multiview Graph Learning with Consensus Graph [24.983233822595274]
Graph topology inference is a significant task in many application domains.
Many modern datasets are heterogeneous or mixed and involve multiple related graphs, i.e., multiview graphs.
We propose an alternative method based on consensus regularization, where views are ensured to be similar.
It is also employed to infer the functional brain connectivity networks of multiple subjects from their electroencephalogram (EEG) recordings.
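A minimal sketch of the consensus-regularization idea summarized above, under the assumption of a simple quadratic objective in which each view graph is pulled toward a shared consensus; the paper's exact formulation and solver may differ.

```python
# Hedged sketch: generic consensus-regularised multiview graph learning.
# S_list holds per-view similarity matrices; the W_v are learned view graphs and C
# is the shared consensus (assumed objective: ||W_v - S_v||^2 + lam * ||W_v - C||^2).
import numpy as np

def consensus_graph_learning(S_list, lam=1.0, iters=50):
    C = np.mean(S_list, axis=0)
    for _ in range(iters):
        # Element-wise closed-form update for each view graph.
        W_list = [(S + lam * C) / (1.0 + lam) for S in S_list]
        # Under this objective the consensus is the mean of the view graphs.
        C = np.mean(W_list, axis=0)
    return W_list, C

# Toy usage with two random symmetric similarity matrices.
rng = np.random.default_rng(0)
S1, S2 = rng.random((20, 20)), rng.random((20, 20))
S1, S2 = (S1 + S1.T) / 2, (S2 + S2.T) / 2
W_views, C = consensus_graph_learning([S1, S2], lam=0.5)
print(C.shape)
```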
arXiv Detail & Related papers (2024-01-24T19:35:54Z)
- MGNet: Learning Correspondences via Multiple Graphs [78.0117352211091]
Learning correspondences aims to find correct correspondences from the initial correspondence set with an uneven correspondence distribution and a low inlier rate.
Recent advances usually use graph neural networks (GNNs) to build a single type of graph or stack local graphs into the global one to complete the task.
We propose MGNet to effectively combine multiple complementary graphs.
arXiv Detail & Related papers (2024-01-10T07:58:44Z)
- Graph Mixture of Experts: Learning on Large-Scale Graphs with Explicit Diversity Modeling [60.0185734837814]
Graph neural networks (GNNs) have found extensive applications in learning from graph data.
To bolster the generalization capacity of GNNs, it has become customary to enrich training graph structures with graph augmentation techniques.
This study introduces the concept of Mixture-of-Experts (MoE) to GNNs, with the aim of augmenting their capacity to adapt to a diverse range of training graph structures.
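As a rough illustration of the Mixture-of-Experts idea summarized above, the hedged sketch below gates each node over a few propagation "experts" of different depths; the gating scheme, expert design, and sizes are assumptions, not the paper's architecture.

```python
# Hedged sketch: a per-node Mixture-of-Experts layer over simple graph propagation experts.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GraphExpert(nn.Module):
    def __init__(self, in_dim, out_dim, hops):
        super().__init__()
        self.lin, self.hops = nn.Linear(in_dim, out_dim), hops

    def forward(self, x, adj_norm):
        h = x
        for _ in range(self.hops):          # experts differ in propagation depth
            h = adj_norm @ h
        return self.lin(h)

class GraphMoELayer(nn.Module):
    def __init__(self, in_dim, out_dim, num_experts=3):
        super().__init__()
        self.experts = nn.ModuleList(
            GraphExpert(in_dim, out_dim, hops=i) for i in range(num_experts))
        self.gate = nn.Linear(in_dim, num_experts)   # per-node gating over experts

    def forward(self, x, adj):
        adj = adj + torch.eye(adj.size(0), device=adj.device)
        adj_norm = adj / adj.sum(dim=1, keepdim=True)
        gates = F.softmax(self.gate(x), dim=1)                          # (N, E)
        outs = torch.stack([e(x, adj_norm) for e in self.experts], 1)   # (N, E, D)
        return (gates.unsqueeze(-1) * outs).sum(dim=1)

# Toy usage.
N, D = 30, 16
x, adj = torch.randn(N, D), (torch.rand(N, N) > 0.85).float()
print(GraphMoELayer(D, 8)(x, adj).shape)
```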
arXiv Detail & Related papers (2023-04-06T01:09:36Z)
- Deep Graph-Level Clustering Using Pseudo-Label-Guided Mutual Information Maximization Network [31.38584638254226]
We study the problem of partitioning a set of graphs into different groups such that the graphs in the same group are similar while the graphs in different groups are dissimilar.
To solve the problem, we propose a novel method called Deep Graph-Level Clustering (DGLC)
Our DGLC achieves graph-level representation learning and graph-level clustering in an end-to-end manner.
arXiv Detail & Related papers (2023-02-05T12:28:08Z)
- CGMN: A Contrastive Graph Matching Network for Self-Supervised Graph Similarity Learning [65.1042892570989]
We propose a contrastive graph matching network (CGMN) for self-supervised graph similarity learning.
We employ two strategies, namely cross-view interaction and cross-graph interaction, for effective node representation learning.
We transform node representations into graph-level representations via pooling operations for graph similarity computation.
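A small, hedged sketch of the pooling step mentioned above: node representations are mean-pooled into graph-level vectors and compared with cosine similarity. The stand-in node encoder and the choice of mean pooling are assumptions.

```python
# Hedged sketch: pooling node representations into graph-level vectors for similarity.
import torch
import torch.nn.functional as F

def encode_nodes(x, adj):
    # One step of row-normalised propagation as a stand-in node encoder.
    adj = adj + torch.eye(adj.size(0))
    return (adj / adj.sum(dim=1, keepdim=True)) @ x

def graph_similarity(x1, adj1, x2, adj2):
    g1 = encode_nodes(x1, adj1).mean(dim=0)      # pool nodes -> graph vector
    g2 = encode_nodes(x2, adj2).mean(dim=0)
    return F.cosine_similarity(g1, g2, dim=0)

# Toy usage with two random graphs of different sizes.
x1, adj1 = torch.randn(12, 8), (torch.rand(12, 12) > 0.7).float()
x2, adj2 = torch.randn(15, 8), (torch.rand(15, 15) > 0.7).float()
print(graph_similarity(x1, adj1, x2, adj2).item())
```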
arXiv Detail & Related papers (2022-05-30T13:20:26Z)
- Multi-view Contrastive Graph Clustering [12.463334005083379]
We propose a generic framework to cluster multi-view attributed graph data.
Inspired by the success of contrastive learning, we propose multi-view contrastive graph clustering (MCGC) method.
Our simple approach outperforms existing deep learning-based methods.
arXiv Detail & Related papers (2021-10-22T15:22:42Z)
- Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representation of graph data by contrasting space views of graphs.
The original graph is a first-order approximation structure that may contain uncertainty or error, while the $k$NN graph generated from the encoded features preserves high-order proximity.
Extensive experiments indicate MLGCL achieves promising results compared with the existing state-of-the-art graph representation learning methods on seven datasets.
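The sketch below illustrates the two space views described above: the original graph versus a $k$NN graph built from encoded features, contrasted with a node-level InfoNCE loss. The encoder, $k$, and temperature are assumptions, not MLGCL's exact configuration.

```python
# Hedged sketch: contrasting an original-graph view with a kNN-graph view.
import torch
import torch.nn.functional as F

def knn_graph(x, k=5):
    # Cosine-similarity kNN adjacency over node features.
    xn = F.normalize(x, dim=1)
    idx = (xn @ xn.t()).topk(k + 1, dim=1).indices[:, 1:]     # drop self
    adj = torch.zeros(x.size(0), x.size(0)).scatter_(1, idx, 1.0)
    return ((adj + adj.t()) > 0).float()

def propagate(x, adj):
    adj = adj + torch.eye(adj.size(0))
    return (adj / adj.sum(dim=1, keepdim=True)) @ x

def contrastive_loss(z1, z2, tau=0.5):
    # Node-level InfoNCE: the same node in the two views is the positive pair.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    return F.cross_entropy(logits, torch.arange(z1.size(0)))

# Toy usage.
N, D = 40, 16
x = torch.randn(N, D)
orig_adj = ((torch.rand(N, N) > 0.9).float() + torch.eye(N)).clamp(max=1)
orig_adj = ((orig_adj + orig_adj.t()) > 0).float()
z_orig, z_knn = propagate(x, orig_adj), propagate(x, knn_graph(x, k=5))
print(contrastive_loss(z_orig, z_knn).item())
```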
arXiv Detail & Related papers (2021-07-06T14:24:43Z)
- Dirichlet Graph Variational Autoencoder [65.94744123832338]
We present Dirichlet Graph Variational Autoencoder (DGVAE) with graph cluster memberships as latent factors.
Motivated by the low pass characteristics in balanced graph cut, we propose a new variant of GNN named Heatts to encode the input graph into cluster memberships.
arXiv Detail & Related papers (2020-10-09T07:35:26Z)
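As a rough illustration of cluster memberships acting as latent factors, the hedged sketch below relaxes the memberships via a softmax over Gaussian logits (a common approximation, not DGVAE's Dirichlet parameterization or its Heatts encoder) and reconstructs the graph with an inner-product decoder.

```python
# Hedged sketch: a VAE-style graph model whose latent factors are soft cluster memberships.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MembershipGraphVAE(nn.Module):
    def __init__(self, in_dim, num_clusters):
        super().__init__()
        self.mu = nn.Linear(in_dim, num_clusters)
        self.logvar = nn.Linear(in_dim, num_clusters)

    def forward(self, x, adj):
        h = (adj / adj.sum(dim=1, keepdim=True).clamp(min=1)) @ x   # one propagation step
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()        # reparameterisation
        memberships = F.softmax(z, dim=1)                           # soft cluster memberships
        adj_recon = torch.sigmoid(memberships @ memberships.t())    # inner-product decoder
        return adj_recon, memberships, mu, logvar

# Toy usage: reconstruction plus a standard Gaussian KL term.
N, D, K = 30, 8, 4
x, adj = torch.randn(N, D), (torch.rand(N, N) > 0.8).float()
adj_recon, memberships, mu, logvar = MembershipGraphVAE(D, K)(x, adj + torch.eye(N))
recon = F.binary_cross_entropy(adj_recon, (adj > 0).float())
kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
print((recon + kl).item())
```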
This list is automatically generated from the titles and abstracts of the papers in this site.