Embedding Graph Auto-Encoder for Graph Clustering
- URL: http://arxiv.org/abs/2002.08643v2
- Date: Sat, 13 Mar 2021 08:07:29 GMT
- Title: Embedding Graph Auto-Encoder for Graph Clustering
- Authors: Hongyuan Zhang and Rui Zhang and Xuelong Li
- Abstract summary: Graph auto-encoder (GAE) models are built on graph convolutional networks (GCNs), which were originally designed for semi-supervised learning.
We design a GAE-based model for graph clustering, namely the Embedding Graph Auto-Encoder (EGAE), that is consistent with our theoretical analysis.
EGAE consists of one encoder and dual decoders.
- Score: 90.8576971748142
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph clustering, which aims to partition the nodes of a graph into groups in an unsupervised manner, has attracted considerable attention in recent years. To improve representational ability, several graph auto-encoder (GAE) models, built on graph convolutional networks (GCNs) originally designed for semi-supervised learning, have been developed, and they achieve good results compared with traditional clustering methods. However, existing methods either fail to exploit the orthogonality of the representations generated by the GAE, or separate the clustering step from the training of the neural network. We first prove that relaxed k-means obtains an optimal partition in the inner-product space. Guided by this theoretical analysis of relaxed k-means, we design a GAE-based model for graph clustering that is consistent with the theory, namely the Embedding Graph Auto-Encoder (EGAE). Moreover, the learned representations are interpretable, so they can also be used for other tasks. To further induce the network to produce deep features suited to the specific clustering model, relaxed k-means and the GAE are trained simultaneously. Relaxed k-means can therefore be regarded as a decoder that encourages the representations to be linearly constructed from a set of centroid vectors. Accordingly, EGAE consists of one encoder and dual decoders. Extensive experiments demonstrate the superiority of EGAE and support the corresponding theoretical analyses.
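To make the joint objective concrete, below is a minimal PyTorch-style sketch of the idea: one GCN encoder, an inner-product decoder that reconstructs the adjacency matrix, and a relaxed k-means term ||Z - F F^T Z||_F^2 with F^T F = I acting as a second, parameter-free decoder. The layer sizes, the loss weight alpha, and the alternating update are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of a GAE trained jointly with a relaxed k-means term, in the
# spirit of EGAE.  Layer sizes, the weight `alpha`, and the alternating update
# are illustrative assumptions, not the authors' exact configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GCNLayer(nn.Module):
    """One graph convolution: A_hat @ X @ W, with A_hat the normalized adjacency."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, a_hat, x):
        return a_hat @ self.lin(x)


class EGAESketch(nn.Module):
    """Encoder + dual decoders: decoder 1 rebuilds the adjacency from inner
    products, decoder 2 is the parameter-free relaxed k-means objective."""
    def __init__(self, in_dim, hid_dim, emb_dim):
        super().__init__()
        self.enc1 = GCNLayer(in_dim, hid_dim)
        self.enc2 = GCNLayer(hid_dim, emb_dim)

    def encode(self, a_hat, x):
        z = self.enc2(a_hat, torch.relu(self.enc1(a_hat, x)))
        return F.normalize(z, dim=1)  # unit-norm rows, so Z Z^T holds inner products

    def loss(self, a_hat, x, adj_label, indicator, alpha=0.1):
        z = self.encode(a_hat, x)
        recon = torch.sigmoid(z @ z.t())                                # decoder 1
        rec_loss = F.binary_cross_entropy(recon, adj_label)
        km_loss = (z - indicator @ (indicator.t() @ z)).pow(2).sum()    # decoder 2
        return rec_loss + alpha * km_loss, z


def update_indicator(z, n_clusters):
    """Closed-form relaxed k-means step: the optimal F consists of the leading
    eigenvectors of the Gram matrix Z Z^T (obtained here via the SVD of Z)."""
    u, _, _ = torch.linalg.svd(z.detach(), full_matrices=False)
    return u[:, :n_clusters]
```

Training would then alternate between recomputing the indicator F in closed form and taking gradient steps on the combined loss; a final discrete clustering can be obtained, for example, by running k-means on the rows of F or on the learned embeddings.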
Related papers
- Deep Contrastive Graph Learning with Clustering-Oriented Guidance [61.103996105756394]
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering.
Existing models estimate an initial graph beforehand in order to apply GCN.
The Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering.
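As a concrete illustration of what "estimating an initial graph beforehand" typically means for non-graph data, the snippet below builds a symmetrically normalized k-NN graph from raw features; this is a common baseline construction, not necessarily the one used by DCGL, and the neighbourhood size k is an arbitrary choice here.

```python
# Generic k-NN graph construction so a GCN can be applied to non-graph data;
# a common baseline, not necessarily the graph estimation used by DCGL.
import numpy as np
from sklearn.neighbors import kneighbors_graph

def initial_graph(x, k=10):
    a = kneighbors_graph(x, n_neighbors=k, mode="connectivity", include_self=False)
    a = a.maximum(a.T)                      # symmetrize the neighbourhood relation
    a = a.toarray() + np.eye(x.shape[0])    # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a.sum(axis=1)))
    return d_inv_sqrt @ a @ d_inv_sqrt      # D^{-1/2} A D^{-1/2}
```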
arXiv Detail & Related papers (2024-02-25T07:03:37Z)
- Large Scale Training of Graph Neural Networks for Optimal Markov-Chain Partitioning Using the Kemeny Constant [1.8606770727950463]
We propose several GNN-based architectures to tackle the graph partitioning problem for Markov Chains described as kinetic networks.
This approach aims to minimize how much a proposed partitioning changes the Kemeny constant.
We show how simple GraphSAGE-based GNNs with linear layers can outperform much larger and more expressive attention-based models in this context.
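For background (a standard definition, not taken from this paper's summary): for an ergodic Markov chain with transition matrix P, stationary distribution π, and mean first-passage times m_ij (with m_ii = 0), the Kemeny constant is independent of the starting state i:

```latex
% Kemeny constant: independent of the starting state i, and expressible
% through the non-unit eigenvalues \lambda_2, \dots, \lambda_n of P.
K \;=\; \sum_{j} \pi_j \, m_{ij} \;=\; \sum_{k=2}^{n} \frac{1}{1 - \lambda_k}
```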
arXiv Detail & Related papers (2023-12-22T17:19:50Z)
- Neural Tangent Kernels Motivate Graph Neural Networks with Cross-Covariance Graphs [94.44374472696272]
We investigate neural tangent kernels (NTKs) and alignment in the context of graph neural networks (GNNs).
Our results establish theoretical guarantees on the optimality of the alignment for a two-layer GNN.
These guarantees are characterized by the graph shift operator being a function of the cross-covariance between the input and the output data.
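For reference, the (empirical) cross-covariance between inputs and outputs referred to above is the standard quantity below; the precise way the paper turns it into a graph shift operator is not reproduced here.

```latex
% Standard cross-covariance between inputs x_i \in \mathbb{R}^d and outputs y_i \in \mathbb{R}^p:
C_{xy} \;=\; \mathbb{E}\big[(x - \mu_x)(y - \mu_y)^{\top}\big]
       \;\approx\; \frac{1}{n}\sum_{i=1}^{n} (x_i - \bar{x})(y_i - \bar{y})^{\top}
```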
arXiv Detail & Related papers (2023-10-16T19:54:21Z)
- Graph Neural Network-Inspired Kernels for Gaussian Processes in Semi-Supervised Learning [4.644263115284322]
Graph neural networks (GNNs) have recently emerged as a promising class of models for graph-structured data in semi-supervised learning.
We introduce the graph-based inductive bias of GNNs into Gaussian processes (GPs) to improve their predictive performance on graph-structured data.
We show that these graph-based kernels lead to competitive classification and regression performance, as well as advantages in time, compared with the respective GNNs.
arXiv Detail & Related papers (2023-02-12T01:07:56Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on domain-specific pairwise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes so that they are distributed evenly across the graph.
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Infinitely Wide Graph Convolutional Networks: Semi-supervised Learning via Gaussian Processes [144.6048446370369]
Graph convolutional networks (GCNs) have recently demonstrated promising results on graph-based semi-supervised classification.
We propose a GP regression model via GCNs (GPGC) for graph-based semi-supervised learning.
We conduct extensive experiments to evaluate GPGC and demonstrate that it outperforms other state-of-the-art methods.
arXiv Detail & Related papers (2020-02-26T10:02:32Z)
- Block-Approximated Exponential Random Graphs [77.4792558024487]
An important challenge in the field of exponential random graphs (ERGs) is the fitting of non-trivial ERGs on large graphs.
We propose an approximation framework for such non-trivial ERGs that yields dyadic-independence (i.e., edge-independent) distributions.
Our methods are scalable to sparse graphs consisting of millions of nodes.
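For clarity, a dyadic-independence (edge-independent) distribution factorizes over node pairs; in its generic form (standard background, not the paper's specific approximation):

```latex
% Edge-independent graph distribution: each dyad (i, j) appears independently
% with its own probability p_{ij}.
P(A) \;=\; \prod_{i<j} p_{ij}^{\,A_{ij}} \,(1 - p_{ij})^{\,1 - A_{ij}}
```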
arXiv Detail & Related papers (2020-02-14T11:42:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.