Isomorphic-Consistent Variational Graph Auto-Encoders for Multi-Level
Graph Representation Learning
- URL: http://arxiv.org/abs/2312.05519v1
- Date: Sat, 9 Dec 2023 10:16:53 GMT
- Title: Isomorphic-Consistent Variational Graph Auto-Encoders for Multi-Level
Graph Representation Learning
- Authors: Hanxuan Yang, Qingchao Kong and Wenji Mao
- Abstract summary: We propose the Isomorphic-Consistent VGAE (IsoC-VGAE) for task-agnostic graph representation learning.
We first devise a decoding scheme that provides a theoretical guarantee of preserving isomorphic consistency.
We then propose the Inverse Graph Neural Network (Inv-GNN) decoder as its intuitive realization.
- Score: 9.039193854524763
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph representation learning is a fundamental research theme that can be generalized to benefit multiple downstream tasks, from the node and link levels to the higher graph level. In practice, it is desirable to develop task-agnostic, general graph representation learning methods that are typically trained in an unsupervised manner. Related research reveals that the power of graph representation learning methods depends on whether they can differentiate distinct graph structures as different embeddings and map isomorphic graphs to consistent embeddings (i.e., the isomorphic consistency of graph models). However, for task-agnostic general graph representation learning, existing unsupervised graph models, represented by the variational graph auto-encoders (VGAEs), can only preserve isomorphic consistency within the subgraphs of 1-hop neighborhoods and thus usually perform poorly on the more difficult higher-level tasks. To overcome the limitations of existing unsupervised methods, in this paper we propose the Isomorphic-Consistent VGAE (IsoC-VGAE) for multi-level task-agnostic graph representation learning. We first devise a decoding scheme that provides a theoretical guarantee of preserving isomorphic consistency in the unsupervised setting. We then propose the Inverse Graph Neural Network (Inv-GNN) decoder as its intuitive realization, which trains the model by reconstructing the GNN node embeddings that carry multi-hop neighborhood information, thereby maintaining high-order isomorphic consistency within the VGAE framework. We conduct extensive experiments on representative graph learning tasks at different levels, including node classification, link prediction and graph classification, and the results verify that our proposed model generally outperforms both state-of-the-art unsupervised methods and representative supervised methods.
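The abstract describes, but does not include, the Inv-GNN decoding scheme. Below is a minimal PyTorch sketch of the general idea as stated: a GNN encoder produces variational node embeddings, and the decoder is trained to reconstruct the encoder's multi-hop GNN embeddings rather than the adjacency matrix. All module names, layer sizes, the mean-aggregation GNN, and the loss weighting are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the IsoC-VGAE idea (assumed details, not the paper's code):
# encode nodes variationally with a multi-layer GNN, then decode by
# reconstructing the per-layer (multi-hop) GNN embeddings instead of edges.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GNNLayer(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)

    def forward(self, x, adj):
        # Mean aggregation over 1-hop neighbors; adj is a dense (n, n) matrix.
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin(adj @ x / deg))

class IsoCVGAESketch(nn.Module):
    def __init__(self, d_in, d_hid, n_layers=2):
        super().__init__()
        self.layers = nn.ModuleList(
            [GNNLayer(d_in if i == 0 else d_hid, d_hid) for i in range(n_layers)])
        self.mu = nn.Linear(d_hid, d_hid)
        self.logvar = nn.Linear(d_hid, d_hid)
        # "Inverse GNN" decoder heads: one per encoder layer, trained to
        # reproduce that layer's embedding from the latent code.
        self.heads = nn.ModuleList(
            [nn.Linear(d_hid, d_hid) for _ in range(n_layers)])

    def forward(self, x, adj):
        targets, h = [], x
        for layer in self.layers:
            h = layer(h, adj)
            targets.append(h.detach())   # multi-hop embeddings to reconstruct
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        recon = sum(F.mse_loss(head(z), t) for head, t in zip(self.heads, targets))
        kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
        return z, recon + kl
```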
Related papers
- SPGNN: Recognizing Salient Subgraph Patterns via Enhanced Graph Convolution and Pooling [25.555741218526464]
Graph neural networks (GNNs) have revolutionized the field of machine learning on non-Euclidean data such as graphs and networks.
We propose a concatenation-based graph convolution mechanism that injectively updates node representations.
We also design a novel graph pooling module, called WL-SortPool, to learn important subgraph patterns in a deep-learning manner.
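As a hedged illustration of the concatenation-based convolution described above (not SPGNN's released code): concatenating a node's own state with a sum aggregation of its neighbors before an MLP keeps the two sources separable, which is the usual route to an injective update.

```python
# Hedged sketch of a concatenation-based graph convolution: the node's own
# state and the sum over its neighbors are concatenated rather than mixed,
# which preserves their distinguishability. Illustration only, not SPGNN.
import torch
import torch.nn as nn

class ConcatConv(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.mlp = nn.Sequential(nn.Linear(2 * d_in, d_out), nn.ReLU())

    def forward(self, x, adj):
        neigh = adj @ x                      # sum-aggregate 1-hop neighbors
        return self.mlp(torch.cat([x, neigh], dim=-1))
```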
arXiv Detail & Related papers (2024-04-21T13:11:59Z)
- Deep Contrastive Graph Learning with Clustering-Oriented Guidance [61.103996105756394]
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering.
Existing models must estimate an initial graph beforehand in order to apply a GCN.
The Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering.
arXiv Detail & Related papers (2024-02-25T07:03:37Z)
- Similarity-aware Positive Instance Sampling for Graph Contrastive Pre-training [82.68805025636165]
We propose to select positive graph instances directly from existing graphs in the training set.
Our selection is based on certain domain-specific pair-wise similarity measurements.
In addition, we develop an adaptive node-level pre-training method that dynamically masks nodes so that they are distributed evenly across the graph.
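A minimal sketch of similarity-based positive selection, assuming cosine similarity over mean graph feature vectors as a stand-in for the paper's domain-specific measurements:

```python
# Hedged sketch: choose contrastive positives for an anchor graph by ranking
# the other training graphs under a similarity function; the cosine kernel on
# mean feature vectors below is an illustrative stand-in.
import torch
import torch.nn.functional as F

def topk_positive_indices(anchor_feats, candidate_feats, k=5):
    """anchor_feats: (d,) mean node-feature vector of the anchor graph;
    candidate_feats: (n, d) with one row per candidate training graph."""
    sims = F.cosine_similarity(anchor_feats.unsqueeze(0), candidate_feats, dim=-1)
    return sims.topk(min(k, candidate_feats.size(0))).indices
```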
arXiv Detail & Related papers (2022-06-23T20:12:51Z)
- Towards Graph Self-Supervised Learning with Contrastive Adjusted Zooming [48.99614465020678]
We introduce G-Zoom, a novel self-supervised graph representation learning algorithm based on graph contrastive adjusted zooming.
This mechanism enables G-Zoom to explore and extract self-supervision signals from a graph at multiple scales.
We have conducted extensive experiments on real-world datasets, and the results demonstrate that our proposed model outperforms state-of-the-art methods consistently.
arXiv Detail & Related papers (2021-11-20T22:45:53Z)
- Multi-Level Graph Contrastive Learning [38.022118893733804]
We propose a Multi-Level Graph Contrastive Learning (MLGCL) framework for learning robust representations of graph data by contrasting space views of graphs.
The original graph is a first-order approximation structure that contains uncertainty or error, while the $k$NN graph generated from encoded features preserves high-order proximity.
Extensive experiments indicate that MLGCL achieves promising results compared with existing state-of-the-art graph representation learning methods on seven datasets.
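A minimal sketch of the $k$NN-graph view, assuming cosine similarity over encoded node features; the metric and the symmetrization step are illustrative choices, not necessarily MLGCL's:

```python
# Hedged sketch of building the kNN view: connect each node to its k most
# similar peers in the encoded feature space, then symmetrize. The cosine
# metric is an illustrative choice. Assumes k < n.
import torch
import torch.nn.functional as F

def knn_graph(z, k=10):
    """z: (n, d) encoded node features; returns a dense (n, n) 0/1 adjacency."""
    z = F.normalize(z, dim=-1)
    sim = z @ z.t()
    sim.fill_diagonal_(float("-inf"))        # exclude self-edges
    idx = sim.topk(k, dim=-1).indices
    adj = torch.zeros_like(sim).scatter_(1, idx, 1.0)
    return ((adj + adj.t()) > 0).float()     # symmetrize
```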
arXiv Detail & Related papers (2021-07-06T14:24:43Z)
- A Robust and Generalized Framework for Adversarial Graph Embedding [73.37228022428663]
We propose a robust framework for adversarial graph embedding, named AGE.
AGE generates fake neighbor nodes as enhanced negative samples from the implicit distribution.
Based on this framework, we propose three models to handle three types of graph data.
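One hedged reading of generating fake neighbor nodes from an implicit distribution is a small GAN-style pair: a generator maps noise to fake neighbor embeddings and a discriminator scores real against fake pairs. The architecture and dimensions below are assumptions, not AGE's implementation.

```python
# Hedged sketch of adversarial negative generation: the generator produces
# fake neighbor embeddings from noise; the discriminator scores real vs. fake
# (node, neighbor) pairs. Shapes and modules are illustrative assumptions.
import torch
import torch.nn as nn

d = 64
generator = nn.Sequential(nn.Linear(d, d), nn.ReLU(), nn.Linear(d, d))
discriminator = nn.Sequential(nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1))

def scores(node_emb, real_neigh_emb):
    """node_emb, real_neigh_emb: (n, d). Returns real and fake logits."""
    fake_neigh = generator(torch.randn_like(node_emb))
    real = discriminator(torch.cat([node_emb, real_neigh_emb], dim=-1))
    fake = discriminator(torch.cat([node_emb, fake_neigh], dim=-1))
    return real, fake
```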
arXiv Detail & Related papers (2021-05-22T07:05:48Z)
- Graph Contrastive Learning with Adaptive Augmentation [23.37786673825192]
We propose a novel graph contrastive representation learning method with adaptive augmentation.
Specifically, we design augmentation schemes based on node centrality measures to highlight important connective structures.
Our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts.
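A minimal sketch of centrality-adaptive edge dropping, assuming degree centrality and a linear probability schedule; edges incident to high-centrality nodes are kept with higher probability so that important connective structures survive augmentation:

```python
# Hedged sketch of centrality-adaptive augmentation: drop probability per edge
# decreases with the mean degree centrality of its endpoints. Degree centrality
# and the linear schedule are illustrative assumptions.
import torch

def adaptive_edge_drop(adj, p_min=0.1, p_max=0.7):
    """adj: dense (n, n) 0/1 adjacency without self-loops; returns a view."""
    deg = adj.sum(dim=1)
    cent = deg / deg.max().clamp(min=1.0)          # degree centrality in [0, 1]
    edge_cent = (cent.unsqueeze(0) + cent.unsqueeze(1)) / 2
    drop_p = p_max - (p_max - p_min) * edge_cent   # central edges drop less
    keep = (torch.rand_like(adj) > drop_p).float()
    keep = torch.triu(keep, diagonal=1)            # sample each edge once
    keep = keep + keep.t()                         # keep the view undirected
    return adj * keep
```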
arXiv Detail & Related papers (2020-10-27T15:12:21Z)
- Iterative Graph Self-Distillation [161.04351580382078]
We propose a novel unsupervised graph learning paradigm called Iterative Graph Self-Distillation (IGSD).
IGSD iteratively performs the teacher-student distillation with graph augmentations.
We show that we achieve significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings.
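A minimal sketch of the teacher-student loop, assuming an exponential-moving-average (EMA) teacher and a cosine-distance matching loss; the momentum value and loss form are illustrative, not IGSD's exact recipe:

```python
# Hedged sketch of iterative self-distillation: the teacher encoder tracks the
# student by EMA, and the student matches the teacher's embedding of a
# differently augmented view of the same graph.
# Initialize once with: teacher = copy.deepcopy(student)
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, momentum=0.99):
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.data.mul_(momentum).add_(s.data, alpha=1 - momentum)

def distill_loss(student, teacher, view_a, view_b):
    """student/teacher: graph encoders; view_a/view_b: two augmented views."""
    z_s = F.normalize(student(view_a), dim=-1)
    with torch.no_grad():
        z_t = F.normalize(teacher(view_b), dim=-1)
    return (2 - 2 * (z_s * z_t).sum(dim=-1)).mean()  # cosine-distance loss
```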
arXiv Detail & Related papers (2020-10-23T18:37:06Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
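Contrastive pre-training of this kind is commonly trained with an InfoNCE objective over instance embeddings from two views; a minimal sketch, with the temperature as an assumed hyperparameter:

```python
# Hedged sketch of an InfoNCE objective: row i of `queries` and row i of
# `keys` come from two views of the same instance; every other key in the
# batch serves as a negative. The temperature value is an assumption.
import torch
import torch.nn.functional as F

def info_nce(queries, keys, temperature=0.07):
    q = F.normalize(queries, dim=-1)
    k = F.normalize(keys, dim=-1)
    logits = q @ k.t() / temperature            # (batch, batch) similarities
    labels = torch.arange(q.size(0), device=q.device)
    return F.cross_entropy(logits, labels)      # diagonal entries are positives
```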
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
- Unsupervised Hierarchical Graph Representation Learning by Mutual Information Maximization [8.14036521415919]
We present an unsupervised graph representation learning method, Unsupervised Hierarchical Graph Representation (UHGR).
Our method focuses on maximizing mutual information between "local" and high-level "global" representations.
The results show that the proposed method achieves comparable results to state-of-the-art supervised methods on several benchmarks.
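A minimal, Deep-Graph-Infomax-style sketch of maximizing mutual information between local node embeddings and a global readout; the bilinear discriminator and feature-shuffling corruption are assumptions about the realization:

```python
# Hedged sketch of local-global mutual-information maximization: a bilinear
# discriminator scores (node, graph-summary) pairs, with shuffled node
# embeddings providing the corrupted negative pairs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class LocalGlobalMI(nn.Module):
    def __init__(self, d):
        super().__init__()
        self.bilinear = nn.Bilinear(d, d, 1)

    def forward(self, h):
        """h: (n, d) node embeddings from any encoder."""
        summary = torch.sigmoid(h.mean(dim=0)).expand_as(h)   # global readout
        h_neg = h[torch.randperm(h.size(0))]                  # corrupted nodes
        pos = self.bilinear(h, summary)
        neg = self.bilinear(h_neg, summary)
        logits = torch.cat([pos, neg], dim=0).squeeze(-1)
        labels = torch.cat([torch.ones(h.size(0), device=h.device),
                            torch.zeros(h.size(0), device=h.device)])
        return F.binary_cross_entropy_with_logits(logits, labels)
```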
arXiv Detail & Related papers (2020-03-18T18:21:48Z)