Self-supervised Semi-implicit Graph Variational Auto-encoders with
Masking
- URL: http://arxiv.org/abs/2301.12458v1
- Date: Sun, 29 Jan 2023 15:00:43 GMT
- Title: Self-supervised Semi-implicit Graph Variational Auto-encoders with
Masking
- Authors: Xiang Li, Tiandi Ye, Caihua Shan, Dongsheng Li, Ming Gao
- Abstract summary: We propose the SeeGera model, which is based on the family of self-supervised variational graph auto-encoders (VGAEs).
SeeGera co-embeds both nodes and features in the encoder and reconstructs both links and features in the decoder.
We conduct extensive experiments comparing SeeGera with 9 other state-of-the-art competitors.
- Score: 18.950919307926824
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Generative graph self-supervised learning (SSL) aims to learn node
representations by reconstructing the input graph data. However, most existing
methods focus only on unsupervised learning tasks, and very few have shown
superiority over state-of-the-art graph contrastive learning (GCL) models,
especially on classification tasks. While a very recent model has
been proposed to bridge the gap, its performance on unsupervised learning tasks
is still unknown. In this paper, to comprehensively enhance the performance of
generative graph SSL against other GCL models on both unsupervised and
supervised learning tasks, we propose the SeeGera model, which is based on the
family of self-supervised variational graph auto-encoders (VGAEs). Specifically,
SeeGera adopts the semi-implicit variational inference framework, a
hierarchical variational framework, and mainly focuses on feature
reconstruction and structure/feature masking. On the one hand, SeeGera
co-embeds both nodes and features in the encoder and reconstructs both links
and features in the decoder. Since feature embeddings contain rich semantic
information about the features, they can be combined with node embeddings to provide
fine-grained knowledge for feature reconstruction. On the other hand, SeeGera
adds an additional layer for structure/feature masking to the hierarchical
variational framework, which boosts the model's generalizability. We conduct
extensive experiments comparing SeeGera with 9 other state-of-the-art
competitors. Our results show that SeeGera compares favorably with other
state-of-the-art GCL methods in a variety of unsupervised and supervised
learning tasks.
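
To make the architecture concrete, the following is a minimal, illustrative PyTorch sketch of a masked VGAE that co-embeds nodes and features and reconstructs both links and features. It is a sketch under simplifying assumptions, not the authors' implementation: all names (SeeGeraSketch, feat_mask_rate) are hypothetical, the semi-implicit hierarchical inference is collapsed into a single Gaussian layer, and only feature masking is shown (structure masking would drop entries of the adjacency matrix analogously).

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def normalize_adj(adj):
        # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
        adj = adj + torch.eye(adj.size(0))
        d_inv_sqrt = adj.sum(1).pow(-0.5)
        return d_inv_sqrt.unsqueeze(1) * adj * d_inv_sqrt.unsqueeze(0)

    class SeeGeraSketch(nn.Module):
        # Hypothetical masked VGAE that co-embeds nodes and features.
        def __init__(self, n_feats, hidden, latent):
            super().__init__()
            # Node branch: one GCN-style layer followed by Gaussian heads.
            self.lin = nn.Linear(n_feats, hidden)
            self.node_mu = nn.Linear(hidden, latent)
            self.node_logvar = nn.Linear(hidden, latent)
            # Feature branch: a variational embedding per feature dimension.
            self.feat_mu = nn.Parameter(0.01 * torch.randn(n_feats, latent))
            self.feat_logvar = nn.Parameter(torch.zeros(n_feats, latent))

        @staticmethod
        def reparam(mu, logvar):
            return mu + torch.randn_like(mu) * (0.5 * logvar).exp()

        def forward(self, adj, x, feat_mask_rate=0.3):
            # Feature masking: hide a random subset of feature entries,
            # then ask the decoder to recover the full feature matrix.
            mask = (torch.rand_like(x) < feat_mask_rate).float()
            adj_norm = normalize_adj(adj)
            h = F.relu(adj_norm @ self.lin(x * (1.0 - mask)))
            mu_n, logvar_n = self.node_mu(h), self.node_logvar(h)
            z_n = self.reparam(mu_n, logvar_n)                  # node embeddings
            z_f = self.reparam(self.feat_mu, self.feat_logvar)  # feature embeddings
            # Decoders: links via inner product between node embeddings,
            # features via the node-feature embedding product.
            link_loss = F.binary_cross_entropy_with_logits(z_n @ z_n.t(), adj)
            feat_loss = F.binary_cross_entropy_with_logits(z_n @ z_f.t(), x)
            kl = -0.5 * torch.mean(1 + logvar_n - mu_n.pow(2) - logvar_n.exp())
            return link_loss + feat_loss + kl

    # Toy usage with a random symmetric graph and binary features.
    adj = (torch.rand(5, 5) < 0.4).float()
    adj = ((adj + adj.t()) > 0).float()
    x = (torch.rand(5, 4) < 0.5).float()
    loss = SeeGeraSketch(n_feats=4, hidden=16, latent=8)(adj, x)
    loss.backward()

The feature branch is the point of the co-embedding design: because z_f carries per-feature semantics, the product z_n @ z_f.t() gives the decoder fine-grained signal for feature reconstruction, rather than forcing the node embeddings alone to carry all of it.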
Related papers
- GRE^2-MDCL: Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning [0.0]
Graph representation learning has emerged as a powerful tool for preserving graph topology when mapping nodes to vector representations.
Current graph neural network models face the challenge of requiring extensive labeled data.
We propose GRE^2-MDCL, Graph Representation Embedding Enhanced via Multidimensional Contrastive Learning.
arXiv Detail & Related papers (2024-09-12T03:09:05Z)
- Hi-GMAE: Hierarchical Graph Masked Autoencoders [90.30572554544385]
Hierarchical Graph Masked AutoEncoders (Hi-GMAE) is a novel multi-scale GMAE framework designed to handle the hierarchical structures within graphs.
Our experiments on 15 graph datasets consistently demonstrate that Hi-GMAE outperforms 17 state-of-the-art self-supervised competitors.
arXiv Detail & Related papers (2024-05-17T09:08:37Z)
- Deep Contrastive Graph Learning with Clustering-Oriented Guidance [61.103996105756394]
Graph Convolutional Network (GCN) has exhibited remarkable potential in improving graph-based clustering.
Existing models estimate an initial graph beforehand in order to apply GCN.
The Deep Contrastive Graph Learning (DCGL) model is proposed for general data clustering.
arXiv Detail & Related papers (2024-02-25T07:03:37Z)
- Isomorphic-Consistent Variational Graph Auto-Encoders for Multi-Level Graph Representation Learning [9.039193854524763]
We propose the Isomorphic-Consistent VGAE (IsoC-VGAE) for task-agnostic graph representation learning.
We first devise a decoding scheme to provide a theoretical guarantee of keeping the isomorphic consistency.
We then propose the Inverse Graph Neural Network (Inv-GNN) decoder as its intuitive realization.
arXiv Detail & Related papers (2023-12-09T10:16:53Z)
- Generative and Contrastive Paradigms Are Complementary for Graph Self-Supervised Learning [56.45977379288308]
Masked autoencoder (MAE) learns to reconstruct masked graph edges or node features.
Contrastive Learning (CL) maximizes the similarity between augmented views of the same graph.
We propose graph contrastive masked autoencoder (GCMAE) framework to unify MAE and CL.
arXiv Detail & Related papers (2023-10-24T05:06:06Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph (a minimal sketch of such a contrastive objective follows this list).
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Graph Representation Learning via Contrasting Cluster Assignments [57.87743170674533]
We propose GRCCA, a novel unsupervised graph representation model based on contrasting cluster assignments.
It is designed to make combined use of local and global information by integrating clustering algorithms with contrastive learning.
GRCCA is strongly competitive in most tasks.
arXiv Detail & Related papers (2021-12-15T07:28:58Z)
- AutoGCL: Automated Graph Contrastive Learning via Learnable View Generators [22.59182542071303]
We propose a novel framework named Automated Graph Contrastive Learning (AutoGCL) in this paper.
AutoGCL employs a set of learnable graph view generators orchestrated by an auto augmentation strategy.
Experiments on semi-supervised learning, unsupervised learning, and transfer learning demonstrate the superiority of our framework over the state of the art in graph contrastive learning.
arXiv Detail & Related papers (2021-09-21T15:34:11Z)
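
Several entries above (GCMAE, GraphCoCo, the anchor-graph paper, AutoGCL) share a contrastive objective that pulls two views of the same node together while pushing other nodes apart. As a point of reference, here is a minimal, hypothetical PyTorch sketch of the standard symmetric InfoNCE loss; the individual papers use their own variants, and the function name and temperature default below are assumptions.

    import torch
    import torch.nn.functional as F

    def infonce_loss(z1, z2, temperature=0.5):
        # z1, z2: (n, d) embeddings of the same n nodes under two views,
        # e.g. an anchor graph and a learned graph. Row i of z1 and row i
        # of z2 are positives; every other pairing is a negative.
        z1 = F.normalize(z1, dim=1)
        z2 = F.normalize(z2, dim=1)
        logits = z1 @ z2.t() / temperature   # (n, n) scaled cosine similarities
        labels = torch.arange(z1.size(0))    # positives sit on the diagonal
        return 0.5 * (F.cross_entropy(logits, labels)
                      + F.cross_entropy(logits.t(), labels))

Minimizing this loss over the encoders that produced z1 and z2 is what "maximizing the agreement between views" amounts to in these methods.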
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.