Revisiting and Benchmarking Graph Autoencoders: A Contrastive Learning Perspective
- URL: http://arxiv.org/abs/2410.10241v1
- Date: Mon, 14 Oct 2024 07:59:30 GMT
- Title: Revisiting and Benchmarking Graph Autoencoders: A Contrastive Learning Perspective
- Authors: Jintang Li, Ruofan Wu, Yuchang Zhu, Huizhe Zhang, Xinzhou Jin, Guibin Zhang, Zulun Zhu, Zibin Zheng, Liang Chen
- Abstract summary: Graph autoencoders (GAEs) are self-supervised learning models that can learn meaningful representations of graph-structured data.
We introduce lrGAE, a general and powerful GAE framework that leverages contrastive learning principles to learn meaningful representations.
- Score: 28.152560472541143
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph autoencoders (GAEs) are self-supervised learning models that can learn meaningful representations of graph-structured data by reconstructing the input graph from a low-dimensional latent space. Over the past few years, GAEs have gained significant attention in academia and industry. In particular, the recent advent of GAEs with masked autoencoding schemes marks a significant advancement in graph self-supervised learning research. While numerous GAEs have been proposed, the underlying mechanisms of GAEs are not well understood, and a comprehensive benchmark for GAEs is still lacking. In this work, we bridge the gap between GAEs and contrastive learning by establishing conceptual and methodological connections. We revisit the GAEs studied in previous works and demonstrate how contrastive learning principles can be applied to GAEs. Motivated by these insights, we introduce lrGAE (left-right GAE), a general and powerful GAE framework that leverages contrastive learning principles to learn meaningful representations. Our proposed lrGAE not only facilitates a deeper understanding of GAEs but also sets a new benchmark for GAEs across diverse graph-based learning tasks. The source code for lrGAE, including the baselines and all the code for reproducing the results, is publicly available at https://github.com/EdisonLeeeee/lrGAE.
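As a concrete illustration of the reconstruction scheme the abstract describes, below is a minimal sketch of the classic GAE recipe in the style of Kipf and Welling: a two-layer GCN encoder maps nodes into a low-dimensional latent space, and an inner-product decoder reconstructs the adjacency matrix under a binary cross-entropy loss. This is a generic baseline for illustration only, not the lrGAE implementation; all names and hyperparameters here are our assumptions.
```python
# Minimal GAE sketch: GCN encoder + inner-product decoder.
# Illustrative only; not the lrGAE codebase.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GAE(nn.Module):
    def __init__(self, in_dim, hid_dim, lat_dim):
        super().__init__()
        self.w1 = nn.Linear(in_dim, hid_dim, bias=False)
        self.w2 = nn.Linear(hid_dim, lat_dim, bias=False)

    def encode(self, a_norm, x):
        # Two-layer GCN: Z = A_norm @ relu(A_norm @ X @ W1) @ W2
        return a_norm @ self.w2(F.relu(a_norm @ self.w1(x)))

    def decode(self, z):
        # Inner-product decoder: edge logits = Z @ Z^T
        return z @ z.t()

def normalize_adj(a):
    # Symmetric normalization with self-loops: D^{-1/2} (A + I) D^{-1/2}
    a_hat = a + torch.eye(a.size(0))
    d = a_hat.sum(1).pow(-0.5)
    return d.unsqueeze(1) * a_hat * d.unsqueeze(0)

# Toy usage: learn to reconstruct a small random graph.
n, f = 8, 16
a = (torch.rand(n, n) < 0.3).float()
a = torch.triu(a, 1); a = a + a.t()      # symmetric, no self-loops
x, model = torch.randn(n, f), GAE(f, 32, 8)
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    z = model.encode(normalize_adj(a), x)
    loss = F.binary_cross_entropy_with_logits(model.decode(z), a)
    opt.zero_grad(); loss.backward(); opt.step()
```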
Related papers
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly simple approach for textual graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained language model (LM) on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM (a rough sketch of this stage follows).
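A rough sketch of that embedding-extraction step, assuming the PEFT stage has already produced a fine-tuned LM (that stage is omitted here); the checkpoint name and mean pooling are illustrative assumptions, not choices from the paper:
```python
# Sketch of SimTeG's second stage: node features from LM hidden states.
import torch
from transformers import AutoModel, AutoTokenizer

name = "sentence-transformers/all-MiniLM-L6-v2"  # stand-in for the fine-tuned LM
tok, lm = AutoTokenizer.from_pretrained(name), AutoModel.from_pretrained(name)

@torch.no_grad()
def node_embeddings(texts):
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    h = lm(**batch).last_hidden_state              # [B, T, D]
    mask = batch["attention_mask"].unsqueeze(-1)   # [B, T, 1]
    return (h * mask).sum(1) / mask.sum(1)         # mean-pool over real tokens

x = node_embeddings(["text of node 1", "text of node 2"])  # GNN input features
```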
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Graph Contrastive Learning with Generative Adversarial Network [35.564028359355596]
Graph generative adversarial networks (GANs) learn the distribution of views for Graph Contrastive Learning (GCL).
We present GACN, a novel Generative Adversarial Contrastive learning Network for graph representation learning.
We show that GACN is able to generate high-quality augmented views for GCL and is superior to twelve state-of-the-art baseline methods.
arXiv Detail & Related papers (2023-08-01T13:28:24Z)
- New Frontiers in Graph Autoencoders: Joint Community Detection and Link Prediction [27.570978996576503]
Graph autoencoders (GAE) and variational graph autoencoders (VGAE) have emerged as powerful methods for link prediction (LP).
It is still unclear to what extent one can improve community detection (CD) with GAE and VGAE, especially in the absence of node features.
We show that jointly addressing these two tasks with high accuracy is possible.
arXiv Detail & Related papers (2022-11-16T15:26:56Z)
- Let Invariant Rationale Discovery Inspire Graph Contrastive Learning [98.10268114789775]
We argue that a high-performing augmentation should preserve the salient semantics of anchor graphs regarding instance-discrimination.
We propose a new framework, Rationale-aware Graph Contrastive Learning (RGCL).
RGCL uses a rationale generator to reveal salient features about graph instance-discrimination as the rationale, and then creates rationale-aware views for contrastive learning.
arXiv Detail & Related papers (2022-06-16T01:28:40Z)
- GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present GraphMAE, a masked graph autoencoder that mitigates common issues in generative self-supervised graph learning.
We conduct extensive experiments on 21 public datasets for three different graph learning tasks.
The results show that GraphMAE, a simple graph autoencoder with careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines (a minimal sketch of the feature-masking idea follows).
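A minimal sketch of feature-masking autoencoding in the spirit of GraphMAE: replace a random subset of node features with a learnable mask token and reconstruct them, scored here with a scaled cosine error following the paper's idea; the simplified functions and hyperparameters are our assumptions.
```python
# Sketch of feature-masking autoencoding (GraphMAE-style), illustrative only.
import torch
import torch.nn.functional as F

def mask_features(x, mask_token, mask_ratio=0.5):
    # Replace a random subset of node feature rows with the mask token.
    idx = torch.randperm(x.size(0))[: int(mask_ratio * x.size(0))]
    x_masked = x.clone()
    x_masked[idx] = mask_token
    return x_masked, idx

def sce_loss(pred, target, gamma=2.0):
    # Scaled cosine error: mean of (1 - cos(pred, target))^gamma.
    return ((1.0 - F.cosine_similarity(pred, target, dim=-1)) ** gamma).mean()

# Toy usage: in practice pred = decoder(encoder(x_masked, edge_index)).
x = torch.randn(20, 16)
mask_token = torch.nn.Parameter(torch.zeros(16))
x_masked, idx = mask_features(x, mask_token)
pred = torch.randn(20, 16)             # stand-in for the reconstruction
loss = sce_loss(pred[idx], x[idx])     # loss only on masked nodes
```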
arXiv Detail & Related papers (2022-05-22T11:57:08Z)
- What's Behind the Mask: Understanding Masked Graph Modeling for Graph Autoencoders [32.42097625708298]
MaskGAE is a self-supervised learning framework for graph-structured data.
MGM (masked graph modeling) is a principled pretext task: mask a portion of edges and attempt to reconstruct the missing part from the partially visible, unmasked graph structure (see the sketch below).
We establish close connections between GAEs and contrastive learning, showing that MGM significantly improves the self-supervised learning scheme of GAEs.
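A minimal sketch of that edge-masking pretext task; the encoder is left abstract, and all function names are illustrative rather than MaskGAE's actual API.
```python
# Sketch of masked graph modeling (MGM): hide edges, then reconstruct them.
import torch
import torch.nn.functional as F

def mask_edges(edge_index, mask_ratio=0.5):
    # edge_index: [2, E]; returns (visible, masked) edge sets.
    perm = torch.randperm(edge_index.size(1))
    k = int(mask_ratio * edge_index.size(1))
    return edge_index[:, perm[k:]], edge_index[:, perm[:k]]

def edge_logits(z, pairs):
    # Inner-product score for each (src, dst) pair.
    return (z[pairs[0]] * z[pairs[1]]).sum(-1)

def mgm_loss(z, masked, num_nodes):
    # Positives: held-out masked edges; negatives: random node pairs.
    neg = torch.randint(0, num_nodes, masked.shape)
    return (F.binary_cross_entropy_with_logits(
                edge_logits(z, masked), torch.ones(masked.size(1))) +
            F.binary_cross_entropy_with_logits(
                edge_logits(z, neg), torch.zeros(neg.size(1))))

# Toy usage: in practice z = encoder(x, visible_edges) from any GNN.
edges = torch.randint(0, 10, (2, 30))
visible, masked = mask_edges(edges)
z = torch.randn(10, 8)                 # stand-in for encoder output
loss = mgm_loss(z, masked, num_nodes=10)
```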
arXiv Detail & Related papers (2022-05-20T09:45:57Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Graph Learning based Recommender Systems: A Review [111.43249652335555]
Graph Learning based Recommender Systems (GLRS) employ advanced graph learning approaches to model users' preferences and intentions as well as items' characteristics for recommendations.
We provide a systematic review of GLRS, discussing how they extract important knowledge from graph-based representations to improve the accuracy, reliability, and explainability of recommendations.
arXiv Detail & Related papers (2021-05-13T14:50:45Z)
- Deepened Graph Auto-Encoders Help Stabilize and Enhance Link Prediction [11.927046591097623]
Link prediction is a relatively under-studied graph learning task, with current state-of-the-art models based on one- or two-layer shallow graph auto-encoder (GAE) architectures.
In this paper, we focus on addressing a limitation of current methods for link prediction, which can only use shallow GAEs and variational GAEs.
Our proposed methods innovatively incorporate standard auto-encoders (AEs) into the architectures of GAEs, where standard AEs are leveraged to learn essential, low-dimensional representations by seamlessly integrating the adjacency information and node features.
arXiv Detail & Related papers (2021-03-21T14:43:10Z)
- Embedding Graph Auto-Encoder for Graph Clustering [90.8576971748142]
Graph auto-encoder (GAE) models are based on semi-supervised graph convolutional networks (GCNs).
We design a specific GAE-based model for graph clustering that is consistent with the theory, namely the Embedding Graph Auto-Encoder (EGAE).
EGAE consists of one encoder and dual decoders.
arXiv Detail & Related papers (2020-02-20T09:53:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.