GraphMAE: Self-Supervised Masked Graph Autoencoders
- URL: http://arxiv.org/abs/2205.10803v2
- Date: Tue, 24 May 2022 13:52:48 GMT
- Title: GraphMAE: Self-Supervised Masked Graph Autoencoders
- Authors: Zhenyu Hou, Xiao Liu, Yukuo Cen, Yuxiao Dong, Hongxia Yang, Chunjie
Wang, Jie Tang
- Abstract summary: We present GraphMAE, a masked graph autoencoder that mitigates known issues in generative self-supervised graph learning.
We conduct extensive experiments on 21 public datasets covering three different graph learning tasks.
The results show that GraphMAE, a simple graph autoencoder with careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines.
- Score: 52.06140191214428
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning (SSL) has been extensively explored in recent years.
Particularly, generative SSL has seen emerging success in natural language
processing and other fields, such as the wide adoption of BERT and GPT. Despite
this, contrastive learning--which heavily relies on structural data
augmentation and complicated training strategies--has been the dominant
approach in graph SSL, while the progress of generative SSL on graphs,
especially graph autoencoders (GAEs), has thus far not reached the potential
promised in other fields. In this paper, we identify and examine the issues
that negatively impact the development of GAEs, including their reconstruction
objective, training robustness, and error metric. We present a masked graph
autoencoder GraphMAE that mitigates these issues for generative self-supervised
graph learning. Instead of reconstructing structures, we propose to focus on
feature reconstruction with both a masking strategy and scaled cosine error
that benefit the robust training of GraphMAE. We conduct extensive experiments
on 21 public datasets for three different graph learning tasks. The results
show that GraphMAE, a simple graph autoencoder with careful designs,
consistently outperforms both contrastive and generative state-of-the-art
baselines. This study provides an understanding of
graph autoencoders and demonstrates the potential of generative self-supervised
learning on graphs.
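The two designs named in the abstract, feature masking and the scaled cosine error, are compact enough to sketch. Below is a minimal PyTorch illustration, assuming a learnable [MASK] token, an illustrative mask ratio and scaling exponent gamma, and random tensors standing in for the encoder/decoder; it sketches the idea rather than reproducing the paper's reference implementation.

```python
# Minimal sketch of GraphMAE's two key ingredients as described in the
# abstract: masked feature reconstruction and the scaled cosine error (SCE).
# The mask ratio, gamma, and stand-in decoder output below are illustrative
# assumptions, not the paper's exact configuration.
import torch
import torch.nn.functional as F


def mask_node_features(x, mask_token, mask_ratio=0.5):
    """Replace a random subset of node feature rows with a learnable [MASK] token."""
    num_masked = int(mask_ratio * x.size(0))
    masked_idx = torch.randperm(x.size(0))[:num_masked]
    x_corrupted = x.clone()
    x_corrupted[masked_idx] = mask_token  # broadcast the token across masked rows
    return x_corrupted, masked_idx


def scaled_cosine_error(x, z, gamma=2.0):
    """Mean over masked nodes of (1 - cosine(x_i, z_i)) ** gamma, gamma >= 1."""
    cos = F.cosine_similarity(x, z, dim=-1)
    return ((1.0 - cos).clamp_min(0.0) ** gamma).mean()


if __name__ == "__main__":
    num_nodes, feat_dim = 8, 16
    x = torch.randn(num_nodes, feat_dim)                 # raw node features
    mask_token = torch.nn.Parameter(torch.zeros(feat_dim))
    x_corrupted, masked_idx = mask_node_features(x, mask_token)
    z = torch.randn(num_nodes, feat_dim)                 # stand-in for decoder output
    loss = scaled_cosine_error(x[masked_idx], z[masked_idx])
    print(loss.item())
```

In the full model, a GNN encoder would consume x_corrupted, a decoder would produce z, and the loss computed only on the masked nodes would drive training without any structural augmentation.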
Related papers
- Informative Subgraphs Aware Masked Auto-Encoder in Dynamic Graphs [1.3571543090749625]
We introduce DyGIS, a constrained probabilistic generative model that generates informative subgraphs to guide the evolution of dynamic graphs.
The informative subgraphs identified by DyGIS serve as the input to a dynamic graph masked autoencoder (DGMAE).
arXiv Detail & Related papers (2024-09-14T02:16:00Z)
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- Towards Graph Contrastive Learning: A Survey and Beyond [23.109430624817637]
Self-supervised learning (SSL) on graphs has gained increasing attention and has made significant progress.
SSL enables machine learning models to produce informative representations from unlabeled graph data.
However, Graph Contrastive Learning (GCL) has not yet been thoroughly surveyed in the existing literature.
arXiv Detail & Related papers (2024-05-20T08:19:10Z)
- GraphEdit: Large Language Models for Graph Structure Learning [62.618818029177355]
Graph Structure Learning (GSL) focuses on capturing intrinsic dependencies and interactions among nodes in graph-structured data.
Existing GSL methods heavily depend on explicit graph structural information as supervision signals.
We propose GraphEdit, an approach that leverages large language models (LLMs) to learn complex node relationships in graph-structured data.
arXiv Detail & Related papers (2024-02-23T08:29:42Z)
- GraphGPT: Graph Instruction Tuning for Large Language Models [27.036935149004726]
Graph Neural Networks (GNNs) have evolved to understand graph structures.
To enhance robustness, self-supervised learning (SSL) has become a vital tool for data augmentation.
Our research advances graph model generalization in zero-shot learning environments.
arXiv Detail & Related papers (2023-10-19T06:17:46Z)
- GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner [28.321233121613112]
Masked graph autoencoders (e.g., GraphMAE) have recently produced promising results.
We present GraphMAE2, a masked self-supervised learning framework with enhanced decoding designed to improve masked feature reconstruction.
We show that GraphMAE2 consistently achieves top results on various public datasets.
arXiv Detail & Related papers (2023-04-10T17:25:50Z)
- Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by the data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph; a minimal sketch of such an agreement loss appears after this list.
arXiv Detail & Related papers (2022-01-17T11:57:29Z)
- Graph Self-Supervised Learning: A Survey [73.86209411547183]
Self-supervised learning (SSL) has become a promising and trending learning paradigm for graph data.
We present a timely and comprehensive review of the existing approaches which employ SSL techniques for graph data.
arXiv Detail & Related papers (2021-02-27T03:04:21Z)
- Unsupervised Graph Embedding via Adaptive Graph Learning [85.28555417981063]
Graph autoencoders (GAEs) are powerful tools in representation learning for graph embedding.
In this paper, two novel unsupervised graph embedding methods are proposed: unsupervised graph embedding via adaptive graph learning (BAGE) and unsupervised graph embedding via variational adaptive graph learning (VBAGE).
Experimental studies on several datasets validate our design and demonstrate that our methods outperform baselines by a wide margin in node clustering, node classification, and graph visualization tasks.
arXiv Detail & Related papers (2020-03-10T02:33:14Z)
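Several of the works above rely on contrastive rather than generative objectives; as noted in the unsupervised structure-learning entry, a contrastive loss maximizes agreement between an anchor graph and a learned graph. Below is a minimal sketch of such a node-level agreement loss in the InfoNCE style, where the embedding shapes and temperature are illustrative assumptions rather than any paper's exact setup.

```python
# Minimal InfoNCE-style agreement loss between node embeddings from two graph
# views (e.g., an "anchor graph" and a learned graph). The temperature and
# tensor shapes are illustrative assumptions.
import torch
import torch.nn.functional as F


def contrastive_agreement(h1, h2, tau=0.5):
    """Treat the same node in both views as a positive pair and every other
    node as a negative, then maximize agreement via cross-entropy."""
    z1 = F.normalize(h1, dim=-1)
    z2 = F.normalize(h2, dim=-1)
    logits = z1 @ z2.t() / tau               # (N, N) cosine similarities
    targets = torch.arange(z1.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    h_anchor = torch.randn(32, 64)    # stand-in for anchor-graph embeddings
    h_learned = torch.randn(32, 64)   # stand-in for learned-graph embeddings
    print(contrastive_agreement(h_anchor, h_learned).item())
```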