GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner
- URL: http://arxiv.org/abs/2304.04779v1
- Date: Mon, 10 Apr 2023 17:25:50 GMT
- Title: GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner
- Authors: Zhenyu Hou, Yufei He, Yukuo Cen, Xiao Liu, Yuxiao Dong, Evgeny
Kharlamov, Jie Tang
- Abstract summary: Masked graph autoencoders (e.g., GraphMAE) have recently produced promising results, but masked feature reconstruction is vulnerable to disturbances in the input features.
We present GraphMAE2, a masked self-supervised learning framework designed to overcome this issue.
We show that GraphMAE2 consistently achieves top results on various public datasets.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph self-supervised learning (SSL), including contrastive and generative
approaches, offers great potential to address the fundamental challenge of
label scarcity in real-world graph data. Among these techniques, masked graph
autoencoders (e.g., GraphMAE), a type of generative method, have recently
produced promising results. The idea is to reconstruct the node features (or
structures) that are randomly masked from the input using an autoencoder
architecture. However, the performance of masked feature reconstruction
naturally relies on the discriminability of the input features and is usually
vulnerable to disturbances in those features. In this paper, we present
GraphMAE2, a masked self-supervised learning framework designed to overcome
this issue. The idea is to impose regularization on feature reconstruction for
graph SSL. Specifically, we design two strategies, multi-view random re-mask
decoding and latent representation prediction, to regularize the feature
reconstruction. Multi-view random re-mask decoding introduces randomness into
reconstruction in the feature space, while latent representation prediction
enforces reconstruction in the embedding space. Extensive experiments show
that GraphMAE2 consistently achieves top results on various public datasets,
including improvements of at least 2.45% over state-of-the-art baselines on
ogbn-Papers100M, which has 111M nodes and 1.6B edges.
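To make the two regularization strategies concrete, here is a minimal PyTorch sketch of a GraphMAE2-style loss. It is an illustration, not the authors' implementation: the toy one-layer GNN, the zero-vector stand-in for a learnable [MASK] token, the cosine reconstruction error, the frozen target encoder used as the latent-prediction target, and all hyperparameter values are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyGNNLayer(nn.Module):
    """One toy message-passing layer: adjacency mixing plus a linear map."""

    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, a_hat, x):
        # a_hat: (n, n) normalized adjacency; x: (n, in_dim) node features.
        return F.relu(self.lin(a_hat @ x))


def graphmae2_style_loss(x, a_hat, encoder, decoder, latent_head,
                         target_encoder, mask_rate=0.5, num_views=2,
                         remask_rate=0.5):
    n = x.size(0)

    # Mask a random subset of node features (a zero vector stands in for a
    # learnable [MASK] token) and encode the corrupted graph.
    masked = torch.rand(n) < mask_rate
    x_in = x.clone()
    x_in[masked] = 0.0
    h = encoder(a_hat, x_in)

    # Multi-view random re-mask decoding: decode several independently
    # re-masked copies of the latent codes and average the feature
    # reconstruction errors on the originally masked nodes.
    rec_loss = 0.0
    for _ in range(num_views):
        h_view = h.clone()
        h_view[torch.rand(n) < remask_rate] = 0.0
        x_rec = decoder(a_hat, h_view)
        rec_loss = rec_loss + (
            1 - F.cosine_similarity(x_rec[masked], x[masked], dim=-1)
        ).mean()
    rec_loss = rec_loss / num_views

    # Latent representation prediction: at masked positions, predict the
    # embeddings a frozen target encoder produces from the unmasked input.
    with torch.no_grad():
        h_target = target_encoder(a_hat, x)
    latent_loss = F.mse_loss(latent_head(h[masked]), h_target[masked])

    return rec_loss + latent_loss


# Toy usage on a random 12-node graph.
n, d, hid = 12, 16, 8
x = torch.randn(n, d)
a_hat = torch.eye(n)  # stand-in for a normalized adjacency matrix
loss = graphmae2_style_loss(
    x, a_hat,
    encoder=ToyGNNLayer(d, hid),
    decoder=ToyGNNLayer(hid, d),
    latent_head=nn.Linear(hid, hid),
    target_encoder=ToyGNNLayer(d, hid),
)
loss.backward()
```

The per-view re-masking is what injects randomness into decoding, and the frozen target encoder is one plausible way to supply embedding-space targets; the paper's actual choices may differ.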
Related papers
- UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective, jointly with feature reconstruction, to capture holistic graph information.
arXiv Detail & Related papers (2024-02-12T19:39:26Z)
- Deep Manifold Graph Auto-Encoder for Attributed Graph Embedding [51.75091298017941]
This paper proposes a novel Deep Manifold (Variational) Graph Auto-Encoder (DMVGAE/DMGAE) for attributed graph data.
The proposed method surpasses state-of-the-art baseline algorithms by a significant margin on different downstream tasks across popular datasets.
arXiv Detail & Related papers (2024-01-12T17:57:07Z)
- Generative and Contrastive Paradigms Are Complementary for Graph Self-Supervised Learning [56.45977379288308]
The masked autoencoder (MAE) learns to reconstruct masked graph edges or node features.
Contrastive learning (CL) maximizes the similarity between augmented views of the same graph.
We propose a graph contrastive masked autoencoder (GCMAE) framework to unify MAE and CL; a sketch of this combination follows the list below.
arXiv Detail & Related papers (2023-10-24T05:06:06Z)
- GiGaMAE: Generalizable Graph Masked Autoencoder via Collaborative Latent Space Reconstruction [76.35904458027694]
Masked autoencoder models lack good generalization ability on graph data.
We propose a novel graph masked autoencoder framework called GiGaMAE.
Our results will shed light on the design of foundation models on graph-structured data.
arXiv Detail & Related papers (2023-08-18T16:30:51Z)
- Heterogeneous Graph Masked Autoencoders [27.312282694217462]
We study the problem of generative SSL on heterogeneous graphs and propose HGMAE to address these challenges.
HGMAE captures comprehensive graph information via two innovative masking techniques and three unique training strategies.
In particular, we first develop metapath masking and adaptive attribute masking with a dynamic mask rate to enable effective and stable learning on heterogeneous graphs.
arXiv Detail & Related papers (2022-08-21T20:33:05Z)
- GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present GraphMAE, a masked graph autoencoder that mitigates common issues in generative self-supervised graph learning.
We conduct extensive experiments on 21 public datasets for three different graph learning tasks.
The results show that GraphMAE, a simple graph autoencoder with our careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines.
arXiv Detail & Related papers (2022-05-22T11:57:08Z)
- MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs [55.66953093401889]
We propose a masked graph autoencoder (MGAE) framework to perform effective learning on graph-structured data.
Taking insights from self-supervised learning, we randomly mask a large proportion of edges and try to reconstruct the missing edges during training; a sketch of this objective follows below.
arXiv Detail & Related papers (2022-01-07T16:48:07Z)
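As referenced in the MGAE entry above, the objective is to hide a large fraction of edges and train the model to recover them. Below is a minimal PyTorch sketch under assumed conventions (a 2 x E `edge_index` tensor and dot-product edge scoring with random negative sampling); it is an illustration, not the MGAE authors' code.

```python
import torch
import torch.nn.functional as F


def mask_edges(edge_index, mask_rate=0.7):
    """Split a 2 x E edge list into visible edges and held-out (masked) edges."""
    num_edges = edge_index.size(1)
    perm = torch.randperm(num_edges)
    num_masked = int(mask_rate * num_edges)
    return edge_index[:, perm[num_masked:]], edge_index[:, perm[:num_masked]]


def edge_reconstruction_loss(z, masked_edges):
    """Binary cross-entropy: held-out edges vs. randomly sampled node pairs."""
    src, dst = masked_edges
    pos = (z[src] * z[dst]).sum(dim=-1)          # scores for held-out edges
    neg_dst = torch.randint(0, z.size(0), (src.size(0),))
    neg = (z[src] * z[neg_dst]).sum(dim=-1)      # scores for random pairs
    logits = torch.cat([pos, neg])
    labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    return F.binary_cross_entropy_with_logits(logits, labels)


# Toy usage: random graph; z stands in for embeddings produced by an
# encoder run on the visible edges only.
edge_index = torch.randint(0, 20, (2, 100))
visible, held_out = mask_edges(edge_index)
z = torch.randn(20, 8, requires_grad=True)
loss = edge_reconstruction_loss(z, held_out)
loss.backward()
```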
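And as referenced in the GCMAE entry, generative and contrastive objectives can be combined as a weighted sum of a masked-reconstruction term and a contrastive term between two augmented views. The sketch below is a hypothetical composition, not the GCMAE authors' implementation; `alpha`, the MSE reconstruction error, and the InfoNCE temperature are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def info_nce(z1, z2, temperature=0.2):
    """InfoNCE: matching rows of z1 and z2 are positives, all others negatives."""
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)


def gcmae_style_loss(x, x_rec, masked, z_view1, z_view2, alpha=1.0):
    rec = F.mse_loss(x_rec[masked], x[masked])   # generative (MAE) term
    con = info_nce(z_view1, z_view2)             # contrastive (CL) term
    return rec + alpha * con


# Toy usage with random stand-ins for model outputs.
n, d, hid = 16, 32, 8
x, x_rec = torch.randn(n, d), torch.randn(n, d, requires_grad=True)
masked = torch.rand(n) < 0.5
z1 = torch.randn(n, hid, requires_grad=True)
z2 = torch.randn(n, hid, requires_grad=True)
gcmae_style_loss(x, x_rec, masked, z1, z2).backward()
```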