What's Behind the Mask: Understanding Masked Graph Modeling for Graph
Autoencoders
- URL: http://arxiv.org/abs/2205.10053v2
- Date: Mon, 29 May 2023 09:00:30 GMT
- Title: What's Behind the Mask: Understanding Masked Graph Modeling for Graph
Autoencoders
- Authors: Jintang Li, Ruofan Wu, Wangbin Sun, Liang Chen, Sheng Tian, Liang Zhu,
Changhua Meng, Zibin Zheng, Weiqiang Wang
- Abstract summary: MaskGAE is a self-supervised learning framework for graph-structured data.
MGM is a principled pretext task: masking a portion of edges and attempting to reconstruct the missing part from the partially visible, unmasked graph structure.
We establish close connections between GAEs and contrastive learning, showing that MGM significantly improves the self-supervised learning scheme of GAEs.
- Score: 32.42097625708298
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have witnessed the emergence of a promising self-supervised
learning strategy, referred to as masked autoencoding. However, there is a lack
of theoretical understanding of how masking matters for graph autoencoders
(GAEs). In this work, we present masked graph autoencoder (MaskGAE), a
self-supervised learning framework for graph-structured data. Different from
standard GAEs, MaskGAE adopts masked graph modeling (MGM) as a principled
pretext task - masking a portion of edges and attempting to reconstruct the
missing part from the partially visible, unmasked graph structure. To understand
whether MGM can help GAEs learn better representations, we provide both
theoretical and empirical evidence to comprehensively justify the benefits of
this pretext task. Theoretically, we establish close connections between GAEs
and contrastive learning, showing that MGM significantly improves the
self-supervised learning scheme of GAEs. Empirically, we conduct extensive
experiments on a variety of graph benchmarks, demonstrating the superiority of
MaskGAE over several state-of-the-art methods on both link prediction and node
classification tasks.
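The MGM pretext task described above is straightforward to prototype. The following is a minimal sketch in plain PyTorch, not the authors' MaskGAE implementation: randomly hide a fraction of edges, encode the remaining visible graph, and train the encoder so that a dot-product decoder can recover the masked edges against randomly sampled negative pairs. The helper names (mask_edges, SimpleGCNEncoder, mgm_loss), the single dense GCN layer, and the negative-sampling choice are illustrative assumptions rather than details taken from the paper.

```python
# Minimal sketch of masked graph modeling (MGM): mask edges, encode the visible
# graph, and reconstruct the masked edges. Illustrative only, not MaskGAE's code.
import torch
import torch.nn.functional as F

def mask_edges(edge_index, mask_ratio=0.7):
    """Split edges into visible and masked sets (edge_index has shape [2, E])."""
    num_edges = edge_index.size(1)
    perm = torch.randperm(num_edges)
    num_masked = int(mask_ratio * num_edges)
    masked = edge_index[:, perm[:num_masked]]    # targets to reconstruct
    visible = edge_index[:, perm[num_masked:]]   # structure fed to the encoder
    return visible, masked

class SimpleGCNEncoder(torch.nn.Module):
    """One-layer GCN-style encoder on a dense, symmetrically normalized adjacency."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = torch.nn.Linear(in_dim, hid_dim)

    def forward(self, x, visible_edges, num_nodes):
        adj = torch.zeros(num_nodes, num_nodes, device=x.device)
        adj[visible_edges[0], visible_edges[1]] = 1.0      # directed for simplicity
        adj = adj + torch.eye(num_nodes, device=x.device)  # add self-loops
        deg_inv = adj.sum(dim=1).clamp(min=1).pow(-0.5)
        adj = deg_inv.unsqueeze(1) * adj * deg_inv.unsqueeze(0)
        return F.relu(adj @ self.lin(x))

def mgm_loss(z, masked_edges, num_nodes):
    """Binary cross-entropy over masked (positive) and random (negative) node pairs."""
    src, dst = masked_edges
    pos_logits = (z[src] * z[dst]).sum(dim=-1)
    neg_dst = torch.randint(0, num_nodes, (src.size(0),), device=z.device)
    neg_logits = (z[src] * z[neg_dst]).sum(dim=-1)
    pos_loss = F.binary_cross_entropy_with_logits(pos_logits, torch.ones_like(pos_logits))
    neg_loss = F.binary_cross_entropy_with_logits(neg_logits, torch.zeros_like(neg_logits))
    return pos_loss + neg_loss

# Toy usage: 6 nodes with random features and a ring of 6 edges.
x = torch.randn(6, 8)
edge_index = torch.tensor([[0, 1, 2, 3, 4, 5], [1, 2, 3, 4, 5, 0]])
visible, masked = mask_edges(edge_index, mask_ratio=0.5)
encoder = SimpleGCNEncoder(8, 16)
z = encoder(x, visible, num_nodes=6)
loss = mgm_loss(z, masked, num_nodes=6)
loss.backward()
```

A practical implementation would typically replace the dense adjacency with sparse message passing from a GNN library, mask undirected edges symmetrically, and use the learned node embeddings for downstream link prediction or node classification.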
Related papers
- Revisiting and Benchmarking Graph Autoencoders: A Contrastive Learning Perspective [28.152560472541143]
Graph autoencoders (GAEs) are self-supervised learning models that can learn meaningful representations of graph-structured data.
We introduce lrGAE, a general and powerful GAE framework that leverages contrastive learning principles to learn meaningful representations.
arXiv Detail & Related papers (2024-10-14T07:59:30Z)
- Hi-GMAE: Hierarchical Graph Masked Autoencoders [90.30572554544385]
Hierarchical Graph Masked AutoEncoders (Hi-GMAE) is a novel multi-scale GMAE framework designed to handle the hierarchical structures within graphs.
Our experiments on 15 graph datasets consistently demonstrate that Hi-GMAE outperforms 17 state-of-the-art self-supervised competitors.
arXiv Detail & Related papers (2024-05-17T09:08:37Z)
- Rethinking Graph Masked Autoencoders through Alignment and Uniformity [26.86368034133612]
Self-supervised learning on graphs can be bifurcated into contrastive and generative methods.
The recent advent of the graph masked autoencoder (GraphMAE) has rekindled momentum behind generative methods.
arXiv Detail & Related papers (2024-02-11T15:21:08Z)
- Generative and Contrastive Paradigms Are Complementary for Graph Self-Supervised Learning [56.45977379288308]
The masked autoencoder (MAE) learns to reconstruct masked graph edges or node features.
Contrastive learning (CL) maximizes the similarity between augmented views of the same graph.
We propose the graph contrastive masked autoencoder (GCMAE) framework to unify MAE and CL.
arXiv Detail & Related papers (2023-10-24T05:06:06Z)
- Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules [81.05116895430375]
Masked graph modeling excels in the self-supervised representation learning of molecular graphs.
We show that a subgraph-level tokenizer and a sufficiently expressive decoder with remask decoding have a large impact on the encoder's representation learning.
We propose a novel MGM method, SimSGT, featuring a Simple GNN-based Tokenizer (SGT) and an effective decoding strategy.
arXiv Detail & Related papers (2023-10-23T09:40:30Z)
- GiGaMAE: Generalizable Graph Masked Autoencoder via Collaborative Latent Space Reconstruction [76.35904458027694]
Masked autoencoder models lack good generalization ability on graph data.
We propose a novel graph masked autoencoder framework called GiGaMAE.
Our results will shed light on the design of foundation models on graph-structured data.
arXiv Detail & Related papers (2023-08-18T16:30:51Z)
- GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present GraphMAE, a masked graph autoencoder that mitigates the issues of generative self-supervised graph learning.
We conduct extensive experiments on 21 public datasets for three different graph learning tasks.
The results show that GraphMAE, a simple graph autoencoder with our careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines.
arXiv Detail & Related papers (2022-05-22T11:57:08Z)
- MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs [55.66953093401889]
We propose a masked graph autoencoder (MGAE) framework to perform effective learning on graph-structured data.
Taking insights from self-supervised learning, we randomly mask a large proportion of edges and try to reconstruct these missing edges during training.
arXiv Detail & Related papers (2022-01-07T16:48:07Z)