MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs
- URL: http://arxiv.org/abs/2201.02534v1
- Date: Fri, 7 Jan 2022 16:48:07 GMT
- Title: MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs
- Authors: Qiaoyu Tan, Ninghao Liu, Xiao Huang, Rui Chen, Soo-Hyun Choi, Xia Hu
- Abstract summary: A masked graph autoencoder (MGAE) framework for effective learning on graph-structured data.
Taking insights from self-supervised learning, we randomly mask a large proportion of edges and try to reconstruct these missing edges during training.
- Score: 55.66953093401889
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a novel masked graph autoencoder (MGAE) framework to perform
effective learning on graph-structured data. Taking insights from
self-supervised learning, we randomly mask a large proportion of edges and try
to reconstruct these missing edges during training. MGAE has two core designs.
First, we find that masking a high ratio of the input graph structure, e.g.,
70%, yields a nontrivial and meaningful self-supervisory task that benefits
downstream applications. Second, we employ a graph neural network (GNN) as an
encoder to perform message propagation on the partially-masked graph. To
reconstruct the large number of masked edges, a tailored cross-correlation
decoder is proposed; it captures the cross-correlation between the head and
tail nodes of an anchor edge at multiple granularities. Coupling these two designs
enables MGAE to be trained efficiently and effectively. Extensive experiments
on multiple open datasets (Planetoid and OGB benchmarks) demonstrate that MGAE
generally performs better than state-of-the-art unsupervised learning
competitors on link prediction and node classification.
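To make the recipe concrete, below is a minimal PyTorch sketch of the pipeline the abstract describes: mask a high ratio of edges, run a GNN encoder over the visible graph, and score candidate edges with a decoder that cross-correlates head/tail embeddings drawn from every encoder layer. All names here (mask_edges, CrossCorrelationDecoder, the dense-adjacency encoder) are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def mask_edges(edge_index, mask_ratio=0.7):
    """Hide a large fraction of edges; the hidden ones become reconstruction targets."""
    perm = torch.randperm(edge_index.size(1))
    cut = int(mask_ratio * edge_index.size(1))
    return edge_index[:, perm[cut:]], edge_index[:, perm[:cut]]  # (visible, masked)

def normalized_adj(edge_index, num_nodes):
    """Dense symmetric-normalized adjacency with self-loops (fine for small graphs)."""
    adj = torch.zeros(num_nodes, num_nodes)
    adj[edge_index[0], edge_index[1]] = 1.0
    adj = ((adj + adj.t() + torch.eye(num_nodes)) > 0).float()  # symmetrize + self-loops
    d = adj.sum(dim=1).pow(-0.5)
    return d[:, None] * adj * d[None, :]

class GNNEncoder(nn.Module):
    """GCN-style message passing on the partially masked graph; keeps every
    layer's output so the decoder can work at multiple granularities."""
    def __init__(self, in_dim, hid_dim, num_layers=2):
        super().__init__()
        dims = [in_dim] + [hid_dim] * num_layers
        self.lins = nn.ModuleList(nn.Linear(i, o) for i, o in zip(dims[:-1], dims[1:]))

    def forward(self, x, adj_norm):
        hs = []
        for lin in self.lins:
            x = F.relu(adj_norm @ lin(x))
            hs.append(x)
        return hs  # list of [num_nodes, hid_dim] tensors, one per layer

class CrossCorrelationDecoder(nn.Module):
    """Scores an edge (u, v) from element-wise products of u's and v's
    embeddings across every pair of encoder layers, followed by an MLP.
    A rough stand-in for the paper's cross-correlation decoder."""
    def __init__(self, hid_dim, num_layers=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hid_dim * num_layers ** 2, hid_dim),
            nn.ReLU(),
            nn.Linear(hid_dim, 1),
        )

    def forward(self, hs, edge_index):
        u, v = edge_index
        cross = [hu[u] * hv[v] for hu in hs for hv in hs]  # layer-pair correlations
        return self.mlp(torch.cat(cross, dim=-1)).squeeze(-1)

def train_step(encoder, decoder, optimizer, x, edge_index):
    visible, masked = mask_edges(edge_index, mask_ratio=0.7)
    hs = encoder(x, normalized_adj(visible, x.size(0)))
    neg = torch.randint(0, x.size(0), masked.shape)  # naive negative sampling
    logits = torch.cat([decoder(hs, masked), decoder(hs, neg)])
    labels = torch.cat([torch.ones(masked.size(1)), torch.zeros(neg.size(1))])
    loss = F.binary_cross_entropy_with_logits(logits, labels)
    optimizer.zero_grad(); loss.backward(); optimizer.step()
    return loss.item()
```

The mask ratio is the key knob: per the abstract's first design point, a high ratio such as 70% is what makes the pretext task nontrivial.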
Related papers
- Amplify Graph Learning for Recommendation via Sparsity Completion [16.32861024767423]
Graph learning models have been widely deployed in collaborative filtering (CF) based recommendation systems.
Due to the issue of data sparsity, the graph structure of the original input lacks potential positive preference edges.
We propose an Amplify Graph Learning framework based on Sparsity Completion (called AGL-SC).
arXiv Detail & Related papers (2024-06-27T08:26:20Z) - Hi-GMAE: Hierarchical Graph Masked Autoencoders [90.30572554544385]
Hierarchical Graph Masked AutoEncoders (Hi-GMAE) is a novel multi-scale GMAE framework designed to handle the hierarchical structures within graphs.
Our experiments on 15 graph datasets consistently demonstrate that Hi-GMAE outperforms 17 state-of-the-art self-supervised competitors.
arXiv Detail & Related papers (2024-05-17T09:08:37Z) - Generative and Contrastive Paradigms Are Complementary for Graph
Self-Supervised Learning [56.45977379288308]
Masked autoencoder (MAE) learns to reconstruct masked graph edges or node features.
Contrastive Learning (CL) maximizes the similarity between augmented views of the same graph.
We propose the graph contrastive masked autoencoder (GCMAE) framework to unify MAE and CL.
arXiv Detail & Related papers (2023-10-24T05:06:06Z) - Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules [81.05116895430375]
- Rethinking Tokenizer and Decoder in Masked Graph Modeling for Molecules [81.05116895430375]
Masked graph modeling excels in the self-supervised representation learning of molecular graphs.
We show that a subgraph-level tokenizer and a sufficiently expressive decoder with remask decoding have a large impact on the encoder's representation learning.
We propose a novel MGM method SimSGT, featuring a Simple GNN-based Tokenizer (SGT) and an effective decoding strategy.
arXiv Detail & Related papers (2023-10-23T09:40:30Z) - GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner [28.321233121613112]
- GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner [28.321233121613112]
Masked graph autoencoders (e.g., GraphMAE) have recently produced promising results, but their reliance on reconstructing raw node features leaves them sensitive to feature disturbance.
We present GraphMAE2, a masked self-supervised learning framework designed to overcome this issue.
We show that GraphMAE2 can consistently generate top results on various public datasets.
arXiv Detail & Related papers (2023-04-10T17:25:50Z) - What's Behind the Mask: Understanding Masked Graph Modeling for Graph
Autoencoders [32.42097625708298]
MaskGAE is a self-supervised learning framework for graph-structured data.
MGM is a principled pretext task: masking a portion of edges and attempting to reconstruct the missing part from the partially visible, unmasked graph structure.
We establish close connections between GAEs and contrastive learning, showing that MGM significantly improves the self-supervised learning scheme of GAEs.
arXiv Detail & Related papers (2022-05-20T09:45:57Z) - Node Feature Extraction by Self-Supervised Multi-scale Neighborhood
Prediction [123.20238648121445]
We propose a new self-supervised learning framework, Graph Information Aided Node feature exTraction (GIANT).
GIANT makes use of the eXtreme Multi-label Classification (XMC) formalism, which is crucial for fine-tuning the language model based on graph information.
We demonstrate the superior performance of GIANT over the standard GNN pipeline on Open Graph Benchmark datasets.
arXiv Detail & Related papers (2021-10-29T19:55:12Z) - A Unified Lottery Ticket Hypothesis for Graph Neural Networks [82.31087406264437]
We present a unified GNN sparsification (UGS) framework that simultaneously prunes the graph adjacency matrix and the model weights.
We further generalize the popular lottery ticket hypothesis to GNNs for the first time, by defining a graph lottery ticket (GLT) as a pair of core sub-dataset and sparse sub-network.
arXiv Detail & Related papers (2021-02-12T21:52:43Z)