GiGaMAE: Generalizable Graph Masked Autoencoder via Collaborative Latent
Space Reconstruction
- URL: http://arxiv.org/abs/2308.09663v1
- Date: Fri, 18 Aug 2023 16:30:51 GMT
- Authors: Yucheng Shi, Yushun Dong, Qiaoyu Tan, Jundong Li, Ninghao Liu
- Abstract summary: Masked autoencoder models lack good generalization ability on graph data.
We propose a novel graph masked autoencoder framework called GiGaMAE.
Our results will shed light on the design of foundation models on graph-structured data.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Self-supervised learning with masked autoencoders has recently gained
popularity for its ability to produce effective image or textual
representations, which can be applied to various downstream tasks without
retraining. However, we observe that the current masked autoencoder models lack
good generalization ability on graph data. To tackle this issue, we propose a
novel graph masked autoencoder framework called GiGaMAE. Different from
existing masked autoencoders that learn node representations by explicitly
reconstructing the original graph components (e.g., features or edges), in this
paper, we propose to collaboratively reconstruct informative and integrated
latent embeddings. By considering embeddings encompassing graph topology and
attribute information as reconstruction targets, our model could capture more
generalized and comprehensive knowledge. Furthermore, we introduce a mutual
information-based reconstruction loss that enables the effective reconstruction
of multiple targets. This learning objective allows us to differentiate between
the exclusive knowledge learned from a single target and common knowledge
shared by multiple targets. We evaluate our method on three downstream tasks
with seven datasets as benchmarks. Extensive experiments demonstrate the
superiority of GiGaMAE against state-of-the-art baselines. We hope our results
will shed light on the design of foundation models on graph-structured data.
Our code is available at: https://github.com/sycny/GiGaMAE.
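The abstract describes the approach only at a high level. As a rough illustration (not the authors' implementation; see the repository above for the real code), the following minimal PyTorch sketch shows the shape of the idea: mask node features, encode the masked graph, and reconstruct latent targets standing in for topology and attribute embeddings with an InfoNCE-style mutual-information objective. All names (SimpleGCN, info_nce), the random stand-in targets, and the equal loss weights are assumptions made for this sketch.

```python
# Hypothetical sketch of GiGaMAE-style collaborative latent-space reconstruction.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    """One-layer GCN over a dense adjacency matrix (illustrative encoder)."""
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        a = adj + torch.eye(adj.size(0))          # add self-loops
        d = a.sum(dim=1).rsqrt()                  # D^{-1/2}
        a = d.unsqueeze(1) * a * d.unsqueeze(0)   # symmetric normalization
        return F.relu(self.lin(a @ x))

def info_nce(z, target, temperature=0.2):
    """InfoNCE lower bound on the mutual information between z and a target."""
    z, t = F.normalize(z, dim=-1), F.normalize(target, dim=-1)
    logits = z @ t.t() / temperature              # (n, n) similarities
    labels = torch.arange(z.size(0))              # positives on the diagonal
    return F.cross_entropy(logits, labels)

# Toy graph: random features and a sparse, symmetric random adjacency.
N, F_IN, D = 64, 32, 16
x = torch.randn(N, F_IN)
adj = (torch.rand(N, N) < 0.1).float()
adj = ((adj + adj.t()) > 0).float()

# Latent reconstruction targets. The paper uses embeddings that encode graph
# topology and attribute information; random tensors stand in here.
target_topo, target_attr = torch.randn(N, D), torch.randn(N, D)

encoder = SimpleGCN(F_IN, D)
proj_topo, proj_attr, proj_both = (nn.Linear(D, D) for _ in range(3))
mask_token = nn.Parameter(torch.zeros(1, F_IN))

# 1) Mask a random subset of node features with a learnable token.
mask = torch.rand(N) < 0.5
x_masked = x.clone()
x_masked[mask] = mask_token

# 2) Encode the masked graph.
z = encoder(x_masked, adj)

# 3) Collaboratively reconstruct the latent targets on the masked nodes:
#    one term per individual target (exclusive knowledge) plus a term for the
#    combined target (shared knowledge). Equal weights are illustrative only.
loss = (info_nce(proj_topo(z[mask]), target_topo[mask])
        + info_nce(proj_attr(z[mask]), target_attr[mask])
        + info_nce(proj_both(z[mask]), target_topo[mask] + target_attr[mask]))
print(f"pretraining loss: {loss.item():.3f}")
```

In the full method, which encoders produce the topology and attribute targets, and how the exclusive and shared terms are weighted, are central design decisions; the sketch only fixes the overall structure of the objective.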
Related papers
- Hi-GMAE: Hierarchical Graph Masked Autoencoders [90.30572554544385]
Hierarchical Graph Masked AutoEncoders (Hi-GMAE) is a novel multi-scale GMAE framework designed to handle the hierarchical structures within graphs.
Our experiments on 15 graph datasets consistently demonstrate that Hi-GMAE outperforms 17 state-of-the-art self-supervised competitors.
arXiv Detail & Related papers (2024-05-17T09:08:37Z) - UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective joint with feature reconstruction to capture holistic graph information.
arXiv Detail & Related papers (2024-02-12T19:39:26Z) - Generative and Contrastive Paradigms Are Complementary for Graph
Self-Supervised Learning [56.45977379288308]
Masked autoencoder (MAE) learns to reconstruct masked graph edges or node features.
Contrastive learning (CL) maximizes the similarity between augmented views of the same graph.
We propose a graph contrastive masked autoencoder (GCMAE) framework to unify MAE and CL.
arXiv Detail & Related papers (2023-10-24T05:06:06Z) - GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner [28.321233121613112]
Masked graph autoencoders (e.g., GraphMAE) have recently produced promising results.
We present GraphMAE2, a masked self-supervised learning framework aimed at overcoming the limitations of masked feature reconstruction.
We show that GraphMAE2 can consistently generate top results on various public datasets.
arXiv Detail & Related papers (2023-04-10T17:25:50Z) - GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present GraphMAE, a masked graph autoencoder that mitigates common issues in generative self-supervised graph learning.
We conduct extensive experiments on 21 public datasets for three different graph learning tasks.
The results show that GraphMAE, a simple graph autoencoder with careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines.
arXiv Detail & Related papers (2022-05-22T11:57:08Z) - What's Behind the Mask: Understanding Masked Graph Modeling for Graph
Autoencoders [32.42097625708298]
MaskGAE is a self-supervised learning framework for graph-structured data.
Masked graph modeling (MGM) is a principled pretext task: mask a portion of edges and attempt to reconstruct the missing part from the partially visible, unmasked graph structure.
We establish close connections between GAEs and contrastive learning, showing that MGM significantly improves the self-supervised learning scheme of GAEs.
arXiv Detail & Related papers (2022-05-20T09:45:57Z) - MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs [55.66953093401889]
We propose a masked graph autoencoder (MGAE) framework to perform effective learning on graph-structured data.
Taking insights from self-supervised learning, we randomly mask a large proportion of edges and try to reconstruct these missing edges during training (a minimal sketch of this edge-masking pretext task follows the list).
arXiv Detail & Related papers (2022-01-07T16:48:07Z)