Heterogeneous Graph Masked Autoencoders
- URL: http://arxiv.org/abs/2208.09957v1
- Date: Sun, 21 Aug 2022 20:33:05 GMT
- Title: Heterogeneous Graph Masked Autoencoders
- Authors: Yijun Tian, Kaiwen Dong, Chunhui Zhang, Chuxu Zhang, Nitesh V. Chawla
- Abstract summary: We study the problem of generative SSL on heterogeneous graphs and propose HGMAE to address its key challenges.
HGMAE captures comprehensive graph information via two innovative masking techniques and three unique training strategies.
In particular, we first develop metapath masking and adaptive attribute masking with dynamic mask rate to enable effective and stable learning on heterogeneous graphs.
- Score: 27.312282694217462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative self-supervised learning (SSL), especially masked autoencoders,
has become one of the most exciting learning paradigms and has shown great
potential in handling graph data. However, real-world graphs are always
heterogeneous, which poses three critical challenges that existing methods
ignore: 1) how to capture complex graph structure? 2) how to incorporate
various node attributes? and 3) how to encode different node positions? In
light of this, we study the problem of generative SSL on heterogeneous graphs
and propose HGMAE, a novel heterogeneous graph masked autoencoder model to
address these challenges. HGMAE captures comprehensive graph information via
two innovative masking techniques and three unique training strategies. In
particular, we first develop metapath masking and adaptive attribute masking
with dynamic mask rate to enable effective and stable learning on heterogeneous
graphs. We then design several training strategies including metapath-based
edge reconstruction to adopt complex structural information, target attribute
restoration to incorporate various node attributes, and positional feature
prediction to encode node positional information. Extensive experiments
demonstrate that HGMAE outperforms both contrastive and generative
state-of-the-art baselines on several tasks across multiple datasets.
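Since the abstract names HGMAE's masking techniques without giving implementation details, the following is a minimal PyTorch sketch of one plausible reading of "adaptive attribute masking with dynamic mask rate": the fraction of masked node attributes is annealed over training, and masked rows are replaced with a learnable token. The class name DynamicAttributeMasker, the linear schedule, and the rate bounds are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of dynamic-rate attribute masking; not HGMAE's
# published code. The linear schedule and rate bounds are assumptions.
import torch
import torch.nn as nn

class DynamicAttributeMasker(nn.Module):
    def __init__(self, feat_dim: int, min_rate: float = 0.3, max_rate: float = 0.7):
        super().__init__()
        self.min_rate = min_rate
        self.max_rate = max_rate
        # Learnable token substituted for masked node attributes.
        self.mask_token = nn.Parameter(torch.zeros(1, feat_dim))

    def forward(self, x: torch.Tensor, epoch: int, total_epochs: int):
        # Anneal the mask rate from min_rate toward max_rate as training proceeds.
        progress = epoch / max(total_epochs - 1, 1)
        rate = self.min_rate + (self.max_rate - self.min_rate) * progress
        # Choose a random subset of nodes to mask at the current rate.
        num_nodes = x.size(0)
        masked_idx = torch.randperm(num_nodes, device=x.device)[: int(rate * num_nodes)]
        x_masked = x.clone()
        x_masked[masked_idx] = self.mask_token  # broadcast over masked rows
        return x_masked, masked_idx
```

A decoder trained to restore the original attributes at masked_idx would then play the role of the target attribute restoration objective mentioned in the abstract; a (scaled) cosine error over the masked rows is a common reconstruction loss in masked graph autoencoders.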
Related papers
- Hi-GMAE: Hierarchical Graph Masked Autoencoders [90.30572554544385]
Hi-GMAE (Hierarchical Graph Masked AutoEncoders) is a novel multi-scale GMAE framework designed to handle the hierarchical structures within graphs. Our experiments on 15 graph datasets consistently demonstrate that Hi-GMAE outperforms 17 state-of-the-art self-supervised competitors.
arXiv Detail & Related papers (2024-05-17T09:08:37Z)
- UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders. We first develop an adaptive feature mask generator to account for the unique significance of nodes. We then design a ranking-based structure reconstruction objective, jointly with feature reconstruction, to capture holistic graph information.
arXiv Detail & Related papers (2024-02-12T19:39:26Z)
- Graph Transformer GANs with Graph Masked Modeling for Architectural Layout Generation [153.92387500677023]
We present a novel graph Transformer generative adversarial network (GTGAN) to learn effective graph node relations. The proposed graph Transformer encoder combines graph convolutions and self-attentions in a Transformer to model both local and global interactions. We also propose a novel self-guided pre-training method for graph representation learning.
arXiv Detail & Related papers (2024-01-15T14:36:38Z)
- GiGaMAE: Generalizable Graph Masked Autoencoder via Collaborative Latent Space Reconstruction [76.35904458027694]
Masked autoencoder models lack good generalization ability on graph data. We propose GiGaMAE, a novel graph masked autoencoder framework. Our results will shed light on the design of foundation models for graph-structured data.
arXiv Detail & Related papers (2023-08-18T16:30:51Z)
- GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner [28.321233121613112]
Masked graph autoencoders (e.g., GraphMAE) have recently produced promising results. We present GraphMAE2, a masked self-supervised learning framework that aims to make masked feature reconstruction less vulnerable to disturbance of the input features. We show that GraphMAE2 consistently achieves top results on various public datasets.
arXiv Detail & Related papers (2023-04-10T17:25:50Z)
- GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present GraphMAE, a masked graph autoencoder that mitigates the issues of generative self-supervised graph learning. We conduct extensive experiments on 21 public datasets for three different graph learning tasks. The results show that GraphMAE, a simple graph autoencoder with careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines.
arXiv Detail & Related papers (2022-05-22T11:57:08Z)
- MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs [55.66953093401889]
We propose a masked graph autoencoder (MGAE) framework to perform effective learning on graph-structured data. Taking insights from self-supervised learning, we randomly mask a large proportion of edges and try to reconstruct these missing edges during training (see the sketch after this list).
arXiv Detail & Related papers (2022-01-07T16:48:07Z)
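As a concrete reference point for the MGAE entry above, here is a minimal, hypothetical sketch of the "mask a large proportion of edges, then reconstruct them" recipe, assuming an edge_index tensor in the (2, num_edges) COO layout used by PyTorch Geometric; the function name and the 0.7 ratio are illustrative, not MGAE's published code.

```python
# Illustrative edge-masking sketch (MGAE-style); the 0.7 ratio is an assumption.
import torch

def mask_edges(edge_index: torch.Tensor, mask_ratio: float = 0.7):
    """Randomly hide a fraction of edges; the hidden edges become
    reconstruction targets for self-supervised training."""
    num_edges = edge_index.size(1)
    perm = torch.randperm(num_edges, device=edge_index.device)
    num_masked = int(mask_ratio * num_edges)
    masked = edge_index[:, perm[:num_masked]]    # edges the decoder must recover
    visible = edge_index[:, perm[num_masked:]]   # edges the encoder is allowed to see
    return visible, masked
```

An encoder would then message-pass over the visible edges only, while a decoder (e.g., an inner-product link predictor) is trained to score the masked pairs higher than randomly sampled negative pairs.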
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.