UGMAE: A Unified Framework for Graph Masked Autoencoders
- URL: http://arxiv.org/abs/2402.08023v1
- Date: Mon, 12 Feb 2024 19:39:26 GMT
- Title: UGMAE: A Unified Framework for Graph Masked Autoencoders
- Authors: Yijun Tian, Chuxu Zhang, Ziyi Kou, Zheyuan Liu, Xiangliang Zhang,
Nitesh V. Chawla
- Abstract summary: We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective, jointly optimized with feature reconstruction, to capture holistic graph information.
- Score: 67.75493040186859
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative self-supervised learning on graphs, particularly graph masked
autoencoders, has emerged as a popular learning paradigm and demonstrated its
efficacy in handling non-Euclidean data. However, several remaining issues
limit the capability of existing methods: 1) the disregard of uneven node
significance in masking, 2) the underutilization of holistic graph information,
3) the neglect of semantic knowledge in the representation space due to the
exclusive use of reconstruction loss in the output space, and 4) the unstable
reconstructions caused by the large volume of masked contents. In light of
this, we propose UGMAE, a unified framework for graph masked autoencoders to
address these issues from the perspectives of adaptivity, integrity,
complementarity, and consistency. Specifically, we first develop an adaptive
feature mask generator to account for the unique significance of nodes and
sample informative masks (adaptivity). We then design a ranking-based structure
reconstruction objective, jointly optimized with feature reconstruction, to
capture holistic graph information and emphasize the topological proximity between neighbors
(integrity). After that, we present a bootstrapping-based similarity module to
encode the high-level semantic knowledge in the representation space,
complementary to the low-level reconstruction in the output space
(complementarity). Finally, we build a consistency assurance module to provide
reconstruction objectives with extra stabilized consistency targets
(consistency). Extensive experiments demonstrate that UGMAE outperforms both
contrastive and generative state-of-the-art baselines on several tasks across
multiple datasets.
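The abstract's first two components (an adaptive feature mask biased by node significance, and feature reconstruction over the graph) can be illustrated with a minimal sketch. This is not the authors' implementation: the degree-based masking probabilities, the one-hop mean-aggregation "decoder", and the scaled-cosine error are simplified stand-ins chosen only to make the masking-then-reconstruction loop concrete.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy graph: 6 nodes, 4-dim features, symmetric adjacency matrix.
X = rng.normal(size=(6, 4))
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]], dtype=float)

def adaptive_mask(A, mask_ratio=0.5):
    """Sample nodes to mask with probability proportional to degree --
    a crude stand-in for the paper's notion of uneven node significance."""
    degree = A.sum(axis=1)
    probs = degree / degree.sum()
    n_mask = int(mask_ratio * A.shape[0])
    return rng.choice(A.shape[0], size=n_mask, replace=False, p=probs)

def reconstruct(X, A, masked):
    """Toy 'decoder': predict each node's features as the mean of its
    one-hop neighbors, with masked inputs replaced by a zero token."""
    X_in = X.copy()
    X_in[masked] = 0.0
    deg = np.clip(A.sum(axis=1, keepdims=True), 1.0, None)
    return (A @ X_in) / deg

masked = adaptive_mask(A)
X_hat = reconstruct(X, A, masked)

# Scaled-cosine-style reconstruction error, computed on masked nodes only.
cos = np.sum(X[masked] * X_hat[masked], axis=1) / (
    np.linalg.norm(X[masked], axis=1)
    * np.linalg.norm(X_hat[masked], axis=1) + 1e-8)
loss = float(np.mean((1.0 - cos) ** 2))
print(loss)
```

In a real graph masked autoencoder the mean-aggregation step would be a learned GNN encoder/decoder and the mask probabilities would be produced by a trained generator; the sketch only shows where each piece sits in the loop.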
Related papers
- Mesh Denoising Transformer [104.5404564075393]
Mesh denoising is aimed at removing noise from input meshes while preserving their feature structures.
SurfaceFormer is a pioneering Transformer-based mesh denoising framework.
A new representation, the Local Surface Descriptor, captures local geometric intricacies.
A Denoising Transformer module receives the multimodal information and achieves efficient global feature aggregation.
arXiv Detail & Related papers (2024-05-10T15:27:43Z) - Generative and Contrastive Paradigms Are Complementary for Graph
Self-Supervised Learning [56.45977379288308]
Masked autoencoder (MAE) learns to reconstruct masked graph edges or node features.
Contrastive Learning (CL) maximizes the similarity between augmented views of the same graph.
We propose graph contrastive masked autoencoder (GCMAE) framework to unify MAE and CL.
arXiv Detail & Related papers (2023-10-24T05:06:06Z) - GiGaMAE: Generalizable Graph Masked Autoencoder via Collaborative Latent
Space Reconstruction [76.35904458027694]
Masked autoencoder models lack good generalization ability on graph data.
We propose a novel graph masked autoencoder framework called GiGaMAE.
Our results will shed light on the design of foundation models on graph-structured data.
arXiv Detail & Related papers (2023-08-18T16:30:51Z) - GraphMAE2: A Decoding-Enhanced Masked Self-Supervised Graph Learner [28.321233121613112]
Masked graph autoencoders (e.g., GraphMAE) have recently produced promising results.
We present a masked self-supervised learning framework GraphMAE2 with the goal of overcoming this issue.
We show that GraphMAE2 can consistently generate top results on various public datasets.
arXiv Detail & Related papers (2023-04-10T17:25:50Z) - RARE: Robust Masked Graph Autoencoder [45.485891794905946]
Masked graph autoencoder (MGAE) has emerged as a promising self-supervised graph pre-training (SGP) paradigm.
We propose a novel SGP method termed Robust mAsked gRaph autoEncoder (RARE) to improve the certainty in inferring masked data.
arXiv Detail & Related papers (2023-04-04T03:35:29Z) - GD-MAE: Generative Decoder for MAE Pre-training on LiDAR Point Clouds [72.60362979456035]
Masked Autoencoders (MAE) are challenging to explore in large-scale 3D point clouds.
We propose a Generative Decoder for MAE (GD-MAE) that automatically merges the surrounding context.
We demonstrate the efficacy of the proposed method on large-scale benchmarks: KITTI and ONCE.
arXiv Detail & Related papers (2022-12-06T14:32:55Z) - Heterogeneous Graph Masked Autoencoders [27.312282694217462]
We study the problem of generative SSL on heterogeneous graphs and propose HGMAE to address these challenges.
HGMAE captures comprehensive graph information via two innovative masking techniques and three unique training strategies.
In particular, we first develop metapath masking and adaptive attribute masking with dynamic mask rate to enable effective and stable learning on heterogeneous graphs.
arXiv Detail & Related papers (2022-08-21T20:33:05Z) - MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs [55.66953093401889]
We propose a masked graph autoencoder (MGAE) framework to perform effective learning on graph-structured data.
Taking insights from self-supervised learning, we randomly mask a large proportion of edges and try to reconstruct these missing edges during training.
arXiv Detail & Related papers (2022-01-07T16:48:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.