Rethinking Graph Masked Autoencoders through Alignment and Uniformity
- URL: http://arxiv.org/abs/2402.07225v1
- Date: Sun, 11 Feb 2024 15:21:08 GMT
- Title: Rethinking Graph Masked Autoencoders through Alignment and Uniformity
- Authors: Liang Wang, Xiang Tao, Qiang Liu, Shu Wu, Liang Wang
- Abstract summary: Self-supervised learning on graphs can be bifurcated into contrastive and generative methods.
The recent advent of the graph masked autoencoder (GraphMAE) rekindles momentum behind generative methods.
- Score: 26.86368034133612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-supervised learning on graphs can be bifurcated into contrastive and
generative methods. Contrastive methods, also known as graph contrastive
learning (GCL), have dominated graph self-supervised learning in the past few
years, but the recent advent of graph masked autoencoder (GraphMAE) rekindles
the momentum behind generative methods. Despite the empirical success of
GraphMAE, there is still a dearth of theoretical understanding regarding its
efficacy. Moreover, while both generative and contrastive methods have been
shown to be effective, their connections and differences have yet to be
thoroughly investigated. Therefore, we theoretically build a bridge between
GraphMAE and GCL, and prove that the node-level reconstruction objective in
GraphMAE implicitly performs context-level GCL. Based on our theoretical
analysis, we further identify the limitations of GraphMAE from the
perspectives of alignment and uniformity, which are considered two key
properties of high-quality representations in GCL. We point out that GraphMAE's
alignment performance is restricted by the masking strategy, and the uniformity
is not strictly guaranteed. To remedy the aforementioned limitations, we
propose an Alignment-Uniformity enhanced Graph Masked AutoEncoder, named
AUG-MAE. Specifically, we propose an easy-to-hard adversarial masking strategy
to provide hard-to-align samples, which improves the alignment performance.
Meanwhile, we introduce an explicit uniformity regularizer to ensure the
uniformity of the learned representations. Experimental results on benchmark
datasets demonstrate the superiority of our model over existing
state-of-the-art methods.
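For readers new to these two properties, they are standardly measured with the alignment and uniformity losses of Wang and Isola (2020). The PyTorch sketch below is a minimal illustration, assuming L2-normalized embeddings; the combined objective and the weight `lam` are hedged stand-ins, not AUG-MAE's published formulation:

```python
import torch

def alignment_loss(z1, z2, alpha=2):
    # Alignment: embeddings of positive pairs (e.g., a node and a view of it)
    # should lie close together. z1, z2: L2-normalized, shape (N, d).
    return (z1 - z2).norm(p=2, dim=1).pow(alpha).mean()

def uniformity_loss(z, t=2):
    # Uniformity: embeddings should spread over the unit hypersphere.
    # Log of the mean Gaussian potential over all pairwise distances.
    return torch.pdist(z, p=2).pow(2).mul(-t).exp().mean().log()

def combined_objective(z1, z2, lam=0.1):
    # Hypothetical objective in the spirit of AUG-MAE: align positive pairs
    # while explicitly regularizing uniformity; `lam` is illustrative.
    return alignment_loss(z1, z2) + lam * uniformity_loss(torch.cat([z1, z2]))
```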
Related papers
- Preserving Node Distinctness in Graph Autoencoders via Similarity Distillation [9.395697548237333]
Graph autoencoders (GAEs) rely on distance-based criteria, such as mean-square error (MSE), to reconstruct the input graph.
Relying solely on a single reconstruction criterion, however, may lead to a loss of distinctiveness in the reconstructed graph.
We have developed a simple yet effective strategy to preserve the necessary distinctness in the reconstructed graph.
arXiv Detail & Related papers (2024-06-25T12:54:35Z)
- Hierarchical Topology Isomorphism Expertise Embedded Graph Contrastive Learning [37.0788516033498]
We propose a novel hierarchical topology isomorphism expertise embedded graph contrastive learning.
We empirically demonstrate that the proposed method is universal to multiple state-of-the-art GCL models.
Our method beats the state-of-the-art method by 0.23% in the unsupervised representation learning setting.
arXiv Detail & Related papers (2023-12-21T14:07:46Z)
- VIGraph: Generative Self-supervised Learning for Class-Imbalanced Node Classification [9.686218058331061]
Class imbalance in graph data presents significant challenges for node classification.
Existing methods, such as SMOTE-based approaches, exhibit limitations in constructing imbalanced graphs.
We introduce VIGraph, a simple yet effective generative SSL approach that relies on the Variational GAE as the fundamental model.
arXiv Detail & Related papers (2023-11-02T12:36:19Z)
- Generative and Contrastive Paradigms Are Complementary for Graph Self-Supervised Learning [56.45977379288308]
Masked autoencoder (MAE) learns to reconstruct masked graph edges or node features.
Contrastive Learning (CL) maximizes the similarity between augmented views of the same graph.
We propose the graph contrastive masked autoencoder (GCMAE) framework to unify MAE and CL.
arXiv Detail & Related papers (2023-10-24T05:06:06Z)
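A rough sketch of how such a unification can look: a masked-feature reconstruction term plus a contrastive term over two views. The function names and the weight `beta` are assumptions for illustration, not GCMAE's published formulation:

```python
import torch
import torch.nn.functional as F

def info_nce(z1, z2, tau=0.5):
    # Contrastive term: row i of z1 and z2 are two views of the same node.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau
    labels = torch.arange(z1.size(0), device=z1.device)
    return F.cross_entropy(logits, labels)

def unified_loss(x_recon, x_target, z1, z2, beta=1.0):
    # Generative term (reconstruct masked node features) plus a contrastive
    # term over encoded views; `beta` balances the two and is illustrative.
    return F.mse_loss(x_recon, x_target) + beta * info_nce(z1, z2)
```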
- Localized Contrastive Learning on Graphs [110.54606263711385]
We introduce a simple yet effective contrastive model named Localized Graph Contrastive Learning (Local-GCL).
In spite of its simplicity, Local-GCL achieves quite competitive performance in self-supervised node representation learning tasks on graphs with various scales and properties.
arXiv Detail & Related papers (2022-12-08T23:36:00Z)
- Single-Pass Contrastive Learning Can Work for Both Homophilic and Heterophilic Graph [60.28340453547902]
Graph contrastive learning (GCL) techniques typically require two forward passes for a single instance to construct the contrastive loss.
Existing GCL approaches fail to provide strong performance guarantees.
We implement the Single-Pass Graph Contrastive Learning method (SP-GCL).
Empirically, the features learned by the SP-GCL can match or outperform existing strong baselines with significantly less computational overhead.
arXiv Detail & Related papers (2022-11-20T07:18:56Z)
- Unifying Graph Contrastive Learning with Flexible Contextual Scopes [57.86762576319638]
We present a self-supervised learning method termed Unifying Graph Contrastive Learning with Flexible Contextual Scopes (UGCL for short).
Our algorithm builds flexible contextual representations with contextual scopes by controlling the power of an adjacency matrix.
Based on representations from both local and contextual scopes, UGCL optimises a very simple contrastive loss function for graph representation learning.
arXiv Detail & Related papers (2022-10-17T07:16:17Z)
- GraphMAE: Self-Supervised Masked Graph Autoencoders [52.06140191214428]
We present GraphMAE, a masked graph autoencoder that mitigates common issues in generative self-supervised graph learning.
We conduct extensive experiments on 21 public datasets for three different graph learning tasks.
The results show that GraphMAE, a simple graph autoencoder with careful designs, consistently outperforms both contrastive and generative state-of-the-art baselines.
arXiv Detail & Related papers (2022-05-22T11:57:08Z)
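A minimal sketch of the masked-feature reconstruction behind GraphMAE, assuming a GNN encoder/decoder pair, a learnable [MASK] token, and GraphMAE's scaled cosine error; tensor shapes and function names are illustrative:

```python
import torch
import torch.nn.functional as F

def scaled_cosine_error(x_recon, x_orig, gamma=2):
    # GraphMAE-style criterion: one minus cosine similarity, raised to a
    # power gamma >= 1 to down-weight easy (already well-aligned) samples.
    cos = F.cosine_similarity(x_recon, x_orig, dim=-1)
    return (1 - cos).pow(gamma).mean()

def masked_feature_loss(x, mask, encoder, decoder, mask_token):
    # x: (N, d) node features; mask: boolean (N,) marking masked nodes.
    # Replace masked rows with the learnable token, encode, decode, and
    # score reconstruction only on the masked nodes.
    x_in = torch.where(mask.unsqueeze(-1), mask_token.expand_as(x), x)
    z = encoder(x_in)      # hypothetical GNN encoder (graph inputs omitted)
    x_recon = decoder(z)   # hypothetical decoder back to feature space
    return scaled_cosine_error(x_recon[mask], x[mask])
```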
- What's Behind the Mask: Understanding Masked Graph Modeling for Graph Autoencoders [32.42097625708298]
MaskGAE is a self-supervised learning framework for graph-structured data.
MGM (masked graph modeling) is a principled pretext task: masking a portion of edges and attempting to reconstruct the missing part from the partially visible, unmasked graph structure.
We establish close connections between GAEs and contrastive learning, showing that MGM significantly improves the self-supervised learning scheme of GAEs.
arXiv Detail & Related papers (2022-05-20T09:45:57Z)
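To make the MGM pretext task concrete, here is a hedged sketch of edge masking with a dot-product link decoder, a common GAE-style choice; MaskGAE's actual decoder and negative-sampling scheme may differ:

```python
import torch
import torch.nn.functional as F

def mask_edges(edge_index, mask_ratio=0.7):
    # edge_index: (2, E) COO edge list. Hide a random portion of edges; the
    # visible part feeds the encoder, the hidden part is the target.
    E = edge_index.size(1)
    perm = torch.randperm(E)
    n_masked = int(mask_ratio * E)
    return edge_index[:, perm[n_masked:]], edge_index[:, perm[:n_masked]]

def edge_reconstruction_loss(z, masked_edges, neg_edges):
    # Score node pairs by the inner product of their embeddings: held-out
    # edges are positives, sampled non-edges negatives (binary cross-entropy).
    pos = (z[masked_edges[0]] * z[masked_edges[1]]).sum(-1)
    neg = (z[neg_edges[0]] * z[neg_edges[1]]).sum(-1)
    labels = torch.cat([torch.ones_like(pos), torch.zeros_like(neg)])
    return F.binary_cross_entropy_with_logits(torch.cat([pos, neg]), labels)
```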
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Graph Contrastive Learning with Adaptive Augmentation [23.37786673825192]
We propose a novel graph contrastive representation learning method with adaptive augmentation.
Specifically, we design augmentation schemes based on node centrality measures to highlight important connective structures.
Our proposed method consistently outperforms existing state-of-the-art baselines and even surpasses some supervised counterparts.
arXiv Detail & Related papers (2020-10-27T15:12:21Z)
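The centrality-based augmentation above can be sketched as edge dropping whose probability shrinks with edge importance, so salient connective structure survives the augmentation. Degree centrality and the drop-probability formula below are illustrative stand-ins for the paper's specific schemes:

```python
import torch

def adaptive_edge_drop(edge_index, num_nodes, max_p=0.7):
    # Edge centrality proxy: mean degree centrality of the edge's endpoints.
    deg = torch.bincount(edge_index.reshape(-1), minlength=num_nodes).float()
    edge_cent = (deg[edge_index[0]] + deg[edge_index[1]]) / 2
    # Map centrality to a drop probability: high-centrality edges are
    # dropped with LOW probability (log scale tames heavy-tailed degrees).
    s = edge_cent.log()
    p = (s.max() - s) / (s.max() - s.mean() + 1e-9)
    keep = torch.rand(edge_index.size(1)) >= p.clamp(max=max_p)
    return edge_index[:, keep]
```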
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.