Empowering Dual-Level Graph Self-Supervised Pretraining with Motif
Discovery
- URL: http://arxiv.org/abs/2312.11927v1
- Date: Tue, 19 Dec 2023 08:09:36 GMT
- Title: Empowering Dual-Level Graph Self-Supervised Pretraining with Motif
Discovery
- Authors: Pengwei Yan, Kaisong Song, Zhuoren Jiang, Yangyang Kang, Tianqianjin
Lin, Changlong Sun, Xiaozhong Liu
- Abstract summary: We introduce Dual-level Graph self-supervised Pretraining with Motif discovery (DGPM).
DGPM orchestrates node-level and subgraph-level pretext tasks.
Experiments on 15 datasets validate DGPM's effectiveness and generalizability.
- Score: 28.38130326794833
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: While self-supervised graph pretraining techniques have shown promising
results in various domains, their application still experiences challenges of
limited topology learning, human knowledge dependency, and incompetent
multi-level interactions. To address these issues, we propose a novel solution,
Dual-level Graph self-supervised Pretraining with Motif discovery (DGPM), which
introduces a unique dual-level pretraining structure that orchestrates
node-level and subgraph-level pretext tasks. Unlike prior approaches, DGPM
autonomously uncovers significant graph motifs through an edge pooling module,
aligning learned motif similarities with graph kernel-based similarities. A
cross-matching task enables sophisticated node-motif interactions and novel
representation learning. Extensive experiments on 15 datasets validate DGPM's
effectiveness and generalizability, outperforming state-of-the-art methods in
unsupervised representation learning and transfer learning settings. The
autonomously discovered motifs demonstrate the potential of DGPM to enhance
robustness and interpretability.
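As a rough illustration of the dual-level idea, the sketch below combines a node-level pretext loss (masked-feature reconstruction as a stand-in) with a subgraph-level loss that aligns learned pairwise similarities against precomputed graph-kernel similarities. This is a minimal sketch under stated assumptions, not the authors' implementation: the tiny encoder, the mean-pooling substitute for the edge-pooling motif module, and the loss weighting are all placeholders.

```python
# Hypothetical sketch of a dual-level (node + subgraph) pretraining objective in the
# spirit of DGPM. Not the paper's implementation: encoder, pooling, and weights are
# placeholders chosen only to make the two-level structure concrete.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyGNN(nn.Module):
    """One round of mean-aggregation message passing over a dense adjacency matrix."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin(adj @ x / deg))

class DualLevelPretrainer(nn.Module):
    def __init__(self, in_dim, hid_dim, motif_weight=0.5):
        super().__init__()
        self.encoder = TinyGNN(in_dim, hid_dim)
        self.node_decoder = nn.Linear(hid_dim, in_dim)  # node-level pretext: masked-feature reconstruction
        self.motif_head = nn.Linear(hid_dim, hid_dim)   # stand-in for the edge-pooling motif module
        self.motif_weight = motif_weight

    def forward(self, graphs, kernel_sim):
        """graphs: list of (x, adj, mask); kernel_sim: [B, B] precomputed graph-kernel similarities."""
        node_losses, pooled = [], []
        for x, adj, mask in graphs:
            h = self.encoder(x, adj)
            # Node level: reconstruct the features of masked nodes.
            node_losses.append(F.mse_loss(self.node_decoder(h)[mask], x[mask]))
            # Subgraph level: pool into one summary vector (crude substitute for motif discovery).
            pooled.append(self.motif_head(h).mean(dim=0))
        loss_node = torch.stack(node_losses).mean()
        # Align learned pairwise similarities with graph-kernel similarities.
        z = F.normalize(torch.stack(pooled), dim=-1)
        loss_motif = F.mse_loss(z @ z.t(), kernel_sim)
        return loss_node + self.motif_weight * loss_motif

# Toy usage: three random graphs and an identity matrix standing in for kernel similarities.
torch.manual_seed(0)
graphs = []
for n in (6, 8, 5):
    adj = (torch.rand(n, n) > 0.6).float()
    adj = ((adj + adj.t()) > 0).float()
    mask = torch.zeros(n, dtype=torch.bool)
    mask[:2] = True                                      # mask two nodes per graph
    graphs.append((torch.randn(n, 16), adj, mask))
model = DualLevelPretrainer(in_dim=16, hid_dim=32)
print(model(graphs, torch.eye(len(graphs))))
```

In DGPM itself the subgraph-level branch operates on autonomously discovered motifs and is coupled to the node level through a cross-matching task; the sketch only shows how two pretext losses at different granularities can be trained jointly.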
Related papers
- Disentangled Generative Graph Representation Learning [51.59824683232925]
This paper introduces DiGGR (Disentangled Generative Graph Representation Learning), a self-supervised learning framework.
It aims to learn latent disentangled factors and utilize them to guide graph mask modeling.
Experiments on 11 public datasets for two different graph learning tasks demonstrate that DiGGR consistently outperforms many previous self-supervised methods.
arXiv Detail & Related papers (2024-08-24T05:13:02Z)
- Robust Training of Federated Models with Extremely Label Deficiency [84.00832527512148]
Federated semi-supervised learning (FSSL) has emerged as a powerful paradigm for collaboratively training machine learning models using distributed data with label deficiency.
We propose a novel twin-model paradigm, called Twin-sight, designed to enhance mutual guidance by providing insights from different perspectives of labeled and unlabeled data.
Our comprehensive experiments on four benchmark datasets provide substantial evidence that Twin-sight can significantly outperform state-of-the-art methods across various experimental settings.
arXiv Detail & Related papers (2024-02-22T10:19:34Z)
- Isomorphic-Consistent Variational Graph Auto-Encoders for Multi-Level Graph Representation Learning [9.039193854524763]
We propose the Isomorphic-Consistent VGAE (IsoC-VGAE) for task-agnostic graph representation learning.
We first devise a decoding scheme that provides a theoretical guarantee of preserving isomorphic consistency.
We then propose the Inverse Graph Neural Network (Inv-GNN) decoder as its intuitive realization.
arXiv Detail & Related papers (2023-12-09T10:16:53Z)
- ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt [67.8934749027315]
We propose a unified framework for graph hybrid pre-training that injects task identification and position identification into GNNs.
We also propose a novel pre-training paradigm based on a group of $k$-nearest neighbors.
arXiv Detail & Related papers (2023-10-23T12:11:13Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the finetuned LM.
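This two-step recipe can be made concrete with a hedged sketch: parameter-efficient fine-tuning of a pre-trained LM on the node labels, then exporting mean-pooled last hidden states as node embeddings. The backbone, LoRA hyperparameters, and pooling below are illustrative assumptions, not SimTeG's reported configuration.

```python
# Hedged sketch of the two-step recipe: (1) supervised PEFT (LoRA) of a pre-trained LM
# on downstream node labels, (2) exporting mean-pooled last hidden states as node
# embeddings. Backbone and hyperparameters are placeholders.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from peft import LoraConfig, TaskType, get_peft_model

model_name = "bert-base-uncased"                       # placeholder backbone
tok = AutoTokenizer.from_pretrained(model_name)
lm = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=3)
lm = get_peft_model(lm, LoraConfig(task_type=TaskType.SEQ_CLS, r=8,
                                   lora_alpha=16, target_modules=["query", "value"]))

node_texts = ["paper about graph kernels", "paper about motif mining", "paper about language models"]
node_labels = torch.tensor([0, 1, 2])                  # toy downstream labels

# Step 1: supervised parameter-efficient fine-tuning (one illustrative step).
batch = tok(node_texts, padding=True, return_tensors="pt")
opt = torch.optim.AdamW(lm.parameters(), lr=1e-4)
loss = lm(**batch, labels=node_labels).loss
loss.backward(); opt.step(); opt.zero_grad()

# Step 2: node embeddings from the fine-tuned LM's last hidden states.
lm.eval()
with torch.no_grad():
    out = lm(**batch, output_hidden_states=True)
    hidden = out.hidden_states[-1]                     # [num_nodes, seq_len, dim]
    attn = batch["attention_mask"].unsqueeze(-1).float()
    node_emb = (hidden * attn).sum(1) / attn.sum(1)    # mean pooling over tokens
print(node_emb.shape)                                  # these embeddings feed a downstream GNN
```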
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Self-supervised Heterogeneous Graph Pre-training Based on Structural Clustering [20.985559149384795]
We present SHGP, a novel Self-supervised Heterogeneous Graph Pre-training approach.
It does not need to generate any positive examples or negative examples.
It is superior to state-of-the-art unsupervised baselines and even semi-supervised baselines.
arXiv Detail & Related papers (2022-10-19T10:55:48Z)
- Iterative Graph Self-Distillation [161.04351580382078]
We propose a novel unsupervised graph learning paradigm called Iterative Graph Self-Distillation (IGSD).
IGSD iteratively performs teacher-student distillation with graph augmentations.
We show that we achieve significant and consistent performance gains on various graph datasets in both unsupervised and semi-supervised settings.
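A minimal sketch of that teacher-student loop follows, assuming an exponential-moving-average teacher, edge dropping as the graph augmentation, and a cosine distillation loss; these are illustrative stand-ins, not IGSD's exact formulation.

```python
# Hedged sketch of iterative teacher-student self-distillation on graphs: a student
# encoder matches an EMA teacher's embedding of a differently augmented view.
# Encoder, augmentation, and loss are illustrative choices, not IGSD's exact design.
import copy
import torch
import torch.nn.functional as F
from torch import nn

class GraphEncoder(nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, hid_dim)

    def forward(self, x, adj):
        deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
        return F.relu(self.lin(adj @ x / deg)).mean(0)   # graph-level embedding

def drop_edges(adj, p=0.2):
    return adj * (torch.rand_like(adj) > p).float()      # random edge dropping

student = GraphEncoder(16, 32)
teacher = copy.deepcopy(student)                          # teacher starts as a copy of the student
for prm in teacher.parameters():
    prm.requires_grad_(False)
opt = torch.optim.Adam(student.parameters(), lr=1e-3)

x, adj = torch.randn(10, 16), (torch.rand(10, 10) > 0.5).float()
for step in range(5):
    view1, view2 = drop_edges(adj), drop_edges(adj)       # two augmented views of the same graph
    s = F.normalize(student(x, view1), dim=-1)
    with torch.no_grad():
        t = F.normalize(teacher(x, view2), dim=-1)
    loss = 2 - 2 * (s * t).sum()                          # cosine-distance distillation loss
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():                                 # EMA update of the teacher from the student
        for tp, sp in zip(teacher.parameters(), student.parameters()):
            tp.mul_(0.99).add_(sp, alpha=0.01)
print(float(loss))
```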
arXiv Detail & Related papers (2020-10-23T18:37:06Z)
- Deep Graph Contrastive Representation Learning [23.37786673825192]
We propose a novel framework for unsupervised graph representation learning by leveraging a contrastive objective at the node level.
Specifically, we generate two graph views by corruption and learn node representations by maximizing the agreement between node representations across the two views.
We perform empirical experiments on both transductive and inductive learning tasks using a variety of real-world datasets.
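A minimal sketch of such a node-level contrastive objective is given below, assuming feature masking plus edge dropping as the corruption and an InfoNCE-style loss over cross-view node pairs; these are illustrative choices rather than the paper's exact augmentation scheme.

```python
# Hedged sketch of node-level contrastive learning between two corrupted graph views:
# the same node across views forms a positive pair; all other nodes act as negatives.
# Corruption scheme and encoder are illustrative stand-ins.
import torch
import torch.nn.functional as F
from torch import nn

proj = nn.Linear(16, 32)                                  # one-layer encoder weight

def encode(x, adj):
    deg = adj.sum(-1, keepdim=True).clamp(min=1.0)
    return F.relu(proj(adj @ x / deg))                    # one embedding per node

def corrupt(x, adj, p_feat=0.2, p_edge=0.2):
    x2 = x * (torch.rand_like(x) > p_feat).float()        # random feature masking
    adj2 = adj * (torch.rand_like(adj) > p_edge).float()  # random edge dropping
    return x2, adj2

def nt_xent(z1, z2, tau=0.5):
    z1, z2 = F.normalize(z1, dim=-1), F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / tau                            # [N, N] cross-view similarities
    labels = torch.arange(z1.size(0))                     # node i in view 1 matches node i in view 2
    return F.cross_entropy(logits, labels)

x, adj = torch.randn(20, 16), (torch.rand(20, 20) > 0.5).float()
opt = torch.optim.Adam(proj.parameters(), lr=1e-3)
for step in range(5):
    (xa, aa), (xb, ab) = corrupt(x, adj), corrupt(x, adj)
    loss = nt_xent(encode(xa, aa), encode(xb, ab))        # maximize cross-view agreement
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```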
arXiv Detail & Related papers (2020-06-07T11:50:45Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
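As a hedged illustration of maximizing mutual information between an encoder's input and output, the sketch below uses a generic Jensen-Shannon style estimator with a bilinear discriminator and shuffled negative pairs; GMI's actual fine-grained estimator differs.

```python
# Hedged sketch of input-output mutual-information maximization for a graph encoder:
# a bilinear discriminator scores aligned (input feature, output embedding) pairs
# against shuffled ones. A generic JSD-style stand-in, not GMI's actual estimator.
import torch
import torch.nn.functional as F
from torch import nn

n, in_dim, hid_dim = 20, 16, 32
x = torch.randn(n, in_dim)
adj = (torch.rand(n, n) > 0.5).float()
deg = adj.sum(-1, keepdim=True).clamp(min=1.0)

enc = nn.Linear(in_dim, hid_dim)          # one-layer graph neural encoder
disc = nn.Bilinear(in_dim, hid_dim, 1)    # discriminator over (input, output) pairs
opt = torch.optim.Adam(list(enc.parameters()) + list(disc.parameters()), lr=1e-3)

for step in range(5):
    h = F.relu(enc(adj @ x / deg))                       # output representations
    pos = disc(x, h)                                     # aligned pairs: score should be high
    neg = disc(x[torch.randperm(n)], h)                  # shuffled pairs as negatives
    # Minimize the negative of a JSD-style MI lower bound.
    loss = F.softplus(-pos).mean() + F.softplus(neg).mean()
    opt.zero_grad(); loss.backward(); opt.step()
print(float(loss))
```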
arXiv Detail & Related papers (2020-02-04T08:33:49Z)