Self-supervised Learning for Heterogeneous Graph via Structure
Information based on Metapath
- URL: http://arxiv.org/abs/2209.04218v1
- Date: Fri, 9 Sep 2022 10:06:18 GMT
- Title: Self-supervised Learning for Heterogeneous Graph via Structure
Information based on Metapath
- Authors: Shuai Ma, Jian-wei Liu, Xin Zuo
- Abstract summary: Self-supervised representation learning is a potential approach to tackle this problem.
In this paper, we propose a SElf-supervised learning method for heterogeneous graphs via Structure Information based on Metapath (SESIM).
To predict the jump number, SESIM generates labels from the data itself, avoiding time-consuming manual labeling.
- Score: 9.757299837675204
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graph neural networks (GNNs) are the dominant paradigm for modeling and
handling graph-structured data by learning universal node representations. The
traditional way of training GNNs depends on a large amount of labeled data, which
is costly and time-consuming to obtain; in some special scenarios it is even
unavailable or impractical. Self-supervised representation learning, which can
generate labels from the graph structure data itself, is a potential approach to
tackle this problem. Self-supervised learning for heterogeneous graphs is more
challenging than for homogeneous graphs and has received less research attention.
In this paper, we propose a SElf-supervised learning method for heterogeneous
graphs via Structure Information based on Metapath (SESIM). The proposed model
constructs pretext tasks that predict the jump number between nodes in each
metapath to improve the representation ability of the primary task. To predict
the jump number, SESIM generates labels from the data itself, avoiding
time-consuming manual labeling. Moreover, predicting the jump number in each
metapath makes effective use of graph structure information, an essential
property of the relations between nodes; SESIM therefore deepens the model's
understanding of graph structure. Finally, we train the primary task and the
pretext tasks jointly, and use meta-learning to balance the contribution of the
pretext tasks to the primary task. Empirical results validate the performance of
SESIM and demonstrate that it improves the representation ability of traditional
neural networks on the link prediction and node classification tasks.
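The abstract describes two concrete mechanisms: generating jump-number labels between nodes along each metapath directly from the graph, and training the primary task jointly with the pretext tasks under balanced weights. The Python sketch below illustrates what such label generation and loss combination could look like. It is a minimal illustration, not the authors' implementation: the function names, the interpretation of "jump number" as the minimum number of full metapath traversals separating two nodes, and the fixed per-task weights (which SESIM instead balances via meta-learning) are all assumptions.

```python
import numpy as np

def metapath_jump_labels(adj_by_type, metapath, max_jump):
    """Self-supervised label generation (illustrative): for pairs of same-typed
    nodes, record how many traversals ("jumps") of the given metapath separate
    them, up to max_jump. Labels come from the graph itself; no manual
    annotation is needed.

    adj_by_type: dict mapping an edge type, e.g. ('author', 'paper'), to a
                 binary adjacency matrix of shape (n_src, n_dst).
    metapath:    sequence of edge types starting and ending at the same node
                 type, e.g. [('A', 'P'), ('P', 'A')] for the A-P-A metapath.
    Returns a list of (src, dst, jump_number) training triples.
    """
    # One "jump" = one full walk along the metapath (e.g. A-P-A).
    step = None
    for etype in metapath:
        a = adj_by_type[etype]
        step = a if step is None else step @ a
    step = (step > 0).astype(np.int8)            # reachability within one jump

    n = step.shape[0]
    labels = []
    reach = np.eye(n, dtype=np.int8)
    seen = np.eye(n, dtype=bool)                 # a node is 0 jumps from itself
    for k in range(1, max_jump + 1):
        reach = ((reach @ step) > 0).astype(np.int8)
        newly_reached = (reach > 0) & ~seen      # first reached at jump number k
        for i, j in zip(*np.nonzero(newly_reached)):
            labels.append((int(i), int(j), k))
        seen |= reach > 0
    return labels

def joint_loss(primary_loss, pretext_losses, pretext_weights):
    """Joint objective: primary-task loss plus weighted pretext-task losses.
    SESIM balances these weights via meta-learning; plain floats are used here
    purely for illustration."""
    return primary_loss + sum(w * l for w, l in zip(pretext_weights, pretext_losses))
```

In a full pipeline the (src, dst, jump_number) triples would supervise a small classification head on top of the shared node encoder, so that the jump-number pretext tasks and the primary task (link prediction or node classification) are optimized together.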
Related papers
- CLEAR: Cluster-based Prompt Learning on Heterogeneous Graphs [19.956925820094177]
We present CLEAR, a Cluster-based prompt model on heterogeneous graphs.
We align the pretext and downstream tasks to share the same training objective.
Experiments on downstream tasks confirm the superiority of CLEAR.
arXiv Detail & Related papers (2025-02-13T03:10:19Z)
- Data-Driven Self-Supervised Graph Representation Learning [0.0]
Self-supervised graph representation learning (SSGRL) is a representation learning paradigm used to reduce or avoid manual labeling.
We propose a novel data-driven SSGRL approach that automatically learns a suitable graph augmentation from the signal encoded in the graph.
We perform extensive experiments on node classification and graph property prediction.
arXiv Detail & Related papers (2024-12-24T10:04:19Z)
- Towards Graph Foundation Models: Learning Generalities Across Graphs via Task-Trees [50.78679002846741]
We introduce a novel approach for learning cross-task generalities in graphs.
We propose task-trees as basic learning instances to align task spaces on graphs.
Our findings indicate that when a graph neural network is pretrained on diverse task-trees, it acquires transferable knowledge.
arXiv Detail & Related papers (2024-12-21T02:07:43Z)
- GraphGLOW: Universal and Generalizable Structure Learning for Graph Neural Networks [72.01829954658889]
This paper introduces the mathematical definition of this novel problem setting.
We devise a general framework that coordinates a single graph-shared structure learner and multiple graph-specific GNNs.
The well-trained structure learner can directly produce adaptive structures for unseen target graphs without any fine-tuning.
arXiv Detail & Related papers (2023-06-20T03:33:22Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown powerful capacity at modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- SHGNN: Structure-Aware Heterogeneous Graph Neural Network [77.78459918119536]
This paper proposes a novel Structure-Aware Heterogeneous Graph Neural Network (SHGNN) to address the above limitations.
We first utilize a feature propagation module to capture the local structure information of intermediate nodes in the meta-path.
Next, we use a tree-attention aggregator to incorporate the graph structure information into the aggregation module on the meta-path.
Finally, we leverage a meta-path aggregator to fuse the information aggregated from different meta-paths (a simplified sketch of this aggregation pattern appears after this list).
arXiv Detail & Related papers (2021-12-12T14:18:18Z)
- Higher-Order Attribute-Enhancing Heterogeneous Graph Neural Networks [67.25782890241496]
We propose a higher-order Attribute-Enhancing Graph Neural Network (HAEGNN) for heterogeneous network representation learning.
HAEGNN simultaneously incorporates meta-paths and meta-graphs for rich, heterogeneous semantics.
It shows superior performance against the state-of-the-art methods in node classification, node clustering, and visualization.
arXiv Detail & Related papers (2021-04-16T04:56:38Z)
- Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous Graphs [21.617020380894488]
We propose a novel self-supervised auxiliary learning method to learn graph neural networks on heterogeneous graphs.
Our method can be applied to any graph neural networks in a plug-in manner without manual labeling or additional data.
arXiv Detail & Related papers (2020-07-16T12:32:11Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
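The SHGNN entry above outlines a three-step pipeline: propagate features of intermediate nodes along each meta-path, aggregate within a meta-path with an attention mechanism, and fuse across meta-paths. The PyTorch sketch below is a rough illustration of that pattern only; the class and parameter names are invented, and plain dot-product attention stands in for the paper's tree-attention aggregator, so this is a simplification rather than the authors' architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleMetapathAggregator(nn.Module):
    """Illustrative metapath aggregation: attend over metapath instances per
    node, then attend over metapaths. A stand-in for SHGNN's feature
    propagation / tree-attention / meta-path aggregation pipeline."""

    def __init__(self, dim, num_metapaths):
        super().__init__()
        self.proj = nn.ModuleList([nn.Linear(dim, dim) for _ in range(num_metapaths)])
        self.instance_query = nn.Parameter(torch.randn(dim))  # scores instances of one metapath
        self.metapath_query = nn.Parameter(torch.randn(dim))  # scores the metapath channels

    def forward(self, instance_feats):
        """instance_feats: list with one tensor per metapath, each of shape
        (num_nodes, num_instances, dim); each instance vector is assumed to be
        the mean of node features along one metapath instance (the 'feature
        propagation' step, precomputed outside this module)."""
        per_metapath = []
        for p, feats in enumerate(instance_feats):
            h = torch.tanh(self.proj[p](feats))                    # (N, I, d)
            alpha = F.softmax(h @ self.instance_query, dim=1)      # attention over instances
            per_metapath.append((alpha.unsqueeze(-1) * h).sum(1))  # (N, d)
        stacked = torch.stack(per_metapath, dim=1)                 # (N, P, d)
        beta = F.softmax(stacked @ self.metapath_query, dim=1)     # attention over metapaths
        return (beta.unsqueeze(-1) * stacked).sum(1)               # fused node embeddings
```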
This list is automatically generated from the titles and abstracts of the papers on this site.