Self-supervised Auxiliary Learning for Graph Neural Networks via
Meta-Learning
- URL: http://arxiv.org/abs/2103.00771v1
- Date: Mon, 1 Mar 2021 05:52:57 GMT
- Title: Self-supervised Auxiliary Learning for Graph Neural Networks via
Meta-Learning
- Authors: Dasol Hwang, Jinyoung Park, Sunyoung Kwon, Kyung-Min Kim, Jung-Woo Ha,
and Hyunwoo J. Kim
- Abstract summary: We propose a novel self-supervised auxiliary learning framework to effectively learn graph neural networks.
Our method learns to learn a primary task with various auxiliary tasks to improve generalization performance.
Our method can be applied to any graph neural network in a plug-in manner without manual labeling or additional data.
- Score: 16.847149163314462
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, graph neural networks (GNNs) have been widely adopted in
representation learning of graph-structured data and provided state-of-the-art
performance in various applications such as link prediction and node
classification. Simultaneously, self-supervised learning has been studied to
some extent to leverage rich unlabeled data in representation learning on
graphs. However, employing self-supervision tasks as auxiliary tasks to assist
a primary task has been less explored in the literature on graphs. In this
paper, we propose a novel self-supervised auxiliary learning framework to
effectively learn graph neural networks. Moreover, we first design meta-path
prediction as a self-supervised auxiliary task for heterogeneous graphs. Our
method learns to learn a primary task with various auxiliary tasks to
improve generalization performance. The proposed method identifies an effective
combination of auxiliary tasks and automatically balances them to improve the
primary task. Our method can be applied to any graph neural network in a
plug-in manner without manual labeling or additional data. Also, it can be
extended to any other auxiliary tasks. Our experiments demonstrate that the
proposed method consistently improves the performance of link prediction and
node classification on heterogeneous graphs.
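The core idea in the abstract, combining a primary loss with automatically balanced auxiliary losses, can be illustrated with a minimal sketch. This is not the authors' implementation: the softmax weighting and all names below are illustrative assumptions, and the meta-learning of the weights themselves is omitted.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw weight logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def combined_loss(primary_loss, aux_losses, weight_logits):
    """Total loss = primary loss + softmax-weighted sum of auxiliary losses.
    In a meta-learning framework, weight_logits would be updated so that
    the weighted auxiliary tasks most improve the primary task; here they
    are simply given."""
    weights = softmax(weight_logits)
    total = primary_loss + sum(w * l for w, l in zip(weights, aux_losses))
    return total, weights

# Example: two auxiliary tasks with equal (zero) logits share the weight evenly.
total, weights = combined_loss(1.0, [0.5, 0.2], [0.0, 0.0])
```

With equal logits the two auxiliary losses are weighted 0.5 each, so the total is 1.0 + 0.25 + 0.1 = 1.35; the meta-learner's job is to move those logits toward whichever auxiliary task helps the primary one most.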
Related papers
- Universal Graph Continual Learning [22.010954622073598]
We focus on a universal approach wherein each data point in a task can be a node or a graph, and the task varies from node to graph classification.
We propose a novel method that enables graph neural networks to excel in this universal setting.
arXiv Detail & Related papers (2023-08-27T01:19:19Z) - SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings using the last hidden states of the fine-tuned LM.
arXiv Detail & Related papers (2023-08-03T07:00:04Z) - State of the Art and Potentialities of Graph-level Learning [54.68482109186052]
Graph-level learning has been applied to many tasks including comparison, regression, classification, and more.
Traditional approaches to learning a set of graphs rely on hand-crafted features, such as substructures.
Deep learning has helped graph-level learning adapt to the growing scale of graphs by extracting features automatically and encoding graphs into low-dimensional representations.
arXiv Detail & Related papers (2023-01-14T09:15:49Z) - An Empirical Study of Retrieval-enhanced Graph Neural Networks [48.99347386689936]
Graph Neural Networks (GNNs) are effective tools for graph representation learning.
We propose a retrieval-enhanced scheme called GRAPHRETRIEVAL, which is agnostic to the choice of graph neural network models.
We conduct comprehensive experiments over 13 datasets, and we observe that GRAPHRETRIEVAL is able to reach substantial improvements over existing GNNs.
arXiv Detail & Related papers (2022-06-01T09:59:09Z) - Towards Unsupervised Deep Graph Structure Learning [67.58720734177325]
We propose an unsupervised graph structure learning paradigm, where the learned graph topology is optimized by data itself without any external guidance.
Specifically, we generate a learning target from the original data as an "anchor graph", and use a contrastive loss to maximize the agreement between the anchor graph and the learned graph.
arXiv Detail & Related papers (2022-01-17T11:57:29Z) - A Comprehensive Analytical Survey on Unsupervised and Semi-Supervised
Graph Representation Learning Methods [4.486285347896372]
This survey aims to evaluate all major classes of graph embedding methods.
We organized graph embedding techniques using a taxonomy that includes methods from manual feature engineering, matrix factorization, shallow neural networks, and deep graph convolutional networks.
We designed experiments on top of the PyTorch Geometric and DGL libraries and ran them on different multicore CPU and GPU platforms.
arXiv Detail & Related papers (2021-12-20T07:50:26Z) - Graph-Based Neural Network Models with Multiple Self-Supervised
Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z) - Self-supervised Auxiliary Learning with Meta-paths for Heterogeneous
Graphs [21.617020380894488]
We propose a novel self-supervised auxiliary learning method to learn graph neural networks on heterogeneous graphs.
Our method can be applied to any graph neural network in a plug-in manner without manual labeling or additional data.
arXiv Detail & Related papers (2020-07-16T12:32:11Z) - GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding -- a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
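Several entries above (GCC, and the anchor-graph method for unsupervised structure learning) rely on a contrastive objective that maximizes agreement between two views of the same node. A minimal NT-Xent-style sketch follows; it is purely illustrative, with all function and parameter names (`contrastive_agreement_loss`, `tau`) being assumptions rather than any paper's actual API, and the learned GNN encoders that produce the embeddings are not shown.

```python
import numpy as np

def contrastive_agreement_loss(z_anchor, z_learned, tau=0.5):
    """NT-Xent-style loss: for node i, its embedding from the anchor view
    and its embedding from the learned view form the positive pair; all
    other cross-view pairs act as negatives."""
    # L2-normalize each row so dot products become cosine similarities.
    z1 = z_anchor / np.linalg.norm(z_anchor, axis=1, keepdims=True)
    z2 = z_learned / np.linalg.norm(z_learned, axis=1, keepdims=True)
    sim = z1 @ z2.T / tau                       # (n, n) similarity matrix
    log_denom = np.log(np.exp(sim).sum(axis=1)) # log of each row's partition
    # Mean of -log( exp(sim_ii) / sum_j exp(sim_ij) ) over nodes i.
    return float(np.mean(log_denom - np.diag(sim)))
```

As a sanity check, embeddings whose rows are correctly aligned across views should incur a lower loss than the same embeddings with the row correspondence scrambled.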
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.