Multi-task Self-supervised Graph Neural Networks Enable Stronger Task
Generalization
- URL: http://arxiv.org/abs/2210.02016v1
- Date: Wed, 5 Oct 2022 04:09:38 GMT
- Title: Multi-task Self-supervised Graph Neural Networks Enable Stronger Task
Generalization
- Authors: Mingxuan Ju, Tong Zhao, Qianlong Wen, Wenhao Yu, Neil Shah, Yanfang
Ye, Chuxu Zhang
- Abstract summary: Self-supervised learning (SSL) for graph neural networks (GNNs) has attracted increasing attention from the machine learning community in recent years.
One weakness of conventional SSL frameworks for GNNs is that they learn through a single philosophy.
- Score: 40.265515914447924
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Self-supervised learning (SSL) for graph neural networks (GNNs) has attracted
increasing attention from the graph machine learning community in recent years,
owing to its capability to learn performant node embeddings without costly
label information. One weakness of conventional SSL frameworks for GNNs is that
they learn through a single philosophy, such as mutual information maximization
or generative reconstruction. When applied to various downstream tasks, these
frameworks rarely perform equally well for every task, because one philosophy
may not span the extensive knowledge required for all tasks. In light of this,
we introduce ParetoGNN, a multi-task SSL framework for node representation
learning over graphs. Specifically, ParetoGNN is self-supervised by manifold
pretext tasks observing multiple philosophies. To reconcile different
philosophies, we explore a multiple-gradient descent algorithm, such that
ParetoGNN actively learns from every pretext task while minimizing potential
conflicts. We conduct comprehensive experiments over four downstream tasks
(i.e., node classification, node clustering, link prediction, and partition
prediction), and our proposal achieves the best overall performance across
tasks on 11 widely adopted benchmark datasets. Besides, we observe that
learning from multiple philosophies enhances not only the task generalization
but also the single task performance, demonstrating that ParetoGNN achieves
better task generalization via the disjoint yet complementary knowledge learned
from different philosophies.
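To make the abstract's recipe concrete, the sketch below pairs two pretext losses following different philosophies (generative reconstruction and agreement between views) with the closed-form two-task weight from multiple-gradient descent, so a shared encoder learns from both tasks while reducing gradient conflict. The linear encoder, the toy losses, the synthetic data, and all shapes are illustrative assumptions, not ParetoGNN's implementation.

```python
# Minimal sketch: multi-task self-supervised training with a two-task
# multiple-gradient descent (MGDA) weighting. All components are stand-ins,
# not the paper's actual architecture or pretext tasks.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
enc = nn.Linear(16, 8)                     # stand-in for a GNN encoder
dec = nn.Linear(8, 16)                     # head for the generative task
x = torch.randn(32, 16)                    # stand-in node features
x_aug = x + 0.1 * torch.randn_like(x)      # cheap "augmented view"

def flat_grad(loss, params):
    """Gradient of `loss` w.r.t. the shared encoder, flattened to a vector."""
    gs = torch.autograd.grad(loss, params, retain_graph=True)
    return torch.cat([g.reshape(-1) for g in gs])

params = list(enc.parameters())
opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)

for step in range(100):
    z, z_aug = enc(x), enc(x_aug)
    # Philosophy 1: generative reconstruction.
    l_rec = F.mse_loss(dec(z), x)
    # Philosophy 2: mutual-information-style view agreement (simplified).
    l_con = -F.cosine_similarity(z, z_aug, dim=-1).mean()

    g1, g2 = flat_grad(l_rec, params), flat_grad(l_con, params)
    # Closed-form min-norm weight for two tasks: gamma minimizes
    # ||gamma*g1 + (1-gamma)*g2||^2, clipped to [0, 1].
    gamma = ((g2 - g1) @ g2 / ((g1 - g2) @ (g1 - g2) + 1e-12)).clamp(0, 1)

    opt.zero_grad()
    (gamma * l_rec + (1 - gamma) * l_con).backward()
    opt.step()
```

With more than two pretext tasks, the min-norm weights lose their closed form and are typically obtained with a small Frank-Wolfe solver over the task-gradient Gram matrix; the weighted update itself is unchanged.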
Related papers
- Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z)
- ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt [67.8934749027315]
We propose a unified framework for graph hybrid pre-training which injects the task identification and position identification into GNNs.
We also propose a novel pre-training paradigm based on a group of $k$-nearest neighbors.
arXiv Detail & Related papers (2023-10-23T12:11:13Z)
- Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks [69.38572074372392]
We present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks.
Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks.
arXiv Detail & Related papers (2023-07-13T16:39:08Z)
- Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) tries to tackle scenarios in which the test data does not fully follow the same distribution as the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z)
- Graph Representation Learning for Multi-Task Settings: a Meta-Learning Approach [5.629161809575013]
We propose a novel training strategy for graph representation learning, based on meta-learning.
Our method avoids the difficulties arising when learning to perform multiple tasks concurrently.
We show that the embeddings produced by a model trained with our method can be used to perform multiple tasks with comparable or, surprisingly, even higher performance than both single-task and multi-task end-to-end models.
arXiv Detail & Related papers (2022-01-10T12:58:46Z)
- Automated Self-Supervised Learning for Graphs [37.14382990139527]
This work aims to investigate how to automatically leverage multiple pretext tasks effectively.
We make use of a key principle of many real-world graphs, i.e., homophily, as guidance for effectively searching over various self-supervised pretext tasks.
We propose the AutoSSL framework, which can automatically search over combinations of various self-supervised tasks; a minimal sketch of this search loop follows the list below.
arXiv Detail & Related papers (2021-06-10T03:09:20Z)
- A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings [7.025709586759655]
We propose a novel meta-learning strategy capable of producing multi-task node embeddings.
We show that the embeddings produced by our method can be used to perform multiple tasks with comparable or higher performance than classically trained models.
arXiv Detail & Related papers (2020-12-12T08:36:47Z)
- Self-supervised Learning on Graphs: Deep Insights and New Direction [66.78374374440467]
Self-supervised learning (SSL) aims to create domain-specific pretext tasks on unlabeled data.
There is increasing interest in generalizing deep learning to the graph domain in the form of graph neural networks (GNNs).
arXiv Detail & Related papers (2020-06-17T20:30:04Z)
- Deep Multi-Task Augmented Feature Learning via Hierarchical Graph Neural Network [4.121467410954028]
We propose a Hierarchical Graph Neural Network to learn augmented features for deep multi-task learning.
Experiments on real-world datasets show a significant performance improvement when using this strategy.
arXiv Detail & Related papers (2020-02-12T06:02:20Z)
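Returning to the AutoSSL entry above: here is a hedged sketch of its core loop, searching over pretext-task weights and scoring each candidate by a pseudo-homophily proxy. The toy graph, the two stand-in pretext losses, the crude clustering-based pseudo-labels, and the plain random search are all assumptions for illustration; AutoSSL's actual losses and search strategies differ in detail.

```python
# Illustrative sketch: search over weights for several pretext losses,
# scoring each candidate by pseudo-homophily (how often pseudo-labels from
# clustering agree across edges). Everything here is a stand-in.
import torch

torch.manual_seed(0)
n, d = 40, 8
feats = torch.randn(n, d)
edges = torch.randint(0, n, (2, 100))       # stand-in edge list

def pseudo_homophily(z, edges, k=4):
    # Pseudo-labels via a crude one-shot assignment to random centers.
    centers = z[torch.randperm(len(z))[:k]]
    labels = torch.cdist(z, centers).argmin(dim=1)
    return (labels[edges[0]] == labels[edges[1]]).float().mean().item()

def train_with_weights(w, steps=50):
    enc = torch.nn.Linear(d, 4)
    opt = torch.optim.Adam(enc.parameters(), lr=1e-2)
    for _ in range(steps):
        z = enc(feats)
        l_feat = (z.norm(dim=1) - 1).pow(2).mean()            # toy pretext 1
        l_edge = (z[edges[0]] - z[edges[1]]).pow(2).mean()    # toy pretext 2
        loss = w[0] * l_feat + w[1] * l_edge
        opt.zero_grad()
        loss.backward()
        opt.step()
    return enc(feats).detach()

best_w, best_score = None, -1.0
for _ in range(10):                          # random search over task weights
    w = torch.rand(2)
    w = w / w.sum()
    score = pseudo_homophily(train_with_weights(w), edges)
    if score > best_score:
        best_w, best_score = w, score
print("best weights:", best_w, "pseudo-homophily:", round(best_score, 3))
```

Random search keeps the sketch short; any optimizer over the weight simplex could replace it.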