Graph Representation Learning for Multi-Task Settings: a Meta-Learning Approach
- URL: http://arxiv.org/abs/2201.03326v1
- Date: Mon, 10 Jan 2022 12:58:46 GMT
- Title: Graph Representation Learning for Multi-Task Settings: a Meta-Learning Approach
- Authors: Davide Buffelli, Fabio Vandin
- Abstract summary: We propose a novel training strategy for graph representation learning, based on meta-learning.
Our method avoids the difficulties arising when learning to perform multiple tasks concurrently.
We show that the embeddings produced by a model trained with our method can be used to perform multiple tasks with comparable or, surprisingly, even higher performance than both single-task and multi-task end-to-end models.
- Score: 5.629161809575013
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) have become the state-of-the-art method for many
applications on graph-structured data. GNNs are a framework for graph
representation learning, where a model learns to generate low-dimensional node
embeddings that encapsulate structural and feature-related information. GNNs
are usually trained in an end-to-end fashion, leading to highly specialized
node embeddings. While this approach achieves great results in the single-task
setting, generating node embeddings that can be used to perform multiple tasks
(with performance comparable to single-task models) is still an open problem.
We propose a novel training strategy for graph representation learning, based
on meta-learning, which allows the training of a GNN model capable of producing
multi-task node embeddings. Our method avoids the difficulties arising when
learning to perform multiple tasks concurrently by, instead, learning to
quickly (i.e. with a few steps of gradient descent) adapt to multiple tasks
singularly. We show that the embeddings produced by a model trained with our
method can be used to perform multiple tasks with comparable or, surprisingly,
even higher performance than both single-task and multi-task end-to-end models.
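To make the training strategy concrete, below is a minimal sketch of the episodic, MAML-style loop the abstract describes: adapt a small task-specific head with a few gradient steps on a support set, then update the shared GNN encoder from the post-adaptation query losses. The names `encoder`, `heads`, and `sample_episode`, and the choice of per-task linear heads and node classification, are illustrative assumptions, not the authors' actual code.

```python
import torch
import torch.nn.functional as F

def meta_train_step(encoder, heads, tasks, sample_episode, optimizer,
                    inner_lr=0.01, inner_steps=5):
    """One outer update: adapt to each task singularly with a few gradient
    steps on its support set (inner loop), then update the shared GNN
    encoder from the post-adaptation query losses (outer loop)."""
    optimizer.zero_grad()
    meta_loss = 0.0
    for task in tasks:
        # Hypothetical episode sampler: a graph, labels, and support/query
        # node masks for this task (node classification shown for brevity).
        graph, y, support, query = sample_episode(task)
        head = heads[task]  # one torch.nn.Linear head per task (assumed)
        fast = [head.weight.clone(), head.bias.clone()]
        for _ in range(inner_steps):
            z = encoder(graph.x, graph.edge_index)  # node embeddings
            loss = F.cross_entropy(F.linear(z, *fast)[support], y[support])
            grads = torch.autograd.grad(loss, fast, create_graph=True)
            fast = [w - inner_lr * g for w, g in zip(fast, grads)]
        z = encoder(graph.x, graph.edge_index)
        meta_loss = meta_loss + F.cross_entropy(F.linear(z, *fast)[query],
                                                y[query])
    meta_loss.backward()  # second-order gradients reach the shared encoder
    optimizer.step()
```

The design intent follows from the abstract: the encoder is never asked to fit all tasks at once, only to produce embeddings from which each task is reachable in a few adaptation steps, so one set of embeddings can serve multiple tasks.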
Related papers
- Can Graph Learning Improve Planning in LLM-based Agents? [61.47027387839096]
Task planning in language agents is emerging as an important research topic alongside the development of large language models (LLMs).
In this paper, we explore graph learning-based methods for task planning, a direction that is orthogonal to the prevalent focus on prompt design.
Our interest in graph learning stems from a theoretical discovery: the biases of attention and auto-regressive loss impede LLMs' ability to effectively navigate decision-making on graphs.
arXiv Detail & Related papers (2024-05-29T14:26:24Z)
- ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt [67.8934749027315]
We propose a unified framework for graph hybrid pre-training which injects the task identification and position identification into GNNs.
We also propose a novel pre-training paradigm based on a group of $k$-nearest neighbors.
arXiv Detail & Related papers (2023-10-23T12:11:13Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained LM on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM (see the sketch after this list).
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- All in One: Multi-task Prompting for Graph Neural Networks [30.457491401821652]
We propose a novel multi-task prompting method for graph models.
We first unify the format of graph prompts and language prompts with the prompt token, token structure, and inserting pattern.
We then study the task space of various graph applications and reformulate downstream problems to the graph-level task.
arXiv Detail & Related papers (2023-07-04T06:27:31Z)
- Task-Equivariant Graph Few-shot Learning [7.78018583713337]
Classifying nodes from only a limited number of labeled examples, known as few-shot node classification, is an important capability for Graph Neural Networks (GNNs).
We propose a new approach, the Task-Equivariant Graph few-shot learning (TEG) framework.
Our TEG framework enables the model to learn transferable task-adaptation strategies using a limited number of training meta-tasks.
arXiv Detail & Related papers (2023-05-30T05:47:28Z)
- Reinforced Continual Learning for Graphs [18.64268861430314]
This paper proposes a graph continual learning strategy that combines the architecture-based and memory-based approaches.
It is numerically validated with several graph continual learning benchmark problems in both task-incremental learning and class-incremental learning settings.
arXiv Detail & Related papers (2022-09-04T07:49:59Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
- A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings [7.025709586759655]
We propose a novel meta-learning strategy capable of producing multi-task node embeddings.
We show that the embeddings produced by our method can be used to perform multiple tasks with comparable or higher performance than classically trained models.
arXiv Detail & Related papers (2020-12-12T08:36:47Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding (GCC), a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
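As referenced in the SimTeG entry above, here is a minimal sketch of its two-step recipe under stated assumptions: LoRA-based PEFT of a pre-trained LM (only the configuration is shown; the fine-tuning loop is any standard supervised recipe), then extraction of node embeddings from the LM's last hidden states. The model name, LoRA hyperparameters, and mean pooling are illustrative choices, not the paper's exact setup.

```python
import torch
from transformers import AutoModel, AutoTokenizer
from peft import LoraConfig, get_peft_model

# Step 1 (assumed done elsewhere): LoRA fine-tune the LM on the downstream
# task's node texts, using any standard supervised fine-tuning loop.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
lm = AutoModel.from_pretrained("bert-base-uncased")
lm = get_peft_model(lm, LoraConfig(r=8, lora_alpha=16,
                                   target_modules=["query", "value"]))

# Step 2: use the fine-tuned LM's last hidden states as node embeddings.
@torch.no_grad()
def embed_nodes(texts, batch_size=32):
    lm.eval()
    out = []
    for i in range(0, len(texts), batch_size):
        enc = tokenizer(texts[i:i + batch_size], padding=True,
                        truncation=True, return_tensors="pt")
        h = lm(**enc).last_hidden_state              # (B, T, d)
        mask = enc["attention_mask"].unsqueeze(-1)   # ignore padding tokens
        out.append((h * mask).sum(1) / mask.sum(1))  # mean over real tokens
    return torch.cat(out)                            # (num_nodes, d)
```

The resulting embedding matrix is then fed to a downstream GNN as fixed node features, which is what keeps the approach "frustratingly simple".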