A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings
- URL: http://arxiv.org/abs/2012.06755v1
- Date: Sat, 12 Dec 2020 08:36:47 GMT
- Title: A Meta-Learning Approach for Graph Representation Learning in Multi-Task Settings
- Authors: Davide Buffelli, Fabio Vandin
- Abstract summary: We propose a novel meta-learning strategy capable of producing multi-task node embeddings.
We show that the embeddings produced by our method can be used to perform multiple tasks with comparable or higher performance than classically trained models.
- Score: 7.025709586759655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Graph Neural Networks (GNNs) are a framework for graph representation
learning, where a model learns to generate low-dimensional node embeddings that
encapsulate structural and feature-related information. GNNs are usually
trained in an end-to-end fashion, leading to highly specialized node
embeddings. However, generating node embeddings that can be used to perform
multiple tasks (with performance comparable to single-task models) is an open
problem. We propose a novel meta-learning strategy capable of producing
multi-task node embeddings. Our method avoids the difficulties arising when
learning to perform multiple tasks concurrently by, instead, learning to
quickly (i.e. with a few steps of gradient descent) adapt to multiple tasks
singularly. We show that the embeddings produced by our method can be used to
perform multiple tasks with comparable or higher performance than classically
trained models. Our method is model-agnostic and task-agnostic, thus applicable
to a wide variety of multi-task domains.
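The abstract describes a MAML-style procedure: a shared GNN encoder is meta-trained so that, for any single task, a few inner gradient steps on a small task head yield strong performance, and the outer update improves the encoder's embeddings. The sketch below illustrates this reading only; the one-layer encoder, the single toy task, the shared support/query data, and all hyperparameters (inner_lr, step counts) are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimpleGCN(nn.Module):
    # One GCN-style layer, H = relu(A_hat @ X @ W): a stand-in encoder.
    def __init__(self, in_dim, emb_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, emb_dim)

    def forward(self, x, a_hat):
        return F.relu(a_hat @ self.lin(x))

def adapt_head(encoder, x, a_hat, y, n_classes, steps=3, inner_lr=0.1):
    # Inner loop: a few gradient steps on a fresh linear head, one task
    # at a time ("singularly"); create_graph lets the outer update
    # differentiate through the adaptation.
    z = encoder(x, a_hat)
    w = torch.zeros(n_classes, z.size(1), requires_grad=True)
    b = torch.zeros(n_classes, requires_grad=True)
    for _ in range(steps):
        loss = F.cross_entropy(F.linear(z, w, b), y)
        gw, gb = torch.autograd.grad(loss, (w, b), create_graph=True)
        w, b = w - inner_lr * gw, b - inner_lr * gb
    return w, b

x, a_hat = torch.randn(8, 5), torch.eye(8)        # toy graph: 8 nodes
tasks = {"node_cls": torch.randint(0, 3, (8,))}   # one illustrative task

encoder = SimpleGCN(5, 16)
meta_opt = torch.optim.Adam(encoder.parameters(), lr=1e-3)

for _ in range(100):                    # outer (meta) loop
    meta_opt.zero_grad()
    for y in tasks.values():            # adapt to each task separately
        w, b = adapt_head(encoder, x, a_hat, y, n_classes=3)
        # Query loss with the adapted head; gradients flow to the encoder.
        query_loss = F.cross_entropy(F.linear(encoder(x, a_hat), w, b), y)
        query_loss.backward()
    meta_opt.step()
```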
Related papers
- ULTRA-DP: Unifying Graph Pre-training with Multi-task Graph Dual Prompt [67.8934749027315]
We propose a unified framework for graph hybrid pre-training which injects task identification and position identification into GNNs.
We also propose a novel pre-training paradigm based on a group of k-nearest neighbors.
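One plausible reading of "injecting task identification and position identification" is adding learnable prompt vectors to node features before the GNN encoder, as in the hedged sketch below; every name and shape here (task_prompt, pos_proj, anchor distances) is an assumption based only on the abstract.

```python
import torch
import torch.nn as nn

n_tasks, feat_dim, n_anchors = 2, 16, 4
task_prompt = nn.Embedding(n_tasks, feat_dim)   # task identification
pos_proj = nn.Linear(n_anchors, feat_dim)       # position identification

def prompted_features(x, task_id, anchor_dist):
    # anchor_dist: per-node distances to a few anchor nodes, a cheap
    # positional signal; the added prompts mark task and position.
    return x + task_prompt(torch.tensor(task_id)) + pos_proj(anchor_dist)

x = torch.randn(8, feat_dim)                    # 8 nodes, raw features
h = prompted_features(x, task_id=0, anchor_dist=torch.rand(8, n_anchors))
# h would then be fed to the GNN encoder in place of x.
```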
arXiv Detail & Related papers (2023-10-23T12:11:13Z)
- Task-Equivariant Graph Few-shot Learning [7.78018583713337]
It is important for Graph Neural Networks (GNNs) to be able to classify nodes with a limited number of labeled nodes, known as few-shot node classification.
We propose a new approach, the Task-Equivariant Graph few-shot learning (TEG) framework.
Our TEG framework enables the model to learn transferable task-adaptation strategies using a limited number of training meta-tasks.
arXiv Detail & Related papers (2023-05-30T05:47:28Z)
- Graph Few-shot Learning with Task-specific Structures [38.52226241144403]
Existing graph few-shot learning methods typically leverage Graph Neural Networks (GNNs).
We propose a novel framework that learns a task-specific structure for each meta-task.
In this way, we can learn node representations with the task-specific structure tailored for each meta-task.
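As a hedged illustration of what a learned task-specific structure could look like, the sketch below re-scores every node pair from its embeddings plus a learned task embedding, then propagates over the re-weighted graph; the scorer, the task embedding, and the composition rule are assumptions, not this paper's construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

emb_dim = 16
task_emb = nn.Parameter(torch.randn(emb_dim))        # one per meta-task
edge_scorer = nn.Linear(3 * emb_dim, 1)

def task_adjacency(z):
    # z: (n, emb_dim) node embeddings for one meta-task's nodes.
    n = z.size(0)
    pair = torch.cat([z.unsqueeze(1).expand(n, n, -1),
                      z.unsqueeze(0).expand(n, n, -1),
                      task_emb.expand(n, n, -1)], dim=-1)
    return torch.sigmoid(edge_scorer(pair)).squeeze(-1)  # (n, n) weights

z = torch.randn(6, emb_dim)
a_task = task_adjacency(z)
h = F.relu(a_task @ z)   # one propagation step over the task structure
```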
arXiv Detail & Related papers (2022-10-21T17:40:21Z)
- Reinforced Continual Learning for Graphs [18.64268861430314]
This paper proposes a graph continual learning strategy that combines the architecture-based and memory-based approaches.
It is numerically validated with several graph continual learning benchmark problems in both task-incremental learning and class-incremental learning settings.
arXiv Detail & Related papers (2022-09-04T07:49:59Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
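The described mechanism suggests a per-layer choice between a frozen shared layer and a trainable task-specific copy. The sketch below is one hedged way to realize that, with a relaxed (sigmoid) gate per layer and a sparsity penalty so only a few layers become task-specific; the gating form and penalty weight are assumptions, not TAPS's exact formulation.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TapsStyleLayer(nn.Module):
    # Wraps a base layer with a trainable task-specific copy; a sigmoid
    # gate softly selects between them, initialized close to "shared".
    def __init__(self, base: nn.Module):
        super().__init__()
        self.task = copy.deepcopy(base)          # task-specific candidate
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)              # shared weights frozen
        self.score = nn.Parameter(torch.tensor(-2.0))

    def forward(self, x):
        g = torch.sigmoid(self.score)
        return g * self.task(x) + (1 - g) * self.base(x)

layers = [TapsStyleLayer(nn.Linear(16, 16)) for _ in range(4)]
model = nn.Sequential(*[nn.Sequential(l, nn.ReLU()) for l in layers])
opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad],
                       lr=1e-3)

x, y = torch.randn(32, 16), torch.randn(32, 16)  # toy regression task
for _ in range(50):
    opt.zero_grad()
    task_loss = F.mse_loss(model(x), y)
    gate_cost = sum(torch.sigmoid(l.score) for l in layers)  # few task layers
    (task_loss + 0.01 * gate_cost).backward()
    opt.step()
```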
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
- The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z)
- Graph Representation Learning for Multi-Task Settings: a Meta-Learning Approach [5.629161809575013]
We propose a novel training strategy for graph representation learning, based on meta-learning.
Our method avoids the difficulties arising when learning to perform multiple tasks concurrently.
We show that the embeddings produced by a model trained with our method can be used to perform multiple tasks with comparable or, surprisingly, even higher performance than both single-task and multi-task end-to-end models.
arXiv Detail & Related papers (2022-01-10T12:58:46Z)
- Graph-Based Neural Network Models with Multiple Self-Supervised Auxiliary Tasks [79.28094304325116]
Graph Convolutional Networks are among the most promising approaches for capturing relationships among structured data points.
We propose three novel self-supervised auxiliary tasks to train graph-based neural network models in a multi-task fashion.
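Training one encoder on a main objective plus weighted self-supervised auxiliary losses can be sketched as below; the two auxiliaries shown (feature reconstruction and degree prediction) and the 0.5 weights are generic stand-ins, not the three tasks this paper proposes.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

enc = nn.Linear(5, 16)                   # stand-in for a graph encoder
cls_head = nn.Linear(16, 3)              # main task: node classification
rec_head = nn.Linear(16, 5)              # aux 1: reconstruct input features
deg_head = nn.Linear(16, 1)              # aux 2: predict node degree

x = torch.randn(8, 5)                    # toy node features
a = (torch.rand(8, 8) > 0.7).float()     # toy adjacency
y = torch.randint(0, 3, (8,))
deg = a.sum(dim=1, keepdim=True)

params = [*enc.parameters(), *cls_head.parameters(),
          *rec_head.parameters(), *deg_head.parameters()]
opt = torch.optim.Adam(params, lr=1e-3)

for _ in range(100):
    opt.zero_grad()
    z = F.relu(enc(x))                   # shared representation
    loss = (F.cross_entropy(cls_head(z), y)          # main loss
            + 0.5 * F.mse_loss(rec_head(z), x)       # auxiliary losses with
            + 0.5 * F.mse_loss(deg_head(z), deg))    # illustrative weights
    loss.backward()
    opt.step()
```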
arXiv Detail & Related papers (2020-11-14T11:09:51Z)
- HyperGrid: Efficient Multi-Task Transformers with Grid-wise Decomposable Hyper Projections [96.64246471034195]
We propose HyperGrid, a new approach for highly effective multi-task learning.
Our method helps bridge the gap between fine-tuning and multi-task learning approaches.
arXiv Detail & Related papers (2020-07-12T02:49:16Z)
- Adversarial Continual Learning [99.56738010842301]
We propose a hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features.
Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills.
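A hedged sketch of the shared/private split: a task-invariant encoder trained adversarially (via gradient reversal) against a task discriminator, plus a private module grown per task. The replay component is omitted for brevity, and all module names and loss weights are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GradReverse(torch.autograd.Function):
    # Identity forward, sign-flipped gradient: pushes the shared encoder
    # to make its features task-indistinguishable while the discriminator
    # learns to tell tasks apart.
    @staticmethod
    def forward(ctx, x):
        return x.view_as(x)
    @staticmethod
    def backward(ctx, g):
        return -g

shared = nn.Linear(5, 8)        # task-invariant features
private = nn.ModuleDict()       # grown: one private module per task
heads = nn.ModuleDict()
disc = nn.Linear(8, 2)          # guesses which task produced z_shared

def train_task(task_id, x, y, n_classes, task_idx, steps=100):
    private[task_id] = nn.Linear(5, 8)             # architecture growth
    heads[task_id] = nn.Linear(16, n_classes)
    opt = torch.optim.Adam([*shared.parameters(), *disc.parameters(),
                            *private[task_id].parameters(),
                            *heads[task_id].parameters()], lr=1e-3)
    t = torch.full((x.size(0),), task_idx)         # task labels for disc
    for _ in range(steps):
        opt.zero_grad()
        zs, zp = F.relu(shared(x)), F.relu(private[task_id](x))
        cls_loss = F.cross_entropy(heads[task_id](torch.cat([zs, zp], 1)), y)
        adv_loss = F.cross_entropy(disc(GradReverse.apply(zs)), t)
        (cls_loss + 0.1 * adv_loss).backward()
        opt.step()

train_task("t0", torch.randn(8, 5), torch.randint(0, 2, (8,)), 2, task_idx=0)
```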
arXiv Detail & Related papers (2020-03-21T02:08:17Z)