Decoupling Weighing and Selecting for Integrating Multiple Graph
Pre-training Tasks
- URL: http://arxiv.org/abs/2403.01400v1
- Date: Sun, 3 Mar 2024 05:29:49 GMT
- Title: Decoupling Weighing and Selecting for Integrating Multiple Graph
Pre-training Tasks
- Authors: Tianyu Fan, Lirong Wu, Yufei Huang, Haitao Lin, Cheng Tan, Zhangyang
Gao, Stan Z. Li
- Abstract summary: This paper proposes a novel instance-level framework for integrating multiple graph pre-training tasks, Weigh And Select (WAS).
It first adaptively learns an optimal combination of tasks for each instance from a given task pool, based on which a customized instance-level task weighing strategy is learned.
Experiments on 16 graph datasets across node-level and graph-level downstream tasks have demonstrated that WAS can achieve comparable performance to other leading counterparts.
- Score: 58.65410800008769
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent years have witnessed the great success of graph pre-training for graph
representation learning. With hundreds of graph pre-training tasks proposed,
integrating knowledge acquired from multiple pre-training tasks has become a
popular research topic. In this paper, we identify two important collaborative
processes for this topic: (1) select: how to select an optimal task combination
from a given task pool based on their compatibility, and (2) weigh: how to
weigh the selected tasks based on their importance. While a great deal of work
has focused on weighing, comparatively little effort has been devoted to
selecting. This paper proposes a novel instance-level framework for
integrating multiple graph pre-training tasks, Weigh And Select (WAS), where
the two collaborative processes, weighing and selecting, are combined by
decoupled siamese networks. Specifically, it first adaptively learns an optimal
combination of tasks for each instance from a given task pool, based on which a
customized instance-level task weighing strategy is learned. Extensive
experiments on 16 graph datasets across node-level and graph-level downstream
tasks have demonstrated that by combining a few simple but classical tasks, WAS
can achieve comparable performance to other leading counterparts. The code is
available at https://github.com/TianyuFan0504/WAS.
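
The abstract describes the method only at a high level. As a rough illustration, the sketch below implements a generic instance-level select-then-weigh combiner over representations from K frozen pre-training teachers: one network gates which tasks are kept (relaxed with Gumbel noise so selection stays differentiable), and a second, decoupled network weighs the survivors. All names and the gating form are assumptions for illustration, not taken from the WAS codebase.

```python
# Hedged sketch of instance-level "select, then weigh" over K teacher outputs.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectThenWeigh(nn.Module):
    def __init__(self, dim: int, num_tasks: int, tau: float = 1.0):
        super().__init__()
        self.selector = nn.Linear(dim, num_tasks)  # decides WHICH tasks to keep
        self.weigher = nn.Linear(dim, num_tasks)   # decides HOW MUCH each counts
        self.tau = tau

    def forward(self, h, task_reps):
        # h: [B, dim] instance embedding; task_reps: [B, K, dim] from K frozen teachers.
        sel_logits = self.selector(h)
        u = torch.rand_like(sel_logits).clamp(1e-6, 1 - 1e-6)
        gumbel = -torch.log(-torch.log(u))          # Gumbel noise for a relaxed discrete gate
        mask = torch.sigmoid((sel_logits + gumbel) / self.tau)   # soft per-task keep gate
        w = F.softmax(self.weigher(h), dim=-1) * mask
        w = w / w.sum(dim=-1, keepdim=True).clamp_min(1e-8)      # renormalize over kept tasks
        return (w.unsqueeze(-1) * task_reps).sum(dim=1)          # [B, dim] fused representation

# Toy usage: 8 instances, 4 candidate pre-training tasks, 32-dim embeddings.
model = SelectThenWeigh(dim=32, num_tasks=4)
h, task_reps = torch.randn(8, 32), torch.randn(8, 4, 32)
print(model(h, task_reps).shape)  # torch.Size([8, 32])
```

Keeping the selector and weigher as separate networks mirrors the decoupling the abstract emphasizes: selection decides compatibility, weighing decides importance among the survivors.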
Related papers
- Dual-level Mixup for Graph Few-shot Learning with Fewer Tasks [23.07584018576066]
We propose a SiMple yet effectIve approach for graph few-shot Learning with fEwer tasks, named SMILE.
We introduce a dual-level mixup strategy, encompassing both within-task and across-task mixup, to simultaneously enrich the available nodes and tasks in meta-learning.
Empirically, SMILE consistently outperforms other competitive models by a large margin across all evaluated datasets with in-domain and cross-domain settings.
arXiv Detail & Related papers (2025-02-19T23:59:05Z)
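
The SMILE summary above does not spell out the mixup form; below is a minimal sketch of the generic dual-level idea, interpolating node embeddings within one task and across two tasks with a standard Beta-sampled coefficient. Function and variable names are hypothetical.

```python
# Generic dual-level mixup sketch, not necessarily SMILE's exact design.
import torch

def mixup(x1, x2, alpha: float = 0.5):
    lam = torch.distributions.Beta(alpha, alpha).sample()  # standard mixup coefficient
    return lam * x1 + (1 - lam) * x2, lam

# task_a, task_b: [N, d] node embeddings sampled for two meta-training tasks.
task_a, task_b = torch.randn(5, 16), torch.randn(5, 16)

within, lam_w = mixup(task_a[0], task_a[1])   # within-task: enriches nodes inside one task
across, lam_a = mixup(task_a[0], task_b[0])   # across-task: synthesizes a cross-task node
print(within.shape, across.shape, float(lam_w), float(lam_a))
```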
- Coreset-Based Task Selection for Sample-Efficient Meta-Reinforcement Learning [1.2952597101899859]
We study task selection to enhance sample efficiency in model-agnostic meta-reinforcement learning (MAML-RL).
We propose a coreset-based task selection approach that selects a weighted subset of tasks based on how diverse they are in gradient space.
We numerically validate this trend across multiple RL benchmark problems, illustrating the benefits of task selection beyond the LQR baseline.
arXiv Detail & Related papers (2025-02-04T14:09:00Z)
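
The exact coreset construction is not given on this page; as one plausible reading, the sketch below selects tasks by greedy k-center in gradient space and weights each pick by the size of its cluster. The greedy rule and the weighting are assumptions.

```python
# Sketch: pick a diverse weighted subset of tasks by greedy k-center in gradient space.
import numpy as np

def kcenter_select(task_grads: np.ndarray, k: int):
    n = task_grads.shape[0]
    chosen = [int(np.linalg.norm(task_grads, axis=1).argmax())]  # start from largest gradient
    dist = np.linalg.norm(task_grads - task_grads[chosen[0]], axis=1)
    while len(chosen) < k:
        nxt = int(dist.argmax())                                 # farthest task from current picks
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(task_grads - task_grads[nxt], axis=1))
    # Weight each pick by how many tasks it is the nearest center for.
    centers = task_grads[chosen]
    nearest = np.linalg.norm(task_grads[:, None] - centers[None], axis=2).argmin(axis=1)
    weights = np.bincount(nearest, minlength=k) / n
    return chosen, weights

grads = np.random.randn(20, 64)   # 20 candidate tasks, 64-dim flattened meta-gradients
subset, w = kcenter_select(grads, k=5)
print(subset, w.round(2))
```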
- Towards Graph Foundation Models: Learning Generalities Across Graphs via Task-Trees [50.78679002846741]
We introduce a novel approach for learning cross-task generalities in graphs.
We propose task-trees as basic learning instances to align task spaces on graphs.
Our findings indicate that when a graph neural network is pretrained on diverse task-trees, it acquires transferable knowledge.
arXiv Detail & Related papers (2024-12-21T02:07:43Z)
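
FastGAS's concrete algorithm is not described on this page; the sketch below shows a generic graph-based stand-in: build a kNN similarity graph over instance embeddings, then greedily annotate the instances that cover the most not-yet-covered neighbors.

```python
# Generic graph-based instance selection heuristic, not FastGAS itself.
import numpy as np

def greedy_graph_select(emb: np.ndarray, k_neighbors: int, budget: int):
    sims = emb @ emb.T
    np.fill_diagonal(sims, -np.inf)                       # no self-neighbors
    nbrs = np.argsort(-sims, axis=1)[:, :k_neighbors]     # kNN adjacency lists
    covered, picked = set(), []
    for _ in range(budget):
        best, best_gain = -1, -1
        for i in range(len(emb)):
            if i in picked:
                continue
            reach = set(nbrs[i].tolist()) | {i}
            gain = len(reach - covered)                   # newly covered instances
            if gain > best_gain:
                best, best_gain = i, gain
        picked.append(best)
        covered |= set(nbrs[best].tolist()) | {best}
    return picked

emb = np.random.randn(50, 32)
emb /= np.linalg.norm(emb, axis=1, keepdims=True)         # cosine similarity
print(greedy_graph_select(emb, k_neighbors=5, budget=4))
```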
- FastGAS: Fast Graph-based Annotation Selection for In-Context Learning [53.17606395275021]
In-context learning (ICL) empowers large language models (LLMs) to tackle new tasks by using a series of training instances as prompts.
Existing methods have proposed to select a subset of unlabeled examples for annotation.
We propose a graph-based selection method, FastGAS, designed to efficiently identify high-quality instances.
arXiv Detail & Related papers (2024-06-06T04:05:54Z)
- Relational Multi-Task Learning: Modeling Relations between Data and Tasks [84.41620970886483]
A key assumption in multi-task learning is that, at inference time, the model only has access to a given data point but not to the data point's labels from other tasks.
Here we introduce a novel relational multi-task learning setting where we leverage data point labels from auxiliary tasks to make more accurate predictions.
We develop MetaLink, where our key innovation is to build a knowledge graph that connects data points and tasks.
arXiv Detail & Related papers (2023-03-14T07:15:41Z)
- FAITH: Few-Shot Graph Classification with Hierarchical Task Graphs [39.576675425158754]
Few-shot graph classification aims at predicting classes for graphs, given limited labeled graphs for each class.
We propose a novel few-shot learning framework FAITH that captures task correlations via constructing a hierarchical task graph.
Experiments on four prevalent few-shot graph classification datasets demonstrate the superiority of FAITH over other state-of-the-art baselines.
arXiv Detail & Related papers (2022-05-05T04:28:32Z)
- Arch-Graph: Acyclic Architecture Relation Predictor for Task-Transferable Neural Architecture Search [96.31315520244605]
Arch-Graph is a transferable NAS method that predicts task-specific optimal architectures.
We show Arch-Graph's transferability and high sample efficiency across numerous tasks.
It is able to find the top 0.16% and 0.29% of architectures on average across two search spaces under a budget of only 50 models.
arXiv Detail & Related papers (2022-04-12T16:46:06Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
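
TAPS is summarized above only at a high level; one way to realize "adaptively modifying a small, task-specific subset of layers" is a learnable gate per layer that switches a task-specific weight delta on or off, with a sparsity penalty keeping most layers shared. The parameterization below is an assumption, not the paper's.

```python
# Sketch: frozen shared weight + gated task-specific delta per layer.
import torch
import torch.nn as nn

class GatedLinear(nn.Module):
    def __init__(self, d_in, d_out):
        super().__init__()
        self.shared = nn.Linear(d_in, d_out)
        self.shared.requires_grad_(False)                 # pretrained base, kept frozen
        self.delta = nn.Parameter(torch.zeros(d_out, d_in))
        self.gate_logit = nn.Parameter(torch.tensor(-2.0))  # gate starts mostly "off"

    def forward(self, x):
        g = torch.sigmoid(self.gate_logit)
        w = self.shared.weight + g * self.delta           # specialize only if the gate opens
        return nn.functional.linear(x, w, self.shared.bias)

layers = nn.Sequential(GatedLinear(16, 16), nn.ReLU(), GatedLinear(16, 4))
out = layers(torch.randn(8, 16))
# Sparsity term added to the task loss so that few gates open:
gate_penalty = sum(torch.sigmoid(m.gate_logit) for m in layers if isinstance(m, GatedLinear))
print(out.shape, float(gate_penalty))
```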
- Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation [24.488427641442694]
We propose a novel conditional neural process-based approach for few-shot text classification.
Our key idea is to represent each task using gradient information from a base model.
Our approach outperforms traditional fine-tuning, sequential transfer learning, and state-of-the-art meta learning approaches.
arXiv Detail & Related papers (2022-01-27T15:29:30Z)
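
A minimal sketch of the gradient-as-task-representation idea from the entry above: run a task's support set through a base model and use the induced gradient as the task embedding. Which parameters to read the gradient from (here, the final layer only) is an arbitrary choice, not the paper's specification.

```python
# Sketch: a task's support-set gradient serves as its representation.
import torch
import torch.nn as nn

base = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))

def task_embedding(support_x, support_y):
    loss = nn.functional.cross_entropy(base(support_x), support_y)
    (grad,) = torch.autograd.grad(loss, base[-1].weight)  # gradient of last layer only
    return grad.flatten()                                 # flattened gradient as task signature

x, y = torch.randn(10, 16), torch.randint(0, 3, (10,))
emb = task_embedding(x, y)
print(emb.shape)  # torch.Size([96])  (3 * 32)
```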
- Learned Weight Sharing for Deep Multi-Task Learning by Natural Evolution Strategy and Stochastic Gradient Descent [0.0]
We propose an algorithm to learn the assignment between a shared set of weights and task-specific layers.
Learning takes place via a combination of natural evolution strategy and gradient descent.
The end result is a set of task-specific networks that share weights but allow independent inference.
arXiv Detail & Related papers (2020-03-23T10:21:44Z)
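
As a rough sketch of the recipe summarized above: a categorical distribution assigns one of S shared modules to each of L layer slots; sampled assignments are trained with SGD while an NES-style update nudges the assignment logits toward lower-loss configurations. All sizes and the update rule are illustrative.

```python
# Sketch: learn a weight-sharing assignment by NES, train the weights by SGD.
import torch
import torch.nn as nn

S, L, d = 3, 2, 8                                  # shared modules, layer slots, width
shared = nn.ModuleList(nn.Linear(d, d) for _ in range(S))
logits = torch.zeros(L, S)                         # assignment distribution, NES-updated
opt = torch.optim.SGD(shared.parameters(), lr=1e-2)

def run(x, assign):
    for slot in range(L):                          # each slot uses its assigned shared module
        x = torch.relu(shared[int(assign[slot])](x))
    return x

x, target = torch.randn(16, d), torch.randn(16, d)
for step in range(20):
    probs = torch.softmax(logits, dim=-1)
    samples, losses = [], []
    for _ in range(4):                             # a few assignment samples per step
        assign = torch.multinomial(probs, 1).squeeze(-1)
        loss = nn.functional.mse_loss(run(x, assign), target)
        opt.zero_grad(); loss.backward(); opt.step()   # SGD trains the sampled network
        samples.append(assign); losses.append(loss.item())
    adv = torch.tensor(losses) - sum(losses) / len(losses)  # centered scores
    for a, s in zip(samples, adv):                 # NES-style update: penalize
        for slot in range(L):                      # above-average losses
            logits[slot, int(a[slot])] -= 0.1 * float(s)
print(torch.softmax(logits, dim=-1))               # learned assignment preferences
```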
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.