Decoupling Weighing and Selecting for Integrating Multiple Graph
Pre-training Tasks
- URL: http://arxiv.org/abs/2403.01400v1
- Date: Sun, 3 Mar 2024 05:29:49 GMT
- Title: Decoupling Weighing and Selecting for Integrating Multiple Graph
Pre-training Tasks
- Authors: Tianyu Fan, Lirong Wu, Yufei Huang, Haitao Lin, Cheng Tan, Zhangyang
Gao, Stan Z. Li
- Abstract summary: This paper proposes a novel instance-level framework for integrating multiple graph pre-training tasks, Weigh And Select (WAS).
It first adaptively learns an optimal combination of tasks for each instance from a given task pool, based on which a customized instance-level task weighing strategy is learned.
Experiments on 16 graph datasets across node-level and graph-level downstream tasks have demonstrated that WAS can achieve comparable performance to other leading counterparts.
- Score: 58.65410800008769
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent years have witnessed the great success of graph pre-training for graph
representation learning. With hundreds of graph pre-training tasks proposed,
integrating knowledge acquired from multiple pre-training tasks has become a
popular research topic. In this paper, we identify two important collaborative
processes for this topic: (1) select: how to select an optimal task combination
from a given task pool based on their compatibility, and (2) weigh: how to
weigh the selected tasks based on their importance. While a great deal of work
has focused on weighing, comparatively little effort has been devoted to
selecting. This paper proposes a novel instance-level framework for
integrating multiple graph pre-training tasks, Weigh And Select (WAS), where
the two collaborative processes, weighing and selecting, are combined by
decoupled siamese networks. Specifically, it first adaptively learns an optimal
combination of tasks for each instance from a given task pool, based on which a
customized instance-level task weighing strategy is learned. Extensive
experiments on 16 graph datasets across node-level and graph-level downstream
tasks have demonstrated that by combining a few simple but classical tasks, WAS
can achieve comparable performance to other leading counterparts. The code is
available at https://github.com/TianyuFan0504/WAS.
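For intuition, here is a minimal sketch of the instance-level select-then-weigh idea described above. The module, the Gumbel-softmax selection, and the softmax weighing are illustrative assumptions, not the authors' decoupled siamese architecture; see the linked repository for the actual implementation.

```python
# Hypothetical sketch: per-instance "select then weigh" over K pre-training task
# losses. All names and the selection/weighing mechanisms are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SelectThenWeigh(nn.Module):
    def __init__(self, emb_dim: int, num_tasks: int, tau: float = 1.0):
        super().__init__()
        # Two decoupled heads: one decides WHICH tasks to keep for an instance,
        # the other decides HOW MUCH each kept task should count.
        self.selector = nn.Linear(emb_dim, num_tasks)
        self.weigher = nn.Linear(emb_dim, num_tasks)
        self.tau = tau

    def forward(self, h: torch.Tensor, task_losses: torch.Tensor) -> torch.Tensor:
        # h: [B, D] instance embeddings; task_losses: [B, K] per-instance task losses.
        sel_logits = self.selector(h)                                   # [B, K]
        keep_drop = torch.stack([sel_logits, -sel_logits], dim=-1)      # [B, K, 2]
        mask = F.gumbel_softmax(keep_drop, tau=self.tau, hard=True)[..., 0]  # ~binary [B, K]
        weights = torch.softmax(self.weigher(h), dim=-1) * mask         # weigh only kept tasks
        weights = weights / (weights.sum(dim=-1, keepdim=True) + 1e-8)
        return (weights * task_losses).sum(dim=-1).mean()               # combined pre-training loss

# Example: 8 instances, 32-dim embeddings, a pool of 5 pre-training tasks.
model = SelectThenWeigh(emb_dim=32, num_tasks=5)
loss = model(torch.randn(8, 32), torch.rand(8, 5))
```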
Related papers
- FastGAS: Fast Graph-based Annotation Selection for In-Context Learning [53.17606395275021]
In-context learning (ICL) empowers large language models (LLMs) to tackle new tasks by using a series of training instances as prompts.
Existing methods have proposed to select a subset of unlabeled examples for annotation.
We propose a graph-based selection method, FastGAS, designed to efficiently identify high-quality instances.
arXiv Detail & Related papers (2024-06-06T04:05:54Z)
- GistScore: Learning Better Representations for In-Context Example Selection with Gist Bottlenecks [3.9638110494107095]
In-context Learning (ICL) is the ability of Large Language Models (LLMs) to perform new tasks when conditioned on prompts.
We propose Example Gisting, a novel approach for training example encoders through supervised fine-tuning.
We show that our fine-tuned models get state-of-the-art ICL performance with over 20% absolute gain over off-the-shelf retrievers.
arXiv Detail & Related papers (2023-11-16T06:28:05Z)
- Boosting Multitask Learning on Graphs through Higher-Order Task Affinities [17.70434437597516]
Predicting node labels on a given graph is a widely studied problem with many applications, including community detection and molecular graph prediction.
This paper considers predicting multiple node labeling functions on graphs simultaneously and revisits this problem from a multitask learning perspective.
We develop an algorithm to cluster tasks into groups based on a higher-order task affinity measure.
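A rough sketch of the grouping step is given below, assuming a precomputed task-affinity matrix; how the higher-order affinities are estimated is the paper's contribution and is abstracted away here, and the spectral-clustering choice is an assumption.

```python
# Illustrative task grouping from a [T, T] affinity matrix (placeholder input).
import numpy as np
from sklearn.cluster import SpectralClustering

def group_tasks(affinity: np.ndarray, num_groups: int) -> list[list[int]]:
    """affinity: symmetric, non-negative; higher = tasks help each other more."""
    labels = SpectralClustering(
        n_clusters=num_groups, affinity="precomputed", random_state=0
    ).fit_predict(affinity)
    return [list(np.where(labels == g)[0]) for g in range(num_groups)]
```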
arXiv Detail & Related papers (2023-06-24T15:53:38Z)
- Relational Multi-Task Learning: Modeling Relations between Data and Tasks [84.41620970886483]
A key assumption in multi-task learning is that at inference time the model only has access to a given data point but not to the data point's labels from other tasks.
Here we introduce a novel relational multi-task learning setting where we leverage data point labels from auxiliary tasks to make more accurate predictions.
We develop MetaLink, where our key innovation is to build a knowledge graph that connects data points and tasks.
arXiv Detail & Related papers (2023-03-14T07:15:41Z)
- Association Graph Learning for Multi-Task Classification with Category Shifts [68.58829338426712]
We focus on multi-task classification, where related classification tasks share the same label space and are learned simultaneously.
We learn an association graph to transfer knowledge among tasks for missing classes.
Our method consistently performs better than representative baselines.
arXiv Detail & Related papers (2022-10-10T12:37:41Z)
- FAITH: Few-Shot Graph Classification with Hierarchical Task Graphs [39.576675425158754]
Few-shot graph classification aims at predicting classes for graphs, given limited labeled graphs for each class.
We propose a novel few-shot learning framework FAITH that captures task correlations via constructing a hierarchical task graph.
Experiments on four prevalent few-shot graph classification datasets demonstrate the superiority of FAITH over other state-of-the-art baselines.
arXiv Detail & Related papers (2022-05-05T04:28:32Z)
- Arch-Graph: Acyclic Architecture Relation Predictor for Task-Transferable Neural Architecture Search [96.31315520244605]
Arch-Graph is a transferable NAS method that predicts task-specific optimal architectures.
We show Arch-Graph's transferability and high sample efficiency across numerous tasks.
It is able to find top 0.16% and 0.29% architectures on average on two search spaces under the budget of only 50 models.
arXiv Detail & Related papers (2022-04-12T16:46:06Z)
- Task Adaptive Parameter Sharing for Multi-Task Learning [114.80350786535952]
Task Adaptive Parameter Sharing (TAPS) is a method for tuning a base model to a new task by adaptively modifying a small, task-specific subset of layers.
Compared to other methods, TAPS retains high accuracy on downstream tasks while introducing few task-specific parameters.
We evaluate our method on a suite of fine-tuning tasks and architectures (ResNet, DenseNet, ViT) and show that it achieves state-of-the-art performance while being simple to implement.
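A hedged sketch of the core idea follows: each layer carries a learned score that gates a task-specific delta on top of frozen shared weights. The additive delta, the 0.5 threshold, and the straight-through trick are assumptions for illustration, not the paper's exact formulation.

```python
# Frozen shared layer plus a task-specific delta, enabled by a learned score.
import torch
import torch.nn as nn

class GatedTaskLayer(nn.Module):
    def __init__(self, shared: nn.Linear):
        super().__init__()
        self.shared = shared.requires_grad_(False)       # base weights stay frozen
        self.delta = nn.Linear(shared.in_features, shared.out_features)
        nn.init.zeros_(self.delta.weight)
        nn.init.zeros_(self.delta.bias)
        self.score = nn.Parameter(torch.zeros(()))       # learned "adapt this layer?" score

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        p = torch.sigmoid(self.score)
        gate = (p > 0.5).float() + p - p.detach()        # hard gate, straight-through gradients
        return self.shared(x) + gate * self.delta(x)
```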
arXiv Detail & Related papers (2022-03-30T23:16:07Z)
- Grad2Task: Improved Few-shot Text Classification Using Gradients for Task Representation [24.488427641442694]
We propose a novel conditional neural process-based approach for few-shot text classification.
Our key idea is to represent each task using gradient information from a base model.
Our approach outperforms traditional fine-tuning, sequential transfer learning, and state-of-the-art meta learning approaches.
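A toy illustration of the gradient-based task representation: the task is characterised by the gradient of a base model's loss on the task's support set, with the weights themselves left unchanged. Flattening and concatenating the gradients into one vector is an assumption made for brevity.

```python
import torch

def task_embedding(base_model: torch.nn.Module, loss_fn, support_x, support_y) -> torch.Tensor:
    """Characterise a task by the base model's gradient on its support set."""
    loss = loss_fn(base_model(support_x), support_y)
    grads = torch.autograd.grad(loss, [p for p in base_model.parameters() if p.requires_grad])
    return torch.cat([g.flatten() for g in grads]).detach()   # one vector per task; weights untouched
```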
arXiv Detail & Related papers (2022-01-27T15:29:30Z)
- Learned Weight Sharing for Deep Multi-Task Learning by Natural Evolution Strategy and Stochastic Gradient Descent [0.0]
We propose an algorithm to learn the assignment between a shared set of weights and task-specific layers.
Learning takes place via a combination of natural evolution strategy and gradient descent.
The end result is a set of task-specific networks that share weights but allow independent inference.
arXiv Detail & Related papers (2020-03-23T10:21:44Z)
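To make the alternating scheme concrete, here is a self-contained toy sketch: NES-style updates adjust a categorical distribution over layer-to-shared-weight assignments, while the inner evaluation (which in the real algorithm trains the shared weights with SGD) is replaced by a placeholder so the example runs end to end. All names, shapes, and the factorised per-layer distribution are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
num_layers, num_shared = 4, 2                  # each task layer picks one shared weight block
logits = np.zeros((num_layers, num_shared))    # NES search distribution (one categorical per layer)

def softmax(x: np.ndarray) -> np.ndarray:
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def evaluate(assignment: list[int]) -> float:
    # Placeholder for the expensive inner step: in the real algorithm the shared
    # weights are trained with SGD under this assignment and a validation loss is
    # returned. A fixed "good" assignment stands in so the sketch runs end to end.
    target = [0, 0, 1, 1]
    return float(sum(a != t for a, t in zip(assignment, target)))

for step in range(200):
    probs = softmax(logits)
    sample = [int(rng.choice(num_shared, p=probs[l])) for l in range(num_layers)]
    reward = -evaluate(sample)                 # lower loss -> higher reward
    for l, k in enumerate(sample):             # REINFORCE-style NES update of the assignment
        grad = -probs[l]
        grad[k] += 1.0
        logits[l] += 0.1 * reward * grad

print("learned assignment:", softmax(logits).argmax(axis=-1))
```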