Relational Multi-Task Learning: Modeling Relations between Data and
Tasks
- URL: http://arxiv.org/abs/2303.07666v1
- Date: Tue, 14 Mar 2023 07:15:41 GMT
- Title: Relational Multi-Task Learning: Modeling Relations between Data and
Tasks
- Authors: Kaidi Cao, Jiaxuan You, Jure Leskovec
- Abstract summary: A key assumption in multi-task learning is that at inference time the model only has access to a given data point, not to the data point's labels from other tasks.
Here we introduce a novel relational multi-task learning setting where we leverage data point labels from auxiliary tasks to make more accurate predictions.
We develop MetaLink, where our key innovation is to build a knowledge graph that connects data points and tasks.
- Score: 84.41620970886483
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A key assumption in multi-task learning is that at inference time
the multi-task model only has access to a given data point, not to the data
point's labels from other tasks. This presents an opportunity to extend
multi-task learning to utilize a data point's labels from auxiliary tasks and
thereby improve performance on the new task. Here we introduce a novel
relational multi-task learning setting where we leverage data point labels from
auxiliary tasks to make more accurate predictions on the new task. We develop
MetaLink, where our key innovation is to build a knowledge graph that connects
data points and tasks and thus allows us to leverage labels from auxiliary
tasks. The knowledge graph consists of two types of nodes: (1) data nodes,
where node features are data embeddings computed by the neural network, and (2)
task nodes, with the last layer's weights for each task as node features. The
edges in this knowledge graph capture data-task relationships, and the edge
label captures the label of a data point on a particular task. Under MetaLink,
we reformulate the new task as a link label prediction problem between a data
node and a task node. The MetaLink framework provides flexibility to model
knowledge transfer from auxiliary task labels to the task of interest. We
evaluate MetaLink on 6 benchmark datasets in both biochemical and vision
domains. Experiments demonstrate that MetaLink can successfully utilize the
relations among different tasks, outperforming the state-of-the-art methods
under the proposed relational multi-task learning setting, with up to 27%
improvement in ROC AUC.
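To make the data-task graph concrete, here is a minimal sketch of the idea in PyTorch. All names (MetaLinkSketch, edge_msg, scorer, and so on) are illustrative assumptions rather than the authors' implementation: data nodes carry backbone embeddings, task nodes carry the per-task output-layer weights, observed auxiliary labels enter as messages along data-task edges, and the new task is answered by scoring the link between the data node and the target task node.

```python
import torch
import torch.nn as nn

class MetaLinkSketch(nn.Module):
    """Hedged sketch of the data-task knowledge graph idea; not the
    authors' code."""

    def __init__(self, in_dim: int, hid_dim: int, num_tasks: int):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.ReLU())
        # One weight vector per task; it doubles as the task-node feature.
        self.task_heads = nn.Parameter(torch.randn(num_tasks, hid_dim))
        # Message computed from an observed (data, aux-task, label) edge.
        self.edge_msg = nn.Linear(hid_dim + 1, hid_dim)
        self.scorer = nn.Linear(hid_dim, 1)

    def forward(self, x, aux_task_ids, aux_labels, target_task_id):
        h = self.backbone(x)  # data-node feature from the data embedding
        # Fold each observed auxiliary edge label into the data node.
        for t, y in zip(aux_task_ids, aux_labels):
            edge = torch.cat([self.task_heads[t], torch.tensor([float(y)])])
            h = h + self.edge_msg(edge)
        # The new task becomes link-label prediction between the data
        # node and the target task node.
        return torch.sigmoid(self.scorer(h * self.task_heads[target_task_id]))

model = MetaLinkSketch(in_dim=16, hid_dim=32, num_tasks=5)
prob = model(torch.randn(16), aux_task_ids=[0, 2], aux_labels=[1, 0],
             target_task_id=4)
```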
Related papers
- Decoupling Weighing and Selecting for Integrating Multiple Graph
Pre-training Tasks [58.65410800008769]
This paper proposes Weigh And Select (WAS), a novel instance-level framework for integrating multiple graph pre-training tasks.
It first adaptively learns an optimal combination of tasks for each instance from a given task pool, based on which a customized instance-level task weighing strategy is learned.
Experiments on 16 graph datasets across node-level and graph-level downstream tasks have demonstrated that WAS can achieve comparable performance to other leading counterparts.
arXiv Detail & Related papers (2024-03-03T05:29:49Z)
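The WAS mechanism itself is not spelled out in this summary; the following is only a generic sketch of instance-level task weighting under that reading, with all names invented for illustration: a gating head scores each task in the pool for every instance, and the per-task losses are mixed with the resulting weights.

```python
import torch
import torch.nn as nn

class InstanceTaskWeighting(nn.Module):
    """Illustrative per-instance task weighting (not the exact WAS
    algorithm): score every task in the pool for each instance and
    combine the per-task losses with those weights."""

    def __init__(self, emb_dim: int, num_tasks: int):
        super().__init__()
        self.gate = nn.Linear(emb_dim, num_tasks)

    def forward(self, instance_emb, per_task_losses):
        # instance_emb: [batch, emb_dim]; per_task_losses: [batch, num_tasks]
        weights = torch.softmax(self.gate(instance_emb), dim=-1)
        return (weights * per_task_losses).sum(dim=-1).mean()
```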
- Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks whose annotations overlap little or not at all.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
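The summary above does not specify the matching objective; a common way to realize knowledge exchange via distribution matching is a temperature-softened KL term between two heads' predictive distributions on shared inputs. The sketch below is that generic variant, an assumption rather than the paper's exact loss.

```python
import torch.nn.functional as F

def distribution_matching_loss(logits_a, logits_b, temperature=2.0):
    """Illustrative matching term (an assumption, not the paper's exact
    objective): pull head A's predictive distribution toward head B's
    on the same inputs using a softened KL divergence."""
    target = F.softmax(logits_b / temperature, dim=-1).detach()
    log_pred = F.log_softmax(logits_a / temperature, dim=-1)
    # batchmean averages the per-example KL over the batch.
    return F.kl_div(log_pred, target, reduction="batchmean") * temperature ** 2
```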
- Graph Few-shot Learning with Task-specific Structures [38.52226241144403]
Existing graph few-shot learning methods typically leverage Graph Neural Networks (GNNs).
We propose a novel framework that learns a task-specific structure for each meta-task.
In this way, we can learn node representations with the task-specific structure tailored for each meta-task.
arXiv Detail & Related papers (2022-10-21T17:40:21Z)
- Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z)
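In its simplest form, a task prefix is just a marker prepended to each input so one shared model can condition on the task. The prefix strings in this sketch are made up for illustration and are not the ones used in the paper.

```python
def add_task_prefix(task_name: str, text: str) -> str:
    """Prepend a task marker so a shared model can tell tasks apart
    (the prefix format here is illustrative, not the paper's)."""
    return f"[{task_name.upper()}] {text}"

batch = [("nli", "A man is eating. Someone is eating."),
         ("qa", "Who wrote Hamlet?")]
prefixed = [add_task_prefix(task, text) for task, text in batch]
# ['[NLI] A man is eating. Someone is eating.', '[QA] Who wrote Hamlet?']
```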
- Association Graph Learning for Multi-Task Classification with Category
Shifts [68.58829338426712]
We focus on multi-task classification, where related classification tasks share the same label space and are learned simultaneously.
We learn an association graph to transfer knowledge among tasks for missing classes.
Our method consistently performs better than representative baselines.
arXiv Detail & Related papers (2022-10-10T12:37:41Z)
- Learning Multiple Dense Prediction Tasks from Partially Annotated Data [41.821234589075445]
We study joint learning of multiple dense prediction tasks on partially annotated data, a setting we call multi-task partially-supervised learning.
We propose a multi-task training procedure that successfully leverages task relations to supervise its multi-task learning when data is partially annotated.
We rigorously demonstrate that our proposed method effectively exploits the images with unlabelled tasks and outperforms existing semi-supervised learning approaches and related methods on three standard benchmarks.
arXiv Detail & Related papers (2021-11-29T19:03:12Z)
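A standard ingredient of multi-task partially-supervised learning is to mask out the losses of tasks that are unannotated for a given image; the sketch below shows only that baseline component (the paper's method additionally exploits task relations to supervise the unlabeled tasks).

```python
import torch

def masked_multitask_loss(per_task_losses, label_mask):
    """Average the per-task losses only over tasks that are actually
    annotated for each example. Both tensors are [batch, num_tasks];
    label_mask is 1 where a label exists and 0 otherwise."""
    masked = per_task_losses * label_mask
    # Guard against batches where no task labels are present at all.
    return masked.sum() / label_mask.sum().clamp(min=1.0)
```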
- Low Resource Multi-Task Sequence Tagging -- Revisiting Dynamic
Conditional Random Fields [67.51177964010967]
We compare different models for low-resource multi-task sequence tagging that leverage dependencies between label sequences for different tasks.
We find that explicit modeling of inter-dependencies between task predictions outperforms single-task as well as standard multi-task models.
arXiv Detail & Related papers (2020-05-01T07:11:34Z)
- Deep Multi-Task Augmented Feature Learning via Hierarchical Graph Neural
Network [4.121467410954028]
We propose a Hierarchical Graph Neural Network to learn augmented features for deep multi-task learning.
Experiments on real-world datasets show significant performance improvements when using this strategy.
arXiv Detail & Related papers (2020-02-12T06:02:20Z)