Task Grouping for Automated Multi-Task Machine Learning via Task
Affinity Prediction
- URL: http://arxiv.org/abs/2310.16241v1
- Date: Tue, 24 Oct 2023 23:29:46 GMT
- Title: Task Grouping for Automated Multi-Task Machine Learning via Task
Affinity Prediction
- Authors: Afiya Ayman, Ayan Mukhopadhyay, Aron Laszka
- Abstract summary: Multi-task learning (MTL) models can attain significantly higher accuracy than single-task learning (STL) models.
In this paper, we propose a novel automated approach for task grouping.
We identify inherent task features and STL characteristics that can help us to predict whether a group of tasks should be learned together using MTL or if they should be learned independently using STL.
- Score: 7.975047833725489
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: When a number of similar tasks have to be learned simultaneously, multi-task
learning (MTL) models can attain significantly higher accuracy than single-task
learning (STL) models. However, the advantage of MTL depends on various
factors, such as the similarity of the tasks, the sizes of the datasets, and so
on; in fact, some tasks might not benefit from MTL and may even incur a loss of
accuracy compared to STL. Hence, the question arises: which tasks should be
learned together? Domain experts can attempt to group tasks together following
intuition, experience, and best practices, but manual grouping can be
labor-intensive and far from optimal. In this paper, we propose a novel
automated approach for task grouping. First, we study the affinity of tasks for
MTL using four benchmark datasets that have been used extensively in the MTL
literature, focusing on neural network-based MTL models. We identify inherent
task features and STL characteristics that can help us to predict whether a
group of tasks should be learned together using MTL or if they should be
learned independently using STL. Building on this predictor, we introduce a
randomized search algorithm, which employs the predictor to minimize the number
of MTL trainings performed during the search for task groups. We demonstrate on
the four benchmark datasets that our predictor-driven search approach can find
better task groupings than existing baseline approaches.
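The abstract describes two components: a predictor that estimates whether a set of tasks is likely to benefit from joint training, and a randomized search that consults this predictor so that expensive MTL trainings are only run for promising candidate groupings. The Python sketch below illustrates that general idea only; it is not the authors' algorithm, and the names predict_mtl_gain, evaluate_grouping, the acceptance threshold, and the random-partition sampler are assumptions made for the example.

```python
import random
from typing import Callable, List, Sequence, Tuple

Task = str
Grouping = List[List[Task]]


def random_partition(tasks: Sequence[Task], rng: random.Random) -> Grouping:
    """Sample a random partition of the tasks into disjoint groups."""
    shuffled = list(tasks)
    rng.shuffle(shuffled)
    grouping, i = [], 0
    while i < len(shuffled):
        size = rng.randint(1, len(shuffled) - i)
        grouping.append(shuffled[i:i + size])
        i += size
    return grouping


def predictor_driven_search(
    tasks: Sequence[Task],
    predict_mtl_gain: Callable[[List[Task]], float],  # cheap learned predictor (hypothetical)
    evaluate_grouping: Callable[[Grouping], float],   # expensive: trains the models, returns accuracy
    budget: int = 50,
    threshold: float = 0.5,
    seed: int = 0,
) -> Tuple[Grouping, float]:
    """Randomized search over task groupings, filtered by a task-affinity predictor."""
    rng = random.Random(seed)
    best_grouping, best_accuracy = None, float("-inf")

    for _ in range(budget):
        grouping = random_partition(tasks, rng)

        # Cheap filter: skip candidates containing a multi-task group that the
        # predictor does not expect to benefit from joint training, so the
        # costly MTL trainings are spent only on promising groupings.
        if any(len(g) > 1 and predict_mtl_gain(g) < threshold for g in grouping):
            continue

        accuracy = evaluate_grouping(grouping)  # actual MTL/STL training and evaluation
        if accuracy > best_accuracy:
            best_grouping, best_accuracy = grouping, accuracy

    return best_grouping, best_accuracy
```

In this sketch, predict_mtl_gain stands in for a predictor fit on task features and STL characteristics, and evaluate_grouping would train one model per group and return held-out accuracy; both are placeholders rather than interfaces from the paper.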
Related papers
- SELF-GUIDE: Better Task-Specific Instruction Following via Self-Synthetic Finetuning [70.21358720599821]
Large language models (LLMs) hold the promise of solving diverse tasks when provided with appropriate natural language prompts.
We propose SELF-GUIDE, a multi-stage mechanism in which we synthesize task-specific input-output pairs from the student LLM.
We report an absolute improvement of approximately 15% for classification tasks and 18% for generation tasks in the benchmark's metrics.
arXiv Detail & Related papers (2024-07-16T04:41:58Z)
- Multi-task learning via robust regularized clustering with non-convex group penalties [0.0]
Multi-task learning (MTL) aims to improve estimation performance by sharing common information among related tasks.
Existing MTL methods based on this assumption often ignore outlier tasks.
We propose a novel MTL method called multi-task learning via robust regularized clustering (MTLRRC).
arXiv Detail & Related papers (2024-04-04T07:09:43Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks with little or non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Multitask Learning Can Improve Worst-Group Outcomes [76.92646345152788]
Multitask learning (MTL) is a widely used technique.
We propose to modify standard MTL by regularizing the joint multitask representation space.
We find that our regularized MTL approach consistently outperforms JTT on both average and worst-group outcomes.
arXiv Detail & Related papers (2023-12-05T21:38:24Z)
- When Multi-Task Learning Meets Partial Supervision: A Computer Vision Review [7.776434991976473]
Multi-Task Learning (MTL) aims to learn multiple tasks simultaneously while exploiting their mutual relationships.
This review focuses on how MTL could be utilised under different partial supervision settings to address these challenges.
arXiv Detail & Related papers (2023-07-25T20:08:41Z)
- When Does Aggregating Multiple Skills with Multi-Task Learning Work? A Case Study in Financial NLP [22.6364117325639]
Multi-task learning (MTL) aims at achieving a better model by leveraging data and knowledge from multiple tasks.
Our findings suggest that the key to MTL success lies in skill diversity, relatedness between tasks, and choice of aggregation size and shared capacity.
arXiv Detail & Related papers (2023-05-23T12:37:14Z)
- "It's a Match!" -- A Benchmark of Task Affinity Scores for Joint Learning [74.14961250042629]
While the promises of Multi-Task Learning (MTL) are attractive, characterizing the conditions of its success is still an open problem in Deep Learning.
Estimating task affinity for joint learning is a key endeavor.
Recent work suggests that the training conditions themselves have a significant impact on the outcomes of MTL.
Yet, the literature is lacking a benchmark to assess the effectiveness of task affinity estimation techniques.
arXiv Detail & Related papers (2023-01-07T15:16:35Z)
- When to Use Multi-Task Learning vs Intermediate Fine-Tuning for Pre-Trained Encoder Transfer Learning [15.39115079099451]
Transfer learning (TL) in natural language processing has seen a surge of interest in recent years.
Three main strategies have emerged for making use of multiple supervised datasets during fine-tuning.
We compare all three TL methods in a comprehensive analysis on the GLUE dataset suite.
arXiv Detail & Related papers (2022-05-17T06:48:45Z)
- Semi-supervised Multi-task Learning for Semantics and Depth [88.77716991603252]
Multi-Task Learning (MTL) aims to enhance the model generalization by sharing representations between related tasks for better performance.
We propose a semi-supervised MTL method to leverage the available supervisory signals from different datasets.
We present a domain-aware discriminator structure with various alignment formulations to mitigate the domain discrepancy issue among datasets.
arXiv Detail & Related papers (2021-10-14T07:43:39Z)
- Latent Group Structured Multi-task Learning [2.827177139912107]
In multi-task learning (MTL), we improve the performance of key machine learning algorithms by training various tasks jointly.
We present our group structured latent-space multi-task learning model, which encourages group structured tasks defined by prior information.
Experiments are conducted on both synthetic and real-world datasets, showing competitive performance over single-task learning.
arXiv Detail & Related papers (2020-11-24T05:38:58Z)
- Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results w.r.t. performance, computations and/or memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z)