Learning Multi-Tasks with Inconsistent Labels by using Auxiliary Big Task
- URL: http://arxiv.org/abs/2201.02305v1
- Date: Fri, 7 Jan 2022 02:46:47 GMT
- Title: Learning Multi-Tasks with Inconsistent Labels by using Auxiliary Big Task
- Authors: Quan Feng, Songcan Chen
- Abstract summary: Multi-task learning aims to improve model performance by transferring and exploiting knowledge shared among tasks.
We propose a framework that learns such tasks by jointly leveraging both the abundant information from a learnt auxiliary big task, whose sufficiently many classes cover those of all the individual tasks, and the information shared among the partially-overlapped tasks.
Our experimental results demonstrate its effectiveness in comparison with state-of-the-art approaches.
- Score: 24.618094251341958
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-task learning (MTL) aims to improve model performance by
transferring and exploiting knowledge shared among tasks. Existing MTL works
mainly focus on the scenario where the label sets of the multiple tasks (MTs)
are the same, so that the labels can be utilized for learning across the tasks.
Few works, however, explore the scenario where each task has only a small
number of training samples and the label sets are only partially overlapped,
or not overlapped at all. Learning such MTs is more challenging because less
correlation information is available among the tasks. To address this, we
propose a framework that learns these tasks by jointly leveraging both the
abundant information from a learnt auxiliary big task, whose sufficiently many
classes cover those of all the individual tasks, and the information shared
among the partially-overlapped tasks. In our implementation, each individual
task reuses the neural network architecture of the learnt auxiliary task; the
key idea is to use the available label information to adaptively prune the
hidden-layer neurons of the auxiliary network, constructing a corresponding
sub-network for each task, while learning jointly across the individual tasks.
Our experimental results demonstrate its effectiveness in comparison with
state-of-the-art approaches.
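The key mechanism in the abstract, deriving a per-task network from the auxiliary big network by pruning hidden neurons with label information, can be illustrated with a minimal sketch. The concrete pruning rule below (keep the hidden neurons whose outgoing weights to the task's own classes carry the most magnitude) is an illustrative assumption; the paper prunes adaptively and learns jointly across tasks.

```python
import torch
import torch.nn as nn

class AuxiliaryNet(nn.Module):
    """Big auxiliary network whose output classes cover all tasks."""
    def __init__(self, in_dim, hidden_dim, num_all_classes):
        super().__init__()
        self.hidden = nn.Linear(in_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, num_all_classes)

    def forward(self, x, neuron_mask=None):
        h = torch.relu(self.hidden(x))
        if neuron_mask is not None:      # zero out pruned hidden neurons
            h = h * neuron_mask
        return self.out(h)

def task_neuron_mask(aux, task_classes, keep_ratio=0.5):
    """Hypothetical pruning rule: keep the hidden neurons whose outgoing
    weights to this task's own classes carry the most magnitude."""
    w = aux.out.weight[task_classes]     # (|task classes|, hidden_dim)
    score = w.abs().sum(dim=0)           # relevance of each hidden neuron
    k = max(1, int(keep_ratio * score.numel()))
    mask = torch.zeros_like(score)
    mask[score.topk(k).indices] = 1.0
    return mask

# Usage: each small task reuses the auxiliary architecture via its own mask,
# and the masked sub-networks can then be fine-tuned jointly on the tasks.
aux = AuxiliaryNet(in_dim=32, hidden_dim=128, num_all_classes=100)
mask_a = task_neuron_mask(aux, task_classes=[0, 3, 7])  # labels of task A
logits_a = aux(torch.randn(4, 32), neuron_mask=mask_a)
```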
Related papers
- Joint-Task Regularization for Partially Labeled Multi-Task Learning [30.823282043129552]
Multi-task learning has become increasingly popular in the machine learning field, but its practicality is hindered by the need for large, labeled datasets.
We propose Joint-Task Regularization (JTR), an intuitive technique which leverages cross-task relations to simultaneously regularize all tasks in a single joint-task latent space.
arXiv Detail & Related papers (2024-04-02T14:16:59Z)
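The JTR entry above regularizes all tasks at once in a single joint-task latent space rather than pairwise. A minimal sketch of that idea, assuming a shared encoder over stacked per-task predictions and labels (the encoder, dimensions, and distance are illustrative, not the paper's exact design):

```python
import torch
import torch.nn as nn

# Hypothetical setup: each of 3 tasks produces (batch, 16) predictions and
# (possibly partial) labels; both are stacked across tasks and mapped into
# ONE joint latent space, where a single distance regularizes all tasks.
joint_encoder = nn.Sequential(nn.Linear(3 * 16, 64), nn.ReLU(), nn.Linear(64, 32))

def joint_task_regularizer(preds, labels):
    """preds/labels: lists with one (batch, 16) tensor per task. A task's
    missing labels can be replaced by its detached predictions, so every
    sample still contributes to the joint space."""
    z_pred = joint_encoder(torch.cat(preds, dim=1))
    z_lab = joint_encoder(torch.cat(labels, dim=1))
    return (z_pred - z_lab).pow(2).mean()  # pull predictions toward labels jointly

preds = [torch.randn(8, 16, requires_grad=True) for _ in range(3)]
labels = [torch.randn(8, 16) for _ in range(3)]
loss = joint_task_regularizer(preds, labels)
loss.backward()
```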
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks whose annotations are sparse or entirely non-overlapping.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
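For the distribution-matching entry above, the summary does not specify the matching objective; a simple stand-in is to align the first and second moments of the features that different tasks induce, as in this hedged sketch:

```python
import torch

def moment_matching_loss(feat_a, feat_b):
    """Hypothetical distribution-matching term: align the mean and the
    (diagonal) variance of features from two tasks' data. feat_*: (n, d)."""
    mean_gap = (feat_a.mean(0) - feat_b.mean(0)).pow(2).sum()
    var_gap = (feat_a.var(0) - feat_b.var(0)).pow(2).sum()
    return mean_gap + var_gap

# Knowledge exchange: a shared backbone is trained so that features from
# tasks with little or no label overlap still share one distribution.
f_task1 = torch.randn(32, 64, requires_grad=True)
f_task2 = torch.randn(32, 64)
loss = moment_matching_loss(f_task1, f_task2)
loss.backward()
```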
- PartAL: Efficient Partial Active Learning in Multi-Task Visual Settings [57.08386016411536]
We show that it is more effective to select not only the images to be annotated but also a subset of tasks for which to provide annotations at each Active Learning (AL) iteration.
We demonstrate the effectiveness of our approach on several popular multi-task datasets.
arXiv Detail & Related papers (2022-11-21T15:08:35Z)
- Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z)
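For the Task Compass entry above, the mechanics of a task prefix are easy to sketch: each example is prepended with tokens identifying its task, so one shared model can condition on the source task during multi-task pre-training (the token ids and task names below are assumptions for illustration):

```python
# Hedged sketch of task-prefix construction (ids are illustrative only).
TASK_PREFIX = {"nli": [50001], "qa": [50002], "sentiment": [50003]}

def add_task_prefix(task: str, input_ids: list[int]) -> list[int]:
    """Prepend the task's prefix tokens so a shared model can condition
    each example on its source task during multi-task pre-training."""
    return TASK_PREFIX[task] + input_ids

print(add_task_prefix("qa", [101, 2054, 2003]))  # [50002, 101, 2054, 2003]
```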
- Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners [67.5865966762559]
We study whether sparsely activated Mixture-of-Experts (MoE) architectures improve multi-task learning.
We devise task-aware gating functions to route examples from different tasks to specialized experts.
This results in a sparsely activated multi-task model with a large number of parameters, but with the same computational cost as that of a dense model.
arXiv Detail & Related papers (2022-04-16T00:56:12Z)
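For the Mixture-of-Experts entry above, a minimal sketch of a task-aware gating function, assuming the router sees a learned task embedding alongside the input features (the exact gate is not specified in the summary); top-1 routing keeps the activated compute close to that of a dense model:

```python
import torch
import torch.nn as nn

class TaskAwareMoE(nn.Module):
    """Sparsely activated MoE layer whose router also sees a task embedding."""
    def __init__(self, dim, num_experts, num_tasks):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.task_emb = nn.Embedding(num_tasks, dim)
        self.router = nn.Linear(2 * dim, num_experts)

    def forward(self, x, task_id):
        # Gate on both the input features and the task identity.
        t = self.task_emb(task_id).expand_as(x)
        gates = self.router(torch.cat([x, t], dim=-1)).softmax(-1)
        top_val, top_idx = gates.max(dim=-1)      # top-1: sparse activation
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts): # only the chosen expert runs
            sel = top_idx == e
            if sel.any():
                out[sel] = expert(x[sel]) * top_val[sel].unsqueeze(-1)
        return out

moe = TaskAwareMoE(dim=16, num_experts=4, num_tasks=3)
y = moe(torch.randn(8, 16), task_id=torch.tensor(1))
```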
- Modular Adaptive Policy Selection for Multi-Task Imitation Learning through Task Division [60.232542918414985]
Multi-task learning often suffers from negative transfer, sharing information that should be task-specific.
The proposed approach mitigates this by using proto-policies as modules that divide the tasks into simple sub-behaviours which can be shared.
We also demonstrate its ability to autonomously divide the tasks into both shared and task-specific sub-behaviours.
arXiv Detail & Related papers (2022-03-28T15:53:17Z)
- Efficiently Identifying Task Groupings for Multi-Task Learning [55.80489920205404]
Multi-task learning can leverage information learned by one task to benefit the training of other tasks.
We suggest an approach to select which tasks should train together in multi-task learning models.
Our method determines task groupings in a single training run by co-training all tasks together and quantifying the extent to which one task's gradient update would affect another task's loss.
arXiv Detail & Related papers (2021-09-10T02:01:43Z)
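The inter-task effect described in the task-grouping entry above can be sketched as a lookahead: take one gradient step on the shared parameters for task i and measure the relative change in task j's loss. The toy losses below are assumptions for illustration:

```python
import torch

def inter_task_affinity(shared, loss_i, loss_j_fn, lr=0.1):
    """Affinity of task i onto task j: relative drop in task j's loss
    after a lookahead gradient step of task i on the shared parameters."""
    g = torch.autograd.grad(loss_i, shared)[0]
    base = loss_j_fn(shared)
    lookahead = loss_j_fn(shared - lr * g)  # shared params after task i's step
    return (1 - lookahead / base).item()    # > 0: task i's update helps task j

shared = torch.tensor([1.0, -2.0], requires_grad=True)
loss_i = (shared ** 2).sum()                # toy loss for task i
loss_j = lambda w: ((w - 1) ** 2).sum()     # toy loss for task j
print(inter_task_affinity(shared, loss_i, loss_j))
```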
- Multi-Task Reinforcement Learning with Context-based Representations [43.93866702838777]
We propose an efficient approach to knowledge transfer through the use of multiple context-dependent, composable representations across a family of tasks.
We use the proposed approach to obtain state-of-the-art results in Meta-World, a challenging multi-task benchmark consisting of 50 distinct robotic manipulation tasks.
arXiv Detail & Related papers (2021-02-11T18:41:27Z)
- Context-Aware Multi-Task Learning for Traffic Scene Recognition in Autonomous Vehicles [10.475998113861895]
We propose an algorithm to jointly learn the task-specific and shared representations by adopting a multi-task learning network.
Experiments on the large-scale dataset HSD demonstrate the effectiveness and superiority of our network over state-of-the-art methods.
arXiv Detail & Related papers (2020-04-03T03:09:26Z)
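The traffic-scene entry above follows the standard hard-parameter-sharing pattern: one shared backbone feeding task-specific heads. A minimal sketch, with layer sizes and task names as illustrative assumptions:

```python
import torch
import torch.nn as nn

class TrafficSceneMTL(nn.Module):
    """Shared representation plus task-specific heads (sizes illustrative)."""
    def __init__(self, in_dim=512, shared_dim=128):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(in_dim, shared_dim), nn.ReLU())
        self.heads = nn.ModuleDict({
            "road_place": nn.Linear(shared_dim, 10),  # hypothetical task heads
            "weather": nn.Linear(shared_dim, 4),
        })

    def forward(self, x):
        h = self.shared(x)  # representation shared across all tasks
        return {name: head(h) for name, head in self.heads.items()}

model = TrafficSceneMTL()
outputs = model(torch.randn(2, 512))  # one logit tensor per task
```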
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.