Learning Multiple Dense Prediction Tasks from Partially Annotated Data
- URL: http://arxiv.org/abs/2111.14893v1
- Date: Mon, 29 Nov 2021 19:03:12 GMT
- Title: Learning Multiple Dense Prediction Tasks from Partially Annotated Data
- Authors: Wei-Hong Li, Xialei Liu, Hakan Bilen
- Abstract summary: We study the joint learning of multiple dense prediction tasks on partially annotated data, which we call multi-task partially-supervised learning.
We propose a multi-task training procedure that successfully leverages task relations to supervise learning when data is partially annotated.
We rigorously demonstrate that our proposed method effectively exploits images with unlabelled tasks and outperforms existing semi-supervised learning approaches and related methods on three standard benchmarks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite the recent advances in multi-task learning of dense prediction
problems, most methods rely on expensive labelled datasets. In this paper, we
present a label-efficient approach and study the joint learning of multiple
dense prediction tasks on partially annotated data, which we call multi-task
partially-supervised learning. We propose a multi-task training procedure that
successfully leverages task relations to supervise learning when data is
partially annotated. In particular, we learn to map each task pair to a joint
pairwise task-space, which enables sharing information between the tasks in a
computationally efficient way through another network conditioned on task
pairs, and avoids learning trivial cross-task relations by retaining high-level
information about the input image. We rigorously demonstrate that our proposed
method effectively exploits images with unlabelled tasks and outperforms
existing semi-supervised learning approaches and related methods on three
standard benchmarks.
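The abstract describes the mechanism only at a high level. Below is a minimal PyTorch sketch of how such pairwise cross-task supervision could look; the task set, the `MapNet` embedding-based task-pair conditioning, the per-task `adapters`, and the L1 alignment loss are illustrative assumptions rather than the authors' exact design.

```python
# A minimal sketch of multi-task partially-supervised training with a
# pairwise task-space mapping. Module names and loss forms are assumptions,
# not the exact architecture of the paper above.
import itertools
import torch
import torch.nn as nn
import torch.nn.functional as F

TASKS = ["segmentation", "depth", "normals"]        # hypothetical task set
ALL_PAIRS = list(itertools.permutations(TASKS, 2))  # (source, target) pairs

class MapNet(nn.Module):
    """Maps task outputs into a joint pairwise task-space, conditioned on
    an embedding of the (source, target) task pair."""
    def __init__(self, channels=64):
        super().__init__()
        self.pair_embed = nn.Embedding(len(ALL_PAIRS), channels)
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, feat, pair_id):
        # Add the task-pair embedding so one network serves every pair.
        cond = self.pair_embed(pair_id)[:, :, None, None]
        return self.conv(feat + cond)

def mtpsl_loss(encoder, heads, adapters, mapnet, image, labels, task_losses):
    """`labels` holds ground truth only for the annotated tasks of `image`.
    `adapters` are assumed per-task convs projecting labels/predictions to a
    shared channel width (labels assumed pre-encoded, e.g. one-hot)."""
    feat = encoder(image)                           # shared representation
    preds = {t: heads[t](feat) for t in TASKS}      # dense predictions

    # 1) Supervised loss on whichever tasks are annotated.
    loss = sum(task_losses[t](preds[t], labels[t]) for t in labels)

    # 2) Cross-task consistency: map a labelled task's ground truth and an
    #    unlabelled task's prediction into the pairwise task-space; aligning
    #    them lets the labelled task supervise the unlabelled one. (The paper
    #    also retains high-level image information in the mapping to avoid
    #    trivial relations; that conditioning is omitted here for brevity.)
    for pid, (s, t) in enumerate(ALL_PAIRS):
        if s not in labels or t in labels:
            continue
        pair = torch.full((image.size(0),), pid,
                          dtype=torch.long, device=image.device)
        z_src = mapnet(adapters[s](labels[s]), pair)  # from ground truth
        z_tgt = mapnet(adapters[t](preds[t]), pair)   # from prediction
        loss = loss + F.l1_loss(z_tgt, z_src.detach())
    return loss
```

Conditioning a single mapping network on the task pair keeps the parameter count independent of the number of task pairs, which is the computational-efficiency point the abstract makes.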
Related papers
- Leveraging knowledge distillation for partial multi-task learning from multiple remote sensing datasets [2.1178416840822023]
Partial multi-task learning, where each training example is annotated for only one of the target tasks, is a promising idea in remote sensing.
This paper proposes using knowledge distillation to replace the need for ground truth on the unannotated task and to enhance the performance of this approach (a short sketch of this idea follows the list below).
arXiv Detail & Related papers (2024-05-24T09:48:50Z) - Joint-Task Regularization for Partially Labeled Multi-Task Learning [30.823282043129552]
Multi-task learning has become increasingly popular in the machine learning field, but its practicality is hindered by the need for large, labeled datasets.
We propose Joint-Task Regularization (JTR), an intuitive technique which leverages cross-task relations to simultaneously regularize all tasks in a single joint-task latent space (also sketched after the list below).
arXiv Detail & Related papers (2024-04-02T14:16:59Z) - Data-CUBE: Data Curriculum for Instruction-based Sentence Representation
Learning [85.66907881270785]
We propose a data curriculum method, namely Data-CUBE, that arranges the order of all the multi-task data for training.
At the task level, we aim to find the optimal task order that minimizes the total cross-task interference risk.
At the instance level, we measure the difficulty of all instances per task, then divide them into easy-to-difficult mini-batches for training.
arXiv Detail & Related papers (2024-01-07T18:12:20Z) - Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks that have little or non-overlapping annotation.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z) - Data exploitation: multi-task learning of object detection and semantic
segmentation on partially annotated data [4.9914667450658925]
We study the joint learning of object detection and semantic segmentation, two of the most popular vision problems.
We propose employing knowledge distillation to leverage joint-task optimization.
arXiv Detail & Related papers (2023-11-07T14:49:54Z) - Pre-training Multi-task Contrastive Learning Models for Scientific
Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
arXiv Detail & Related papers (2023-05-23T16:47:22Z) - Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as a strong foundation backbone for a wide range of tasks but also act as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z) - Semi-supervised Multi-task Learning for Semantics and Depth [88.77716991603252]
Multi-Task Learning (MTL) aims to enhance the model generalization by sharing representations between related tasks for better performance.
We propose a semi-supervised multi-task learning method to leverage the available supervisory signals from different datasets.
We present a domain-aware discriminator structure with various alignment formulations to mitigate the domain discrepancy issue among datasets.
arXiv Detail & Related papers (2021-10-14T07:43:39Z) - Label-Efficient Multi-Task Segmentation using Contrastive Learning [0.966840768820136]
We propose a multi-task segmentation model with a contrastive-learning-based subtask and compare its performance with other multi-task models.
We experimentally show that our proposed method outperforms other multi-task methods including the state-of-the-art fully supervised model when the amount of annotated data is limited.
arXiv Detail & Related papers (2020-09-23T14:12:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.