Data exploitation: multi-task learning of object detection and semantic
segmentation on partially annotated data
- URL: http://arxiv.org/abs/2311.04040v1
- Date: Tue, 7 Nov 2023 14:49:54 GMT
- Title: Data exploitation: multi-task learning of object detection and semantic
segmentation on partially annotated data
- Authors: Hoàng-Ân Lê and Minh-Tan Pham
- Abstract summary: We study the joint learning of object detection and semantic segmentation, the two most popular vision problems.
We propose employing knowledge distillation to leverage joint-task optimization.
- Score: 4.9914667450658925
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-task partially annotated data, where each data point is annotated for
only a single task, can help mitigate data scarcity if a network can
leverage the inter-task relationship. In this paper, we study the joint
learning of object detection and semantic segmentation, two of the most popular
vision problems, from multi-task data with partial annotations. Extensive
experiments are performed to evaluate each task's performance and to explore their
complementarity when a multi-task network cannot optimize both tasks
simultaneously. We propose employing knowledge distillation to leverage
joint-task optimization. The experimental results show that multi-task learning
and knowledge distillation compare favorably against single-task learning and
even the fully supervised scenario. All code and data splits are available at
https://github.com/lhoangan/multas
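To make the setup concrete, the following is a minimal PyTorch-style sketch of how partial supervision combined with knowledge distillation could look for a detection-plus-segmentation student: each sample contributes a supervised loss only for the task it is annotated for, while a frozen single-task teacher provides a distillation target for the other task. All names here (partial_multitask_loss, task_mask, tensor shapes) are illustrative assumptions, not code from the authors' multas repository.

```python
# Hedged sketch (not the authors' implementation): one way to combine
# partial per-task supervision with knowledge distillation.
import torch
import torch.nn.functional as F

def partial_multitask_loss(student_out, teacher_out, targets, task_mask,
                           kd_weight=1.0, temperature=2.0):
    """
    student_out: dict with 'det' (B, A, C) class logits and 'seg' (B, K, H, W) logits
    teacher_out: dict with the same keys, from frozen single-task teachers
    targets:     dict with 'det' (B, A) class indices and 'seg' (B, H, W) indices
    task_mask:   dict of bool tensors (B,), True where that task IS annotated
    """
    losses = {}

    # --- Detection branch ---
    det_ann = task_mask["det"]
    if det_ann.any():
        # Supervised classification loss on samples that carry box annotations.
        logits = student_out["det"][det_ann]               # (b, A, C)
        labels = targets["det"][det_ann]                   # (b, A)
        losses["det_sup"] = F.cross_entropy(logits.flatten(0, 1), labels.flatten())
    if (~det_ann).any():
        # Distillation: match the detection teacher on samples without boxes.
        s = student_out["det"][~det_ann] / temperature
        t = teacher_out["det"][~det_ann] / temperature
        losses["det_kd"] = kd_weight * F.kl_div(
            F.log_softmax(s, dim=-1), F.softmax(t, dim=-1),
            reduction="batchmean") * temperature ** 2

    # --- Segmentation branch (symmetric to detection) ---
    seg_ann = task_mask["seg"]
    if seg_ann.any():
        losses["seg_sup"] = F.cross_entropy(student_out["seg"][seg_ann],
                                            targets["seg"][seg_ann])
    if (~seg_ann).any():
        s = student_out["seg"][~seg_ann] / temperature
        t = teacher_out["seg"][~seg_ann] / temperature
        losses["seg_kd"] = kd_weight * F.kl_div(
            F.log_softmax(s, dim=1), F.softmax(t, dim=1),
            reduction="batchmean") * temperature ** 2

    return sum(losses.values()), losses
```

In this sketch, every sample still updates both heads: annotated samples through ground truth, unannotated ones through the teacher's soft predictions, which is the general mechanism the abstract describes for leveraging joint-task optimization.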
Related papers
- Box for Mask and Mask for Box: weak losses for multi-task partially supervised learning [2.7719338074999538]
Using information from one task's annotations to train the other is beneficial for multi-task partially supervised learning.
Box-for-Mask and Mask-for-Box strategies are proposed to distil the necessary information from one task's annotations to train the other.
arXiv Detail & Related papers (2024-11-26T15:51:25Z)
- Leveraging knowledge distillation for partial multi-task learning from multiple remote sensing datasets [2.1178416840822023]
Partial multi-task learning, where each training example is annotated for only one of the target tasks, is a promising idea in remote sensing.
This paper proposes using knowledge distillation to replace the need for ground truth on the alternate task and to enhance the performance of such an approach.
arXiv Detail & Related papers (2024-05-24T09:48:50Z)
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful on classification tasks with little or even non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Self-Training and Multi-Task Learning for Limited Data: Evaluation Study on Object Detection [4.9914667450658925]
Experimental results show improved performance when using a weak teacher with unseen data to train a multi-task student.
Despite the limited setup, we believe the experimental results show the potential of multi-task knowledge distillation and self-training.
arXiv Detail & Related papers (2023-09-12T14:50:14Z)
- PartAL: Efficient Partial Active Learning in Multi-Task Visual Settings [57.08386016411536]
We show that it is more effective to select not only the images to be annotated but also a subset of tasks for which to provide annotations at each Active Learning (AL) iteration.
We demonstrate the effectiveness of our approach on several popular multi-task datasets.
arXiv Detail & Related papers (2022-11-21T15:08:35Z)
- Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z)
- Learning Multiple Dense Prediction Tasks from Partially Annotated Data [41.821234589075445]
We look at the joint learning of multiple dense prediction tasks on partially annotated data, which we call multi-task partially-supervised learning.
We propose a multi-task training procedure that successfully leverages task relations to supervise its multi-task learning when data is partially annotated.
We rigorously demonstrate that our proposed method effectively exploits the images with unlabelled tasks and outperforms existing semi-supervised learning approaches and related methods on three standard benchmarks.
arXiv Detail & Related papers (2021-11-29T19:03:12Z)
- Semi-supervised Multi-task Learning for Semantics and Depth [88.77716991603252]
Multi-Task Learning (MTL) aims to enhance the model generalization by sharing representations between related tasks for better performance.
We propose the Semi-supervised Multi-Task Learning (MTL) method to leverage the available supervisory signals from different datasets.
We present a domain-aware discriminator structure with various alignment formulations to mitigate the domain discrepancy issue among datasets.
arXiv Detail & Related papers (2021-10-14T07:43:39Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
- MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning [82.62433731378455]
We show that tasks with high affinity at a certain scale are not guaranteed to retain this behaviour at other scales.
We propose a novel architecture, namely MTI-Net, that builds upon this finding.
arXiv Detail & Related papers (2020-01-19T21:02:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.