Self-Training and Multi-Task Learning for Limited Data: Evaluation Study
on Object Detection
- URL: http://arxiv.org/abs/2309.06288v1
- Date: Tue, 12 Sep 2023 14:50:14 GMT
- Title: Self-Training and Multi-Task Learning for Limited Data: Evaluation Study
on Object Detection
- Authors: Hoàng-Ân Lê and Minh-Tan Pham
- Abstract summary: Experimental results show improved performance when a multi-task student is trained with a weak teacher on data unseen by that teacher.
Despite the limited setup, we believe the experimental results show the potential of multi-task knowledge distillation and self-training.
- Score: 4.9914667450658925
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Self-training allows a network to learn from the predictions of a more
complicated model and thus often requires a well-trained teacher model and a mixture
of teacher and student data, while multi-task learning jointly optimizes different
targets to learn their salient interrelationships and requires multi-task annotations
for each training example. Although both frameworks are particularly data demanding,
they offer potential for data exploitation if these assumptions can be relaxed. In
this paper, we compare self-training for object detection under a deficiency of
teacher training data, where students are trained on examples unseen by the teacher,
with multi-task learning on partially annotated data, i.e. a single task annotation
per training example. Both scenarios have their own limitations but can be helpful
when annotated data are limited. Experimental results show improved performance when
a weak teacher with unseen data is used to train a multi-task student. Despite the
limited setup, we believe the experimental results demonstrate the potential of
multi-task knowledge distillation and self-training, which could be beneficial for
future study. Source code is at https://lhoangan.github.io/multas.
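The setup described above admits a compact illustration: a multi-task student receives ground truth for the single annotated task of each image and distills the other task from a weak teacher that never saw that image. The following is a minimal, hypothetical sketch of that combination, not the authors' released code (see the multas repository linked above); the tiny network, the surrogate MSE losses, and the per-image annotated_task flag are placeholder assumptions chosen so the snippet runs end to end.

```python
# Hypothetical sketch only -- not the authors' code (see https://lhoangan.github.io/multas).
# Idea 1: a weak teacher pseudo-labels images it was never trained on.
# Idea 2: a multi-task student gets ground truth for the one annotated task per image
#         and a distillation signal from the teacher for the other task.
import torch
import torch.nn as nn

class TinyMultiTaskNet(nn.Module):
    """Placeholder backbone with a detection-like head and a segmentation-like head."""
    def __init__(self, num_classes: int = 21):
        super().__init__()
        self.backbone = nn.Conv2d(3, 16, 3, padding=1)
        self.det_head = nn.Conv2d(16, 4 + num_classes, 1)  # box offsets + class scores
        self.seg_head = nn.Conv2d(16, num_classes, 1)       # per-pixel class scores

    def forward(self, x):
        feat = torch.relu(self.backbone(x))
        return self.det_head(feat), self.seg_head(feat)

teacher = TinyMultiTaskNet().eval()   # "weak" teacher, e.g. trained on a small split
student = TinyMultiTaskNet()          # multi-task student to be trained
optimizer = torch.optim.SGD(student.parameters(), lr=1e-3)

def train_step(image, ground_truth, annotated_task):
    """One step on a partially annotated image (a single ground-truth task per example)."""
    with torch.no_grad():             # pseudo-labels from the teacher on unseen data
        t_det, t_seg = teacher(image)
    s_det, s_seg = student(image)
    if annotated_task == "det":       # ground truth for detection, distill segmentation
        loss = (nn.functional.mse_loss(s_det, ground_truth)
                + nn.functional.mse_loss(s_seg, t_seg))
    else:                             # ground truth for segmentation, distill detection
        loss = (nn.functional.mse_loss(s_seg, ground_truth)
                + nn.functional.mse_loss(s_det, t_det))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy usage: one detection-annotated image and one segmentation-annotated image.
image = torch.randn(1, 3, 64, 64)
train_step(image, torch.randn(1, 25, 64, 64), "det")   # 25 = 4 offsets + 21 classes
train_step(image, torch.randn(1, 21, 64, 64), "seg")
```

In practice the two heads would use their task-specific losses (box regression plus classification for detection, per-pixel cross-entropy for segmentation); the MSE terms above only stand in for the ground-truth and distillation objectives.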
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework, where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks with little or non-overlapping annotations.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Exploring intra-task relations to improve meta-learning algorithms [1.223779595809275]
We aim to exploit external knowledge of task relations to improve training stability via effective mini-batching of tasks.
We hypothesize that selecting a diverse set of tasks in a mini-batch will lead to a better estimate of the full gradient and hence will lead to a reduction of noise in training.
arXiv Detail & Related papers (2023-12-27T15:33:52Z)
- Data exploitation: multi-task learning of object detection and semantic segmentation on partially annotated data [4.9914667450658925]
We study the joint learning of object detection and semantic segmentation, the two most popular vision problems.
We propose employing knowledge distillation to leverage joint-task optimization.
arXiv Detail & Related papers (2023-11-07T14:49:54Z)
- Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
arXiv Detail & Related papers (2023-05-23T16:47:22Z)
- Unified Demonstration Retriever for In-Context Learning [56.06473069923567]
Unified Demonstration Retriever (UDR) is a single model to retrieve demonstrations for a wide range of tasks.
We propose a multi-task list-wise ranking training framework, with an iterative mining strategy to find high-quality candidates.
Experiments on 30+ tasks across 13 task families and multiple data domains show that UDR significantly outperforms baselines.
arXiv Detail & Related papers (2023-05-07T16:07:11Z)
- Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z)
- Multi-Task Self-Training for Learning General Representations [97.01728635294879]
Multi-task self-training (MuST) harnesses the knowledge in independent specialized teacher models to train a single general student model (a minimal pseudo-labelling sketch follows after this list).
MuST is scalable with unlabeled or partially labeled datasets and outperforms both specialized supervised models and self-supervised models when training on large scale datasets.
arXiv Detail & Related papers (2021-08-25T17:20:50Z)
- Label-Efficient Multi-Task Segmentation using Contrastive Learning [0.966840768820136]
We propose a multi-task segmentation model with a contrastive learning based subtask and compare its performance with other multi-task models.
We experimentally show that our proposed method outperforms other multi-task methods including the state-of-the-art fully supervised model when the amount of annotated data is limited.
arXiv Detail & Related papers (2020-09-23T14:12:17Z)
- Temporally Correlated Task Scheduling for Sequence Learning [143.70523777803723]
In many applications, a sequence learning task is usually associated with multiple temporally correlated auxiliary tasks.
We introduce a learnable scheduler to sequence learning, which can adaptively select auxiliary tasks for training.
Our method significantly improves the performance of simultaneous machine translation and stock trend forecasting.
arXiv Detail & Related papers (2020-07-10T10:28:54Z)
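The MuST entry above describes a single general student distilled from independent specialized teachers via pseudo-labels on unlabeled data. Below is a minimal, hypothetical sketch of that pattern with two toy tasks; it is not the MuST implementation, and the placeholder teachers, student architecture, and surrogate MSE losses are assumptions made only to keep the snippet self-contained and runnable.

```python
# Hypothetical sketch of MuST-style multi-task self-training (not the paper's code).
# Independent specialized teachers pseudo-label unlabeled images; a single general
# student with a shared trunk is trained on all pseudo-labels jointly.
import torch
import torch.nn as nn

# Specialized "teachers": one per task, assumed already trained on their own datasets.
teachers = {
    "cls": nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval(),
    "seg": nn.Conv2d(3, 10, 1).eval(),
}

class GeneralStudent(nn.Module):
    """Shared trunk with one head per task."""
    def __init__(self):
        super().__init__()
        self.trunk = nn.Conv2d(3, 8, 3, padding=1)
        self.cls_head = nn.Sequential(nn.Flatten(), nn.Linear(8 * 32 * 32, 10))
        self.seg_head = nn.Conv2d(8, 10, 1)

    def forward(self, x):
        f = torch.relu(self.trunk(x))
        return {"cls": self.cls_head(f), "seg": self.seg_head(f)}

student = GeneralStudent()
optimizer = torch.optim.SGD(student.parameters(), lr=1e-3)

def self_training_step(unlabeled_batch):
    """Train the student on pseudo-labels produced by every teacher."""
    with torch.no_grad():
        pseudo = {task: t(unlabeled_batch) for task, t in teachers.items()}
    preds = student(unlabeled_batch)
    loss = sum(nn.functional.mse_loss(preds[task], pseudo[task]) for task in teachers)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

self_training_step(torch.randn(4, 3, 32, 32))  # toy unlabeled batch
```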
This list is automatically generated from the titles and abstracts of the papers on this site.