Multi-Task Consistency for Active Learning
- URL: http://arxiv.org/abs/2306.12398v1
- Date: Wed, 21 Jun 2023 17:34:31 GMT
- Title: Multi-Task Consistency for Active Learning
- Authors: Aral Hekimoglu, Philipp Friedrich, Walter Zimmer, Michael Schmidt,
Alvaro Marcos-Ramiro, Alois C. Knoll
- Abstract summary: Inconsistency-based active learning has proven to be effective in selecting informative samples for annotation.
We propose a novel multi-task active learning strategy for two coupled vision tasks: object detection and semantic segmentation.
Our approach achieves 95% of the fully-trained performance using only 67% of the available data.
- Score: 18.794331424921946
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Learning-based solutions for vision tasks require a large amount of labeled
training data to ensure their performance and reliability. In single-task
vision-based settings, inconsistency-based active learning has proven to be
effective in selecting informative samples for annotation. However, there is a
lack of research exploiting the inconsistency between multiple tasks in
multi-task networks. To address this gap, we propose a novel multi-task active
learning strategy for two coupled vision tasks: object detection and semantic
segmentation. Our approach leverages the inconsistency between them to identify
informative samples across both tasks. We propose three constraints that
specify how the tasks are coupled and introduce a method for determining the
pixels belonging to the object detected by a bounding box, to later quantify
the constraints as inconsistency scores. To evaluate the effectiveness of our
approach, we establish multiple baselines for multi-task active learning and
introduce a new metric, mean Detection Segmentation Quality (mDSQ), tailored
for multi-task active learning comparison, which reflects the performance of
both tasks. We conduct extensive experiments on the nuImages and A9 datasets,
demonstrating that our approach outperforms existing state-of-the-art methods
by up to 3.4% mDSQ on nuImages. Our approach achieves 95% of the fully-trained
performance using only 67% of the available data, corresponding to 20% fewer
labels compared to random selection and 5% fewer labels compared to
the state-of-the-art selection strategy. Our code will be made publicly available
after the review process.
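To make the scoring idea concrete, here is a minimal sketch of one plausible coupling constraint: pixels inside a detected bounding box should be segmented with a class consistent with the box's class. This is an illustration only, not the authors' exact formulation; the function names, the per-box disagreement ratio, and the max-aggregation are all assumptions.

```python
import numpy as np

def box_mask_inconsistency(seg_pred, box, box_class):
    """Fraction of pixels inside a detected box whose predicted
    segmentation class disagrees with the detector's class.

    seg_pred : (H, W) integer array of per-pixel class predictions
    box      : (x1, y1, x2, y2) integer pixel coordinates
    box_class: class id predicted by the detector for this box
    """
    x1, y1, x2, y2 = box
    region = seg_pred[y1:y2, x1:x2]
    if region.size == 0:
        return 0.0
    return float(np.mean(region != box_class))

def image_score(seg_pred, detections):
    """Aggregate inconsistency over all detections in one image; higher
    scores flag images where the two tasks disagree and are therefore
    more informative to annotate."""
    scores = [box_mask_inconsistency(seg_pred, box, cls) for box, cls in detections]
    return max(scores) if scores else 0.0
```

In an active-learning loop, unlabeled images would be ranked by such a score and the highest-scoring ones sent for annotation.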
Related papers
- Leveraging knowledge distillation for partial multi-task learning from multiple remote sensing datasets [2.1178416840822023]
Partial multi-task learning, where training examples are annotated for only one of the target tasks, is a promising idea in remote sensing.
This paper proposes using knowledge distillation to replace the need for ground truth on the alternate task and to enhance the performance of such an approach (a minimal sketch follows below).
arXiv Detail & Related papers (2024-05-24T09:48:50Z)
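A minimal sketch of the distillation idea in the paper above, assuming the common soft-target formulation (temperature T, KL divergence); the function name and the unit loss weighting are hypothetical, not the paper's actual recipe.

```python
import torch.nn.functional as F

def partial_mtl_loss(student_logits_a, labels_a, student_logits_b, teacher_logits_b, T=2.0):
    """Supervised loss on the annotated task (a) plus a distillation loss
    on the unannotated task (b), where a frozen teacher's soft predictions
    stand in for the missing ground truth."""
    sup = F.cross_entropy(student_logits_a, labels_a)
    distill = F.kl_div(
        F.log_softmax(student_logits_b / T, dim=1),
        F.softmax(teacher_logits_b / T, dim=1),
        reduction="batchmean",
    ) * (T * T)  # standard temperature scaling of the KD term
    return sup + distill
```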
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks whose annotations overlap little or not at all.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
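The summary above does not spell out the matching mechanism; as a loose stand-in, the sketch below aligns the first two moments of the feature distributions of two task branches. The choice of moment matching and all names are assumptions, not the paper's method.

```python
import torch

def moment_matching_loss(feats_a, feats_b):
    """Penalize the gap between the first two moments of two tasks'
    feature batches; a crude stand-in for distribution matching.

    feats_a, feats_b: (N, D) feature batches from the two task branches.
    """
    mean_gap = (feats_a.mean(dim=0) - feats_b.mean(dim=0)).pow(2).sum()
    var_gap = (feats_a.var(dim=0) - feats_b.var(dim=0)).pow(2).sum()
    return mean_gap + var_gap
```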
- Multitask Learning with No Regret: from Improved Confidence Bounds to Active Learning [79.07658065326592]
Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning.
We provide novel multitask confidence intervals in the challenging setting when neither the similarity between tasks nor the tasks' features are available to the learner.
We propose a novel online learning algorithm that achieves such improved regret without knowing the task-similarity parameter in advance.
arXiv Detail & Related papers (2023-08-03T13:08:09Z)
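The paper above constructs its confidence intervals analytically; as a rough, generic illustration of how interval width can drive active learning, the sketch below estimates per-sample intervals from ensemble disagreement and queries the widest one. This is a stand-in, not the paper's construction.

```python
import numpy as np

def ensemble_intervals(preds):
    """Estimate per-sample confidence intervals from ensemble spread.

    preds: (M, N) array of predictions from M models on N candidates.
    Returns (means, half_widths); wide intervals signal uncertain samples.
    """
    return preds.mean(axis=0), 1.96 * preds.std(axis=0)

def select_next(preds):
    """Active-learning step: query the sample with the widest interval."""
    _, half_widths = ensemble_intervals(preds)
    return int(np.argmax(half_widths))
```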
- Visual Exemplar Driven Task-Prompting for Unified Perception in Autonomous Driving [100.3848723827869]
We present an effective multi-task framework, VE-Prompt, which introduces visual exemplars via task-specific prompting.
Specifically, we generate visual exemplars based on bounding boxes and color-based markers, which provide accurate visual appearances of target categories.
We bridge transformer-based encoders and convolutional layers for efficient and accurate unified perception in autonomous driving.
arXiv Detail & Related papers (2023-03-03T08:54:06Z)
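A minimal sketch of the exemplar-generation step described above: cropping box regions per category to serve as visual prompts. The data layout and function name are assumptions, and the color-based markers are omitted.

```python
def crop_exemplars(image, boxes, labels):
    """Cut out box regions to serve as per-category visual exemplars.

    image : (H, W, 3) array-like supporting 2-D slicing
    boxes : list of (x1, y1, x2, y2) integer pixel coordinates
    labels: list of class ids aligned with boxes
    Returns {class_id: [patches]} from which prompts can be built.
    """
    exemplars = {}
    for (x1, y1, x2, y2), cls in zip(boxes, labels):
        exemplars.setdefault(cls, []).append(image[y1:y2, x1:x2])
    return exemplars
```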
- On Steering Multi-Annotations per Sample for Multi-Task Learning [79.98259057711044]
The study of multi-task learning has drawn great attention from the community.
Despite the remarkable progress, the challenge of optimally learning different tasks simultaneously remains to be explored.
Previous works attempt to modify the gradients from different tasks. Yet these methods rest on subjective assumptions about the relationships between tasks, and the modified gradients may be less accurate.
In this paper, we introduce Task Allocation (STA), a mechanism in which each sample is randomly allocated a subset of tasks (see the sketch below).
For further progress, we propose Interleaved Task Allocation (ISTA) to iteratively allocate all tasks to each sample over the course of training.
arXiv Detail & Related papers (2022-03-06T11:57:18Z)
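A minimal sketch of the random task allocation summarized above: each sample draws a random subset of tasks per training step. The names and the fixed subset size k are assumptions.

```python
import random

def stochastic_task_allocation(batch_size, tasks, k=1):
    """Give each sample in the batch a random subset of k tasks; only
    the allocated tasks contribute to that sample's loss this step."""
    return [random.sample(tasks, k) for _ in range(batch_size)]

# Example: allocation = stochastic_task_allocation(4, ["seg", "depth", "normals"])
```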
- Activation to Saliency: Forming High-Quality Labels for Unsupervised Salient Object Detection [54.92703325989853]
We propose a two-stage Activation-to-Saliency (A2S) framework that effectively generates high-quality saliency cues.
No human annotations are involved in our framework during the whole training process.
Our framework achieves significant performance gains compared with existing USOD methods.
arXiv Detail & Related papers (2021-12-07T11:54:06Z)
- Learning Multiple Dense Prediction Tasks from Partially Annotated Data [41.821234589075445]
We look at the joint learning of multiple dense prediction tasks on partially annotated data, which we call multi-task partially-supervised learning.
We propose a multi-task training procedure that successfully leverages task relations to supervise its multi-task learning when data is partially annotated.
We rigorously demonstrate that our proposed method effectively exploits the images with unlabelled tasks and outperforms existing semi-supervised learning approaches and related methods on three standard benchmarks.
arXiv Detail & Related papers (2021-11-29T19:03:12Z)
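As context for the partially-supervised setting above, the sketch below shows only the baseline it builds on: computing each image's loss over its annotated tasks alone. The paper's actual contribution (exploiting task relations to supervise the unlabelled tasks) is not shown, and all names are hypothetical.

```python
import torch.nn.functional as F

def partial_supervision_loss(outputs, targets, annotated_tasks):
    """Sum per-pixel losses only over the tasks annotated for this image.

    outputs        : dict task -> (C_t, H, W) float logits
    targets        : dict task -> (H, W) long label map (annotated tasks only)
    annotated_tasks: iterable of task names labeled for this image
    """
    loss = 0.0
    for task in annotated_tasks:
        loss = loss + F.cross_entropy(
            outputs[task].unsqueeze(0),  # (1, C_t, H, W)
            targets[task].unsqueeze(0),  # (1, H, W)
        )
    return loss
```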
- Distribution Alignment: A Unified Framework for Long-tail Visual Recognition [52.36728157779307]
We propose a unified distribution alignment strategy for long-tail visual recognition.
We then introduce a generalized re-weighting method in the two-stage learning scheme to balance the class prior.
Our approach achieves the state-of-the-art results across all four recognition tasks with a simple and unified framework.
arXiv Detail & Related papers (2021-03-30T14:09:53Z)
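The summary above mentions a generalized re-weighting to balance the class prior; a simple inverse-frequency instance, with a hypothetical exponent gamma that interpolates toward uniform weights, might look like this (an assumption, not the paper's exact scheme).

```python
import numpy as np

def class_reweights(class_counts, gamma=1.0):
    """Per-class loss weights inversely proportional to (powered) frequency.

    class_counts: (C,) training-set counts per class
    gamma       : 1.0 gives inverse-frequency weights, 0.0 gives uniform
    """
    freqs = class_counts / class_counts.sum()
    weights = freqs ** (-gamma)
    return weights / weights.mean()  # normalize so the average weight is 1
```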
- Multi-task Learning by Leveraging the Semantic Information [14.397128692867799]
We propose to leverage the label information in multi-task learning by exploring the semantic conditional relations among tasks.
Our analysis also leads to a concrete algorithm that jointly matches the semantic distribution and controls label distribution divergence.
arXiv Detail & Related papers (2021-03-03T17:36:35Z)
- Label-Efficient Multi-Task Segmentation using Contrastive Learning [0.966840768820136]
We propose a multi-task segmentation model with a contrastive learning based subtask and compare its performance with other multi-task models.
We experimentally show that our proposed method outperforms other multi-task methods including the state-of-the-art fully supervised model when the amount of annotated data is limited.
arXiv Detail & Related papers (2020-09-23T14:12:17Z)
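A plausible form for the contrastive subtask above is a SimCLR-style NT-Xent loss between two augmented views; the sketch below is an assumption about the subtask's shape, not the paper's code.

```python
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, tau=0.5):
    """SimCLR-style NT-Xent loss between two augmented views.

    z1, z2: (N, D) projected embeddings of the same N images.
    Positive pairs are (z1[i], z2[i]); all other samples are negatives.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                   # (2N, D)
    sim = z @ z.t() / tau                            # similarities / temperature
    n = z1.size(0)
    self_mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(self_mask, float("-inf"))  # exclude self-pairs
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```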