Context-Aware Multi-Task Learning for Traffic Scene Recognition in
Autonomous Vehicles
- URL: http://arxiv.org/abs/2004.01351v1
- Date: Fri, 3 Apr 2020 03:09:26 GMT
- Title: Context-Aware Multi-Task Learning for Traffic Scene Recognition in
Autonomous Vehicles
- Authors: Younkwan Lee, Jihyo Jeon, Jongmin Yu, Moongu Jeon
- Abstract summary: We propose an algorithm to jointly learn the task-specific and shared representations by adopting a multi-task learning network.
Experiments on the large-scale dataset HSD demonstrate the effectiveness and superiority of our network over state-of-the-art methods.
- Score: 10.475998113861895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Traffic scene recognition, which requires various visual classification
tasks, is a critical ingredient in autonomous vehicles. However, most existing
approaches treat each relevant task independently of the others, never
considering the system as a whole. Because of this, they are limited to a
task-specific set of features for every task at inference time, and cannot
leverage the common task-invariant contextual knowledge that bears on the task
at hand. To address this problem, we propose an algorithm that jointly learns
task-specific and shared representations by adopting a multi-task learning
network. Specifically, we present a lower bound on the mutual information
between the shared feature embedding and the input, which encourages the shared
embedding to extract common contextual information across tasks while jointly
preserving the essential information of each task. The learned representations
capture richer contextual information without an additional task-specific
network. Extensive experiments on the large-scale HSD dataset demonstrate the
effectiveness and superiority of our network over state-of-the-art methods.
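The abstract stops short of the exact architecture and bound, so here is a minimal sketch, assuming a small shared convolutional encoder, one linear head per classification task, and an InfoNCE-style estimator standing in for the mutual-information lower bound between the input and the shared embedding. All module names, dimensions, and the 0.1 weight are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): shared encoder + per-task heads,
# with an InfoNCE-style lower bound on I(input; shared embedding).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTaskSceneNet(nn.Module):
    def __init__(self, feat_dim=128, task_classes=(4, 6, 3)):
        super().__init__()
        # Shared backbone: extracts task-invariant contextual features.
        self.shared = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # One lightweight classification head per scene-recognition task.
        self.heads = nn.ModuleList(nn.Linear(feat_dim, c) for c in task_classes)
        # Projects a coarse input summary for the InfoNCE critic below.
        self.input_proj = nn.Linear(3, feat_dim)

    def forward(self, x):
        z = self.shared(x)                        # shared embedding
        return z, [head(z) for head in self.heads]

def infonce_lower_bound(z, x_summary, temperature=0.1):
    """InfoNCE-style lower bound on I(X; Z): matching pairs sit on the
    diagonal of the score matrix; all other in-batch pairs are negatives."""
    z = F.normalize(z, dim=1)
    s = F.normalize(x_summary, dim=1)
    scores = z @ s.t() / temperature              # (B, B) similarities
    labels = torch.arange(z.size(0))
    return -F.cross_entropy(scores, labels)       # higher = tighter bound

model = MultiTaskSceneNet()
x = torch.randn(8, 3, 64, 64)
targets = [torch.randint(0, c, (8,)) for c in (4, 6, 3)]
z, logits = model(x)
x_summary = model.input_proj(x.mean(dim=(2, 3)))  # crude input summary
task_loss = sum(F.cross_entropy(l, t) for l, t in zip(logits, targets))
loss = task_loss - 0.1 * infonce_lower_bound(z, x_summary)  # joint objective
loss.backward()
```

The sign convention makes the role of the constraint visible: the per-task losses are minimized while the bound is maximized, pushing the shared embedding to retain contextual information about the input that no single task head would demand on its own.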
Related papers
- Joint-Task Regularization for Partially Labeled Multi-Task Learning [30.823282043129552]
Multi-task learning has become increasingly popular in the machine learning field, but its practicality is hindered by the need for large, labeled datasets.
We propose Joint-Task Regularization (JTR), an intuitive technique which leverages cross-task relations to simultaneously regularize all tasks in a single joint-task latent space.
arXiv Detail & Related papers (2024-04-02T14:16:59Z)
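One plausible reading of JTR's "single joint-task latent space" is an encoder that embeds all tasks' outputs together so the tasks regularize one another; the sketch below is speculative, and joint_encoder, the task sizes, and the squared distance are stand-ins rather than JTR's actual design.

```python
# Speculative sketch of a joint-task regularizer (not JTR's actual code):
# all tasks' predictions and targets are embedded into one joint latent
# space, and the penalty pulls the two embeddings together.
import torch
import torch.nn as nn
import torch.nn.functional as F

# Two illustrative tasks with 4 and 6 classes -> 10-dim concatenation.
joint_encoder = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 8))

def joint_task_regularizer(preds, targets):
    """Distance between jointly encoded predictions and jointly encoded
    labels, so the error of each task is penalized in a shared space."""
    z_pred = joint_encoder(torch.cat(preds, dim=1))
    z_true = joint_encoder(torch.cat(targets, dim=1))
    return (z_pred - z_true).pow(2).mean()

preds = [torch.randn(8, 4).softmax(dim=1), torch.randn(8, 6).softmax(dim=1)]
targets = [F.one_hot(torch.randint(0, 4, (8,)), 4).float(),
           F.one_hot(torch.randint(0, 6, (8,)), 6).float()]
print(joint_task_regularizer(preds, targets))
```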
- Distribution Matching for Multi-Task Learning of Classification Tasks: a Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks even when annotations are scarce or non-overlapping across tasks.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
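The entry above couples tasks through their predictive distributions. A hedged sketch, assuming the two task heads have already been projected to a common output dimension, is a symmetric KL between softened predictions; the temperature and the symmetric form are assumptions, not the paper's exact loss.

```python
# Illustrative distribution-matching loss between two task heads that
# share an output dimension; enables knowledge exchange without shared labels.
import torch
import torch.nn.functional as F

def distribution_matching_loss(logits_a, logits_b, temperature=2.0):
    """Symmetric KL between the softened predictive distributions of two
    tasks evaluated on the same inputs."""
    p = F.log_softmax(logits_a / temperature, dim=1)
    q = F.log_softmax(logits_b / temperature, dim=1)
    kl_pq = F.kl_div(p, q, log_target=True, reduction="batchmean")
    kl_qp = F.kl_div(q, p, log_target=True, reduction="batchmean")
    return 0.5 * (kl_pq + kl_qp)

logits_a, logits_b = torch.randn(8, 10), torch.randn(8, 10)
print(distribution_matching_loss(logits_a, logits_b))
```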
- Factorized Contrastive Learning: Going Beyond Multi-view Redundancy [116.25342513407173]
This paper proposes FactorCL, a new multimodal representation learning method to go beyond multi-view redundancy.
On large-scale real-world datasets, FactorCL captures both shared and unique information and achieves state-of-the-art results.
arXiv Detail & Related papers (2023-06-08T15:17:04Z)
- Leveraging sparse and shared feature activations for disentangled representation learning [112.22699167017471]
We propose to leverage knowledge extracted from a diversified set of supervised tasks to learn a common disentangled representation.
We validate our approach on six real world distribution shift benchmarks, and different data modalities.
arXiv Detail & Related papers (2023-04-17T01:33:24Z)
- Active Multi-Task Representation Learning [50.13453053304159]
We give the first formal study on resource task sampling by leveraging the techniques from active learning.
We propose an algorithm that iteratively estimates the relevance of each source task to the target task and samples from each source task based on the estimated relevance.
arXiv Detail & Related papers (2022-02-02T08:23:24Z)
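The active-learning entry above alternates relevance estimation with relevance-weighted sampling. The sketch below shows only the sampling step, with the relevance scores taken as given; the estimation logic (the paper's main contribution) is deliberately left abstract, and the scores here are made up.

```python
# Illustrative relevance-weighted allocation of a sampling budget across
# source tasks; the relevance estimates themselves are assumed given.
import numpy as np

def sample_source_batches(relevance, budget, rng=None):
    """Draw a per-round sample count per source task, proportional to the
    estimated relevance of each source task to the target task."""
    rng = rng or np.random.default_rng()
    probs = np.asarray(relevance, dtype=float)
    probs = probs / probs.sum()
    return rng.multinomial(budget, probs)    # samples to draw per task

relevance = [0.5, 1.2, 0.1]   # hypothetical transfer-gain estimates
print(sample_source_batches(relevance, budget=256))
```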
- Learning Multi-Tasks with Inconsistent Labels by using Auxiliary Big Task [24.618094251341958]
Multi-task learning improves model performance by transferring and exploiting common knowledge among tasks.
We propose a framework that learns these tasks jointly by leveraging abundant information from a learnt auxiliary big task whose classes are sufficiently many to cover those of all the tasks.
Our experimental results demonstrate its effectiveness in comparison with the state-of-the-art approaches.
arXiv Detail & Related papers (2022-01-07T02:46:47Z)
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
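For the VMTL entry above, the ingredient that is easy to demonstrate in isolation is the Gumbel-Softmax prior: it yields differentiable samples from a categorical distribution, so a learnable prior over which related tasks to borrow knowledge from can be trained end to end. The mixture-of-prototypes setup below is an assumed illustration, not VMTL's model.

```python
# Gumbel-Softmax samples are differentiable, so gradients reach the prior
# logits; the "task prototype" mixture here is a hypothetical use.
import torch
import torch.nn.functional as F

num_tasks, feat_dim = 4, 16
mixture_logits = torch.zeros(num_tasks, requires_grad=True)  # learnable prior
task_prototypes = torch.randn(num_tasks, feat_dim)           # per-task knowledge

weights = F.gumbel_softmax(mixture_logits, tau=0.5)  # soft, differentiable sample
shared_context = weights @ task_prototypes           # mixture of task knowledge
shared_context.sum().backward()                      # gradients flow to the prior
print(mixture_logits.grad)
```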
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
- Multi-Task Reinforcement Learning with Context-based Representations [43.93866702838777]
We propose an efficient approach to knowledge transfer through the use of multiple context-dependent, composable representations across a family of tasks.
We use the proposed approach to obtain state-of-the-art results in Meta-World, a challenging multi-task benchmark consisting of 50 distinct robotic manipulation tasks.
arXiv Detail & Related papers (2021-02-11T18:41:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.