Distribution Matching for Heterogeneous Multi-Task Learning: a
Large-scale Face Study
- URL: http://arxiv.org/abs/2105.03790v1
- Date: Sat, 8 May 2021 22:26:52 GMT
- Title: Distribution Matching for Heterogeneous Multi-Task Learning: a
Large-scale Face Study
- Authors: Dimitrios Kollias and Viktoriia Sharmanska and Stefanos Zafeiriou
- Abstract summary: Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
- Score: 75.42182503265056
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Multi-Task Learning has emerged as a methodology in which multiple tasks are
jointly learned by a shared learning algorithm, such as a DNN. MTL is based on
the assumption that the tasks under consideration are related; therefore it
exploits shared knowledge for improving performance on each individual task.
Tasks are generally considered to be homogeneous, i.e., to refer to the same
type of problem. Moreover, MTL is usually based on ground truth annotations
with full, or partial overlap across tasks. In this work, we deal with
heterogeneous MTL, simultaneously addressing detection, classification &
regression problems. We explore task-relatedness as a means for co-training, in
a weakly-supervised way, tasks that contain little, or even non-overlapping
annotations. Task-relatedness is introduced in MTL, either explicitly through
prior expert knowledge, or through data-driven studies. We propose a novel
distribution matching approach, in which knowledge exchange is enabled between
tasks, via matching of their predictions' distributions. Based on this
approach, we build FaceBehaviorNet, the first framework for large-scale face
analysis, by jointly learning all facial behavior tasks. We develop case
studies for: i) continuous affect estimation, action unit detection, basic
emotion recognition; ii) attribute detection, face identification.
We illustrate that co-training via task relatedness alleviates negative
transfer. Since FaceBehaviorNet learns features that encapsulate all aspects of
facial behavior, we conduct zero-/few-shot learning to perform tasks beyond the
ones that it has been trained for, such as compound emotion recognition. By
conducting a very large experimental study, utilizing 10 databases, we
illustrate that our approach outperforms, by large margins, the
state-of-the-art in all tasks and in all databases, even in those that have
not been used in its training.
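To make the abstract's two core ideas concrete, below is a minimal Python (PyTorch) sketch, not the authors' implementation: a shared backbone with heterogeneous heads (action-unit detection, basic-emotion classification, valence-arousal regression), coupled by a distribution-matching term that aligns the AU predictions with the AU distribution implied by the emotion predictions through a relatedness prior. The relatedness matrix, layer sizes, and head choices are illustrative assumptions.

```python
# Illustrative sketch only -- not the FaceBehaviorNet implementation.
import torch
import torch.nn as nn

class HeterogeneousFaceNet(nn.Module):
    """Shared backbone with detection, classification and regression heads."""
    def __init__(self, in_dim=2048, feat_dim=512, num_emotions=7, num_aus=17):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.emotion_head = nn.Linear(feat_dim, num_emotions)  # classification
        self.au_head = nn.Linear(feat_dim, num_aus)            # detection
        self.va_head = nn.Linear(feat_dim, 2)                  # valence/arousal regression

    def forward(self, x):
        f = self.backbone(x)
        return self.emotion_head(f), self.au_head(f), self.va_head(f)

def distribution_matching_loss(emotion_logits, au_logits, relatedness, eps=1e-7):
    """KL divergence between the AU predictions and the AU distribution
    implied by the emotion predictions through a relatedness prior.

    relatedness: (num_emotions, num_aus); entry [e, a] is an assumed prior
    P(AU a active | emotion e), e.g. from expert knowledge or data studies.
    """
    p_emotion = emotion_logits.softmax(dim=-1)    # (B, E)
    implied_au = p_emotion @ relatedness          # (B, A), stays in [0, 1]
    p_au = au_logits.sigmoid()                    # (B, A)
    # Per-AU Bernoulli KL(implied || predicted); eps guards log(0).
    kl = implied_au * torch.log((implied_au + eps) / (p_au + eps)) \
       + (1 - implied_au) * torch.log((1 - implied_au + eps) / (1 - p_au + eps))
    return kl.mean()
```

In a sketch like this, the matching term is added to whatever supervised losses the annotated tasks provide, so an image labelled for only one task still yields a gradient for the other heads; that is one way annotations with little or no overlap can still co-train the tasks.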
Related papers
- Distribution Matching for Multi-Task Learning of Classification Tasks: a
Large-Scale Study on Faces & Beyond [62.406687088097605]
Multi-Task Learning (MTL) is a framework where multiple related tasks are learned jointly and benefit from a shared representation space.
We show that MTL can be successful with classification tasks whose annotations overlap only partially, or not at all.
We propose a novel approach, where knowledge exchange is enabled between the tasks via distribution matching.
arXiv Detail & Related papers (2024-01-02T14:18:11Z)
- Pre-training Multi-task Contrastive Learning Models for Scientific
Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
arXiv Detail & Related papers (2023-05-23T16:47:22Z)
- Learning Multiple Dense Prediction Tasks from Partially Annotated Data [41.821234589075445]
We look at jointly learning of multiple dense prediction tasks on partially annotated data, which we call multi-task partially-supervised learning.
We propose a multi-task training procedure that successfully leverages task relations to supervise its multi-task learning when data is partially annotated.
We rigorously demonstrate that our proposed method effectively exploits the images with unlabelled tasks and outperforms existing semi-supervised learning approaches and related methods on three standard benchmarks.
arXiv Detail & Related papers (2021-11-29T19:03:12Z)
- Variational Multi-Task Learning with Gumbel-Softmax Priors [105.22406384964144]
Multi-task learning aims to explore task relatedness to improve individual tasks.
We propose variational multi-task learning (VMTL), a general probabilistic inference framework for learning multiple related tasks.
arXiv Detail & Related papers (2021-11-09T18:49:45Z)
- Meta-Learning with Fewer Tasks through Task Interpolation [67.03769747726666]
Current meta-learning algorithms require a large number of meta-training tasks, which may not be accessible in real-world scenarios.
With meta-learning via task interpolation (MLTI), our approach generates additional tasks by randomly sampling a pair of tasks and interpolating the corresponding features and labels (see the sketch after this list).
Empirically, in our experiments on eight datasets from diverse domains, we find that the proposed general MLTI framework is compatible with representative meta-learning algorithms and consistently outperforms other state-of-the-art strategies.
arXiv Detail & Related papers (2021-06-04T20:15:34Z)
- Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results with respect to performance, computation and/or memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z)
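For the MLTI entry above, a toy Python sketch of cross-task interpolation: new training tasks are synthesised by mixing the features and labels of a randomly sampled pair of existing tasks. The Beta mixing distribution follows mixup convention and is an assumption here, not a detail stated in the summary.

```python
# Toy sketch of cross-task interpolation in the spirit of MLTI.
import numpy as np

def interpolate_tasks(task_a, task_b, alpha=0.5, rng=None):
    """Mix two tasks, each given as (features, labels) arrays of equal shape.
    Assumes soft or one-hot labels so that label mixing is meaningful."""
    rng = rng or np.random.default_rng()
    lam = rng.beta(alpha, alpha)  # mixing weight in (0, 1), mixup-style
    (xa, ya), (xb, yb) = task_a, task_b
    return lam * xa + (1.0 - lam) * xb, lam * ya + (1.0 - lam) * yb

# Usage: sample a pair of tasks from the meta-training pool, interpolate,
# and add the synthetic task to the pool before the next meta-update.
```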