Tasks Structure Regularization in Multi-Task Learning for Improving
Facial Attribute Prediction
- URL: http://arxiv.org/abs/2108.04353v1
- Date: Thu, 29 Jul 2021 08:38:17 GMT
- Title: Tasks Structure Regularization in Multi-Task Learning for Improving
Facial Attribute Prediction
- Authors: Fariborz Taherkhani, Ali Dabouei, Sobhan Soleymani, Jeremy Dawson, and
Nasser M. Nasrabadi
- Abstract summary: We use a new Multi-Task Learning (MTL) paradigm in which a facial attribute predictor uses the knowledge of other related attributes to obtain better generalization performance.
Our MTL methods are compared with competing methods for facial attribute prediction to demonstrate their effectiveness.
- Score: 27.508755548317712
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The great success of Convolutional Neural Networks (CNNs) for facial attribute
prediction relies on a large number of labeled images. Facial image datasets
are usually annotated with a few commonly used attributes (e.g., gender), while
labels for other attributes (e.g., big nose) are scarce, which makes their
prediction challenging. To address this problem, we use a new Multi-Task
Learning (MTL) paradigm in which a facial attribute predictor uses the
knowledge of other related attributes to obtain better generalization
performance. Here, we leverage the MTL paradigm in two problem settings. First, it
is assumed that the structure of the tasks (e.g., the grouping pattern of facial
attributes) is known as prior knowledge, and the parameters of the tasks (i.e.,
predictors) within the same group are represented by a linear combination of a
limited number of underlying basis tasks. A sparsity constraint on the
coefficients of this linear combination is also imposed so that each task
is represented in a more structured and simpler manner. Second, it is assumed
that the structure of the tasks is unknown; the structure and the parameters
of the tasks are then learned jointly using a Laplacian regularization framework.
Our MTL methods are compared with competing methods for facial attribute
prediction to demonstrate their effectiveness.
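To make the two settings concrete, here is a minimal NumPy sketch of the two regularizers described above, assuming linear per-task predictors stacked as the columns of a matrix W. All names, dimensions, and hyperparameter values (L_basis, S, mu, A) are illustrative assumptions, not the paper's actual formulation or code.

```python
# Minimal sketch (not the authors' code) of the two task-structure
# regularizers: a sparse basis-task combination when the structure is
# known, and a graph-Laplacian penalty when it must be learned.
import numpy as np

rng = np.random.default_rng(0)
d, k, b = 128, 40, 8                # feature dim, number of attribute tasks, basis tasks

# Setting 1 (known structure): task parameters are a sparse linear
# combination of a few shared basis tasks, W ~ L_basis @ S.
L_basis = rng.normal(size=(d, b))   # underlying basis tasks
S = rng.normal(size=(b, k))         # per-task combination coefficients
W = L_basis @ S                     # column W[:, t] parameterizes task t

def sparse_basis_penalty(W, L_basis, S, mu=1e-2):
    """Reconstruction error plus an l1 sparsity term on the coefficients."""
    return np.sum((W - L_basis @ S) ** 2) + mu * np.abs(S).sum()

# Setting 2 (unknown structure): a task-relation graph is learned jointly
# with the predictors; its Laplacian couples the parameters of related tasks.
A = rng.uniform(size=(k, k)); A = (A + A.T) / 2   # symmetric task affinities
np.fill_diagonal(A, 0.0)
laplacian = np.diag(A.sum(axis=1)) - A            # L = D - A

def laplacian_penalty(W, laplacian):
    """tr(W L W^T) = 0.5 * sum_ij A_ij ||w_i - w_j||^2: pulls related tasks together."""
    return np.trace(W @ laplacian @ W.T)

# Sanity check of the identity quoted in the docstring above.
pairwise = 0.5 * sum(A[i, j] * np.sum((W[:, i] - W[:, j]) ** 2)
                     for i in range(k) for j in range(k))
assert np.isclose(pairwise, laplacian_penalty(W, laplacian))
```

In the second setting the affinities A would themselves be optimization variables, typically updated in alternation with W, rather than fixed as in this toy example.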
Related papers
- Giving each task what it needs -- leveraging structured sparsity for tailored multi-task learning [4.462334751640166]
In the Multi-task Learning (MTL) framework, every task demands distinct feature representations, ranging from low-level to high-level attributes.
This work introduces Layer-Optimized Multi-Task (LOMT) models that utilize structured sparsity to refine feature selection for individual tasks and enhance the performance of all tasks in a multi-task scenario.
arXiv Detail & Related papers (2024-06-05T08:23:38Z)
- Hierarchical Visual Primitive Experts for Compositional Zero-Shot Learning [52.506434446439776]
Compositional zero-shot learning (CZSL) aims to recognize compositions with prior knowledge of known primitives (attribute and object).
We propose a simple and scalable framework called Composition Transformer (CoT) to address these issues.
Our method achieves SoTA performance on several benchmarks, including MIT-States, C-GQA, and VAW-CZSL.
arXiv Detail & Related papers (2023-08-08T03:24:21Z)
- Distribution Matching for Heterogeneous Multi-Task Learning: a Large-scale Face Study [75.42182503265056]
Multi-Task Learning has emerged as a methodology in which multiple tasks are jointly learned by a shared learning algorithm.
We deal with heterogeneous MTL, simultaneously addressing detection, classification & regression problems.
We build FaceBehaviorNet, the first framework for large-scale face analysis, by jointly learning all facial behavior tasks.
arXiv Detail & Related papers (2021-05-08T22:26:52Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve the learning performance on the other tasks.
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Multi-Task Learning for Dense Prediction Tasks: A Survey [87.66280582034838]
Multi-task learning (MTL) techniques have shown promising results w.r.t. performance, computations and/or memory footprint.
We provide a well-rounded view on state-of-the-art deep learning approaches for MTL in computer vision.
arXiv Detail & Related papers (2020-04-28T09:15:50Z)
- Adversarial Continual Learning [99.56738010842301]
We propose a hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features.
Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills.
arXiv Detail & Related papers (2020-03-21T02:08:17Z)
- Deep Multi-task Multi-label CNN for Effective Facial Attribute Classification [53.58763562421771]
We propose a novel deep multi-task multi-label CNN, termed DMM-CNN, for effective Facial Attribute Classification (FAC).
Specifically, DMM-CNN jointly optimizes two closely related tasks (i.e., facial landmark detection and FAC) to improve FAC performance by taking advantage of multi-task learning.
Two different network architectures are designed to extract features for the two groups of attributes, and a novel dynamic weighting scheme is proposed to automatically assign a loss weight to each facial attribute during training.
arXiv Detail & Related papers (2020-02-10T12:34:16Z)