Deep Multi-task Learning for Facial Expression Recognition and Synthesis Based on Selective Feature Sharing
- URL: http://arxiv.org/abs/2007.04514v2
- Date: Sun, 28 Nov 2021 06:48:22 GMT
- Title: Deep Multi-task Learning for Facial Expression Recognition and Synthesis Based on Selective Feature Sharing
- Authors: Rui Zhao, Tianshan Liu, Jun Xiao, Daniel P.K. Lun, Kin-Man Lam
- Abstract summary: We propose a novel selective feature-sharing method, and establish a multi-task network for facial expression recognition and facial expression synthesis.
The proposed method can effectively transfer beneficial features between different tasks, while filtering out useless and harmful information.
Experimental results show that the proposed method achieves state-of-the-art performance on commonly used facial expression recognition benchmarks.
- Score: 28.178390846446938
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-task learning is an effective learning strategy for deep-learning-based
facial expression recognition tasks. However, most existing methods give only
limited consideration to feature selection when transferring information
between tasks, which may lead to task interference during multi-task
training. To address this problem, we propose a novel selective
feature-sharing method, and establish a multi-task network for facial
expression recognition and facial expression synthesis. The proposed method can
effectively transfer beneficial features between different tasks, while
filtering out useless and harmful information. Moreover, we employ the facial
expression synthesis task to enlarge and balance the training dataset to
further enhance the generalization ability of the proposed method. Experimental
results show that the proposed method achieves state-of-the-art performance on
commonly used facial expression recognition benchmarks, making it a
potential solution to real-world facial expression recognition problems.
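As a rough illustration (not the authors' actual architecture), a selective feature-sharing step can be sketched as a learned gate that decides which channels of one task branch are transferred to the other; all names, shapes, and weights below are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy features from two task branches (batch of 4, 8 channels each).
feat_rec = rng.standard_normal((4, 8))   # recognition branch
feat_syn = rng.standard_normal((4, 8))   # synthesis branch

# Gate parameters (random here; in practice trained end to end).
W_gate = rng.standard_normal((8, 8))
b_gate = np.zeros(8)

# A sigmoid gate in (0, 1) decides, per sample and channel, how much
# of the synthesis features is transferred into the recognition branch,
# filtering out channels that would interfere with recognition.
gate = sigmoid(feat_syn @ W_gate + b_gate)
fused = feat_rec + gate * feat_syn
```

The gate is the selection mechanism: channels whose gate value is near zero are effectively blocked from crossing between tasks.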
Related papers
- Task-adaptive Q-Face [75.15668556061772]
We propose a novel task-adaptive multi-task face analysis method named Q-Face.
Q-Face simultaneously performs multiple face analysis tasks with a unified model.
Our method achieves state-of-the-art performance on face expression recognition, action unit detection, face attribute analysis, age estimation, and face pose estimation.
arXiv Detail & Related papers (2024-05-15T03:13:11Z) - Cross-Task Multi-Branch Vision Transformer for Facial Expression and Mask Wearing Classification [13.995453649985732]
We propose a unified multi-branch vision transformer for facial expression recognition and mask wearing classification tasks.
Our approach extracts shared features for both tasks using a dual-branch architecture.
Our proposed framework reduces the overall complexity compared with using separate networks for both tasks.
arXiv Detail & Related papers (2024-04-22T22:02:19Z) - Revisiting Self-Supervised Contrastive Learning for Facial Expression Recognition [39.647301516599505]
We revisit the use of self-supervised contrastive learning and explore three core strategies to enforce expression-specific representations.
Experimental results show that our proposed method outperforms the current state-of-the-art self-supervised learning methods.
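For context, self-supervised contrastive learning typically pulls two augmented views of the same image together while pushing other samples apart. A minimal NumPy sketch of the standard InfoNCE objective (a generic illustration, not the paper's exact loss or data):

```python
import numpy as np

def info_nce(z1, z2, tau=0.5):
    """InfoNCE loss between two batches of view embeddings, shape (N, D)."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                     # (N, N) cosine similarities
    # Row-wise log-softmax; diagonal entries are the positive pairs.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

rng = np.random.default_rng(1)
base = rng.standard_normal((8, 16))
# Two "augmented views": the same embeddings plus small perturbations.
view1 = base + 0.05 * rng.standard_normal((8, 16))
view2 = base + 0.05 * rng.standard_normal((8, 16))

loss_aligned = info_nce(view1, view2)
loss_random = info_nce(view1, rng.standard_normal((8, 16)))
# Aligned views yield a lower loss than unrelated embeddings.
```

Expression-specific strategies like those in the paper modify how the views and positives are chosen, not this basic objective.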
arXiv Detail & Related papers (2022-10-08T00:04:27Z) - CIAO! A Contrastive Adaptation Mechanism for Non-Universal Facial Expression Recognition [80.07590100872548]
We propose Contrastive Inhibitory Adaptation (CIAO), a mechanism that adapts the last layer of facial encoders to depict specific affective characteristics on different datasets.
CIAO improves facial expression recognition performance across six datasets with distinct affective representations.
arXiv Detail & Related papers (2022-08-10T15:46:05Z) - Learning Multi-Task Transferable Rewards via Variational Inverse Reinforcement Learning [10.782043595405831]
We extend an empowerment-based regularization technique to multi-task settings within a generative adversarial network framework.
In multi-task environments with unknown dynamics, we focus on learning a reward and policy from unlabeled expert examples.
Our proposed method derives and optimizes a variational lower bound on the situational mutual information.
arXiv Detail & Related papers (2022-06-19T22:32:41Z) - Human-Centered Prior-Guided and Task-Dependent Multi-Task Representation Learning for Action Recognition Pre-Training [8.571437792425417]
We propose a novel action recognition pre-training framework that exploits human-centered prior knowledge to generate more informative representations.
Specifically, we distill knowledge from a human parsing model to enrich the semantic capability of representation.
In addition, we combine knowledge distillation with contrastive learning to constitute a task-dependent multi-task framework.
arXiv Detail & Related papers (2022-04-27T06:51:31Z) - Multi-Task Neural Processes [105.22406384964144]
We develop multi-task neural processes, a new variant of neural processes for multi-task learning.
In particular, we propose to explore transferable knowledge from related tasks in the function space to provide inductive bias for improving each individual task.
Results demonstrate the effectiveness of multi-task neural processes in transferring useful knowledge among tasks for multi-task learning.
arXiv Detail & Related papers (2021-11-10T17:27:46Z) - On the relationship between disentanglement and multi-task learning [62.997667081978825]
We take a closer look at the relationship between disentanglement and multi-task learning based on hard parameter sharing.
We show that disentanglement appears naturally during the process of multi-task neural network training.
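Hard parameter sharing, the setting studied here, means one shared trunk feeds several task-specific heads. A toy NumPy sketch (hypothetical shapes and untrained random weights, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hard parameter sharing: one shared trunk, separate heads per task.
W_shared = 0.1 * rng.standard_normal((32, 16))
W_expr = 0.1 * rng.standard_normal((16, 7))    # e.g. 7 expression classes
W_aux = 0.1 * rng.standard_normal((16, 3))     # e.g. a 3-way auxiliary task

def forward(x):
    h = np.maximum(0.0, x @ W_shared)   # shared representation (ReLU)
    return h @ W_expr, h @ W_aux        # task-specific outputs

x = rng.standard_normal((5, 32))        # batch of 5 input vectors
out_expr, out_aux = forward(x)
```

Because both heads backpropagate through the same trunk, the shared representation is pushed to encode factors useful to every task, which is where the paper's observed disentanglement arises.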
arXiv Detail & Related papers (2021-10-07T14:35:34Z) - Pretext Tasks selection for multitask self-supervised speech representation learning [23.39079406674442]
This paper introduces a method to select a group of pretext tasks among a set of candidates.
Experiments conducted on speaker recognition and automatic speech recognition validate our approach.
arXiv Detail & Related papers (2021-07-01T16:36:29Z) - Facial Emotion Recognition with Noisy Multi-task Annotations [88.42023952684052]
We introduce a new problem of facial emotion recognition with noisy multi-task annotations.
For this new problem, we suggest a formulation from the viewpoint of joint distribution matching.
We develop a new method to enable emotion prediction and joint distribution learning.
arXiv Detail & Related papers (2020-10-19T20:39:37Z) - Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel method for the joint deep learning of facial expression synthesis and recognition for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
arXiv Detail & Related papers (2020-02-06T10:56:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences.