Multitask Emotion Recognition with Incomplete Labels
- URL: http://arxiv.org/abs/2002.03557v2
- Date: Tue, 10 Mar 2020 11:52:37 GMT
- Title: Multitask Emotion Recognition with Incomplete Labels
- Authors: Didan Deng, Zhaokang Chen, Bertram E. Shi
- Abstract summary: We train a unified model to perform three tasks: facial action unit detection, expression classification, and valence-arousal estimation.
Most existing datasets do not contain labels for all three tasks.
We find that most of the student models outperform their teacher model on all three tasks.
- Score: 7.811459544911892
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We train a unified model to perform three tasks: facial action unit
detection, expression classification, and valence-arousal estimation. We
address two main challenges of learning the three tasks. First, most existing
datasets are highly imbalanced. Second, most existing datasets do not contain
labels for all three tasks. To tackle the first challenge, we apply data
balancing techniques to experimental datasets. To tackle the second challenge,
we propose an algorithm for the multitask model to learn from missing
(incomplete) labels. This algorithm has two steps. We first train a teacher
model to perform all three tasks, where each instance is trained with the
ground-truth label of its corresponding task. Second, we treat the outputs of
the teacher model as soft labels, and use these soft labels together with the
ground truth to train the student model. We find that most of the student
models outperform their teacher model on all three tasks. Finally, we use
model ensembling to further boost performance on the three tasks.
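As a minimal sketch of the second step, the PyTorch snippet below shows one way the student objective could combine ground-truth labels, where a sample carries them, with the teacher's soft labels for the tasks whose labels are missing. The function and tensor names (`student_out`, `teacher_out`, `masks`, the temperature value) are hypothetical, and the per-task losses and equal weighting are assumptions rather than the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def _masked(loss_fn, pred, target):
    # Return a zero tensor when no batch element falls in this partition,
    # so empty slices do not produce NaN losses.
    return loss_fn(pred, target) if pred.shape[0] > 0 else pred.new_zeros(())

def student_loss(student_out, teacher_out, labels, masks, temperature=2.0):
    """Student objective for one batch with incomplete labels.

    `student_out` / `teacher_out`: dicts of per-task outputs ('au' logits,
    'expr' logits, 'va' regressed values). `labels`: ground-truth tensors.
    `masks`: boolean tensors marking which samples carry a label per task.
    """
    m_au, m_ex, m_va = masks['au'], masks['expr'], masks['va']

    # Action-unit detection: multi-label BCE on labelled samples,
    # BCE against the teacher's sigmoid outputs for unlabelled ones.
    au = _masked(F.binary_cross_entropy_with_logits,
                 student_out['au'][m_au], labels['au'][m_au]) \
       + _masked(F.binary_cross_entropy_with_logits,
                 student_out['au'][~m_au], torch.sigmoid(teacher_out['au'][~m_au]))

    # Expression classification: cross-entropy on labelled samples,
    # temperature-scaled KL to the teacher's distribution for the rest.
    ex = _masked(F.cross_entropy,
                 student_out['expr'][m_ex], labels['expr'][m_ex]) \
       + _masked(lambda s, t: F.kl_div(F.log_softmax(s / temperature, dim=-1),
                                       F.softmax(t / temperature, dim=-1),
                                       reduction='batchmean') * temperature ** 2,
                 student_out['expr'][~m_ex], teacher_out['expr'][~m_ex])

    # Valence-arousal estimation: regress to ground truth where available,
    # to the teacher's predictions otherwise.
    va = _masked(F.mse_loss, student_out['va'][m_va], labels['va'][m_va]) \
       + _masked(F.mse_loss, student_out['va'][~m_va], teacher_out['va'][~m_va])

    return au + ex + va
```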
Related papers
- Relational Multi-Task Learning: Modeling Relations between Data and
Tasks [84.41620970886483]
A key assumption in multi-task learning is that at inference time the model only has access to a given data point, but not to that data point's labels from other tasks.
Here we introduce a novel relational multi-task learning setting where we leverage data point labels from auxiliary tasks to make more accurate predictions.
We develop MetaLink, where our key innovation is to build a knowledge graph that connects data points and tasks.
arXiv Detail & Related papers (2023-03-14T07:15:41Z) - Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z) - Multitask Emotion Recognition Model with Knowledge Distillation and Task
Discriminator [0.0]
We designed a multi-task model using the ABAW dataset to predict emotions.
We trained the model from incomplete labels by applying the knowledge distillation technique.
As a result, we achieved 2.40 on the Multi-Task Learning task validation dataset.
arXiv Detail & Related papers (2022-03-24T13:50:48Z) - X-Learner: Learning Cross Sources and Tasks for Universal Visual
Representation [71.51719469058666]
We propose a representation learning framework called X-Learner.
X-Learner learns the universal feature of multiple vision tasks supervised by various sources.
X-Learner achieves strong performance on different tasks without extra annotations, modalities, or computational costs.
arXiv Detail & Related papers (2022-03-16T17:23:26Z) - Multi-Task Self-Training for Learning General Representations [97.01728635294879]
Multi-task self-training (MuST) harnesses the knowledge in independent specialized teacher models to train a single general student model.
MuST is scalable with unlabeled or partially labeled datasets and outperforms both specialized supervised models and self-supervised models when training on large scale datasets.
arXiv Detail & Related papers (2021-08-25T17:20:50Z) - Multitask Multi-database Emotion Recognition [1.52292571922932]
We introduce our submission to the 2nd Affective Behavior Analysis in-the-wild (ABAW) 2021 competition.
We train a unified deep learning model on multiple databases to perform two tasks.
Experimental results show that the network achieves promising results on the validation set of the AffWild2 database.
arXiv Detail & Related papers (2021-07-08T21:57:58Z) - Feature Pyramid Network for Multi-task Affective Analysis [15.645791213312734]
We propose a novel model named feature pyramid networks for multi-task affect analysis.
Hierarchical features are extracted to predict three labels, and we apply a teacher-student training strategy to learn from pretrained single-task models.
arXiv Detail & Related papers (2021-07-08T08:10:04Z) - Boosting a Model Zoo for Multi-Task and Continual Learning [15.110807414130923]
"Model Zoo" is an algorithm that builds an ensemble of models, each of which is very small, and it is trained on a smaller set of tasks.
Model Zoo achieves large gains in prediction accuracy compared to state-of-the-art methods in multi-task and continual learning.
arXiv Detail & Related papers (2021-06-06T04:25:09Z) - Dealing with Missing Modalities in the Visual Question Answer-Difference
Prediction Task through Knowledge Distillation [75.1682163844354]
We address the issues of missing modalities that arise in the Visual Question Answer-Difference prediction task.
We introduce a model, the "Big" Teacher, that takes the image/question/answer triplet as its input and outperforms the baseline.
arXiv Detail & Related papers (2021-04-13T06:41:11Z) - KGPT: Knowledge-Grounded Pre-Training for Data-to-Text Generation [100.79870384880333]
We propose knowledge-grounded pre-training (KGPT) to generate knowledge-enriched text.
We adopt three settings, namely fully-supervised, zero-shot, and few-shot, to evaluate its effectiveness.
Under the zero-shot setting, our model achieves over 30 ROUGE-L on WebNLG while all other baselines fail.
arXiv Detail & Related papers (2020-10-05T19:59:05Z)