Multitask Multi-database Emotion Recognition
- URL: http://arxiv.org/abs/2107.04127v2
- Date: Mon, 12 Jul 2021 15:36:55 GMT
- Title: Multitask Multi-database Emotion Recognition
- Authors: Manh Tu Vu, Marie Beurton-Aimar
- Abstract summary: We introduce our submission to the 2nd Affective Behavior Analysis in-the-wild (ABAW) 2021 competition.
We train a unified deep learning model on multiple databases to perform two tasks.
Experimental results show that the network achieves promising results on the validation set of the AffWild2 database.
- Score: 1.52292571922932
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this work, we introduce our submission to the 2nd Affective Behavior
Analysis in-the-wild (ABAW) 2021 competition. We train a unified deep learning
model on multiple databases to perform two tasks: seven basic facial expression
prediction and valence-arousal estimation. Since these databases do not
contain labels for both tasks, we apply the knowledge distillation
technique to train two networks: one teacher and one student model.
The student model is trained using both ground truth labels and soft
labels derived from the pretrained teacher model. During training, we add
one more task, the combination of the two mentioned tasks, to better
exploit inter-task correlations. We also exploit the videos shared between
the two tasks of the AffWild2 database used in the competition to
further improve the performance of the network. Experimental results show that
the network achieves promising results on the validation set of the
AffWild2 database. Code and pretrained models are publicly available at
https://github.com/glmanhtu/multitask-abaw-2021
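For readers unfamiliar with this training setup, the sketch below shows the generic form of a knowledge distillation objective in which the student is supervised by both ground-truth labels and the teacher's softened predictions. It is a minimal PyTorch illustration under assumed hyperparameters (the `temperature` and `alpha` values are placeholders), not the authors' implementation; refer to the repository above for the actual code.

```python
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, targets,
                      temperature=2.0, alpha=0.5):
    """Hard-label cross-entropy plus soft-label KL distillation.

    `temperature` and `alpha` are illustrative hyperparameters,
    not values reported in the paper.
    """
    # Supervised term: cross-entropy against ground-truth expression labels.
    hard_loss = F.cross_entropy(student_logits, targets)

    # Distillation term: KL divergence between temperature-softened
    # student and teacher distributions (scaled by T^2, as is customary).
    log_p_student = F.log_softmax(student_logits / temperature, dim=-1)
    p_teacher = F.softmax(teacher_logits / temperature, dim=-1)
    soft_loss = F.kl_div(log_p_student, p_teacher,
                         reduction="batchmean") * temperature ** 2

    return alpha * hard_loss + (1.0 - alpha) * soft_loss


# Example usage with random tensors (7 basic expression classes).
student_logits = torch.randn(8, 7)
teacher_logits = torch.randn(8, 7)
targets = torch.randint(0, 7, (8,))
loss = distillation_loss(student_logits, teacher_logits, targets)
```

In the multi-database setting described above, the hard-label term would only be applied to samples whose database provides labels for the task in question, while the soft-label term can be computed for every sample from the teacher's predictions.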
Related papers
- Self-Training and Multi-Task Learning for Limited Data: Evaluation Study
on Object Detection [4.9914667450658925]
Experimental results show an improvement in performance when using a weak teacher with unseen data for training a multi-task student.
Despite the limited setup, we believe the experimental results show the potential of multi-task knowledge distillation and self-training.
arXiv Detail & Related papers (2023-09-12T14:50:14Z) - Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks [69.38572074372392]
We present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks.
Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks.
arXiv Detail & Related papers (2023-07-13T16:39:08Z) - Relational Multi-Task Learning: Modeling Relations between Data and
Tasks [84.41620970886483]
A key assumption in multi-task learning is that at inference time the model only has access to a given data point but not to that data point's labels from other tasks.
Here we introduce a novel relational multi-task learning setting where we leverage data point labels from auxiliary tasks to make more accurate predictions.
We develop MetaLink, where our key innovation is to build a knowledge graph that connects data points and tasks.
arXiv Detail & Related papers (2023-03-14T07:15:41Z) - Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z) - Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) tries to tackle scenarios in which the test data does not fully follow the same distribution as the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z) - Multitask Emotion Recognition Model with Knowledge Distillation and Task
Discriminator [0.0]
We designed a multi-task model using the ABAW dataset to predict emotions.
We trained the model from incomplete labels by applying the knowledge distillation technique.
As a result, we achieved a score of 2.40 on the Multi-Task Learning validation dataset.
arXiv Detail & Related papers (2022-03-24T13:50:48Z) - The Effect of Diversity in Meta-Learning [79.56118674435844]
Few-shot learning aims to learn representations that can tackle novel tasks given a small number of examples.
Recent studies show that task distribution plays a vital role in the model's performance.
We study different task distributions on a myriad of models and datasets to evaluate the effect of task diversity on meta-learning algorithms.
arXiv Detail & Related papers (2022-01-27T19:39:07Z) - Emotion Recognition with Incomplete Labels Using Modified Multi-task
Learning Technique [8.012391782839384]
We propose a method that utilizes the association between seven basic emotions and twelve action units from the AffWild2 dataset.
By combining the knowledge from two correlated tasks, both performances are improved by a large margin compared to a model employing only one kind of label.
arXiv Detail & Related papers (2021-07-09T03:43:53Z) - Feature Pyramid Network for Multi-task Affective Analysis [15.645791213312734]
We propose a novel model named feature pyramid networks for multi-task affect analysis.
The hierarchical features are extracted to predict three labels, and we apply a teacher-student training strategy to learn from pretrained single-task models.
arXiv Detail & Related papers (2021-07-08T08:10:04Z) - Low Resource Multi-Task Sequence Tagging -- Revisiting Dynamic
Conditional Random Fields [67.51177964010967]
We compare different models for low resource multi-task sequence tagging that leverage dependencies between label sequences for different tasks.
We find that explicit modeling of inter-dependencies between task predictions outperforms single-task as well as standard multi-task models.
arXiv Detail & Related papers (2020-05-01T07:11:34Z) - Multitask Emotion Recognition with Incomplete Labels [7.811459544911892]
We train a unified model to perform three tasks: facial action unit detection, expression classification, and valence-arousal estimation.
Most existing datasets do not contain labels for all three tasks.
We find that most of the student models outperform their teacher model on all the three tasks.
arXiv Detail & Related papers (2020-02-10T05:32:12Z)