Emotion Recognition with Incomplete Labels Using Modified Multi-task
Learning Technique
- URL: http://arxiv.org/abs/2107.04192v1
- Date: Fri, 9 Jul 2021 03:43:53 GMT
- Title: Emotion Recognition with Incomplete Labels Using Modified Multi-task
Learning Technique
- Authors: Phan Tran Dac Thinh, Hoang Manh Hung, Hyung-Jeong Yang, Soo-Hyung Kim,
and Guee-Sang Lee
- Abstract summary: We propose a method that utilizes the association between seven basic emotions and twelve action units from the AffWild2 dataset.
By combining the knowledge for two correlated tasks, both performances are improved by a large margin compared to those with the model employing only one kind of label.
- Score: 8.012391782839384
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The task of predicting affective information in the wild such as seven basic
emotions or action units from human faces has gradually become more interesting
due to the accessibility and availability of massive annotated datasets. In
this study, we propose a method that utilizes the association between seven
basic emotions and twelve action units from the AffWild2 dataset. The method,
built on a ResNet50 architecture, applies a multi-task learning technique to
handle the incomplete labels of the two tasks. By combining the
knowledge for two correlated tasks, both performances are improved by a large
margin compared to those with the model employing only one kind of label.
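The abstract describes multi-task learning over incomplete labels but does not give the loss. A common way to implement this is to mask out the loss term for whichever task's label is missing on a given sample; the sketch below illustrates that idea (the function name, the weighting `alpha`, and the use of plain NumPy are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def masked_multitask_loss(emo_logits, au_logits, emo_label, au_label, alpha=0.5):
    """Combine a 7-class emotion loss and a 12-unit action-unit loss,
    skipping whichever label is missing (None) for a given sample."""
    loss = 0.0
    if emo_label is not None:
        # softmax cross-entropy over the 7 basic emotions
        z = emo_logits - emo_logits.max()
        log_probs = z - np.log(np.exp(z).sum())
        loss += -alpha * log_probs[emo_label]
    if au_label is not None:
        # binary cross-entropy over the 12 action units
        p = 1.0 / (1.0 + np.exp(-au_logits))
        bce = -(au_label * np.log(p) + (1 - au_label) * np.log(1 - p))
        loss += (1 - alpha) * bce.mean()
    return loss
```

A sample annotated only with an emotion (so `au_label=None`) contributes only the emotion term, which is how the two partially labeled tasks can share one backbone during training.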
Related papers
- The impact of Compositionality in Zero-shot Multi-label action recognition for Object-based tasks [4.971065912401385]
We propose Dual-VCLIP, a unified approach for zero-shot multi-label action recognition.
Dual-VCLIP enhances VCLIP, a zero-shot action recognition method, with the DualCoOp method for multi-label image classification.
We validate our method on the Charades dataset that includes a majority of object-based actions.
arXiv Detail & Related papers (2024-05-14T15:28:48Z)
- PDT: Pretrained Dual Transformers for Time-aware Bipartite Graphs [30.71271242109731]
We propose a pre-training method that learns a bi-directional mapping between the spaces of the user-side and the content-side.
We evaluate the proposed method for the recommendation task.
arXiv Detail & Related papers (2023-06-02T20:38:43Z)
- Pre-training Multi-task Contrastive Learning Models for Scientific Literature Understanding [52.723297744257536]
Pre-trained language models (LMs) have shown effectiveness in scientific literature understanding tasks.
We propose a multi-task contrastive learning framework, SciMult, to facilitate common knowledge sharing across different literature understanding tasks.
arXiv Detail & Related papers (2023-05-23T16:47:22Z)
- Relational Multi-Task Learning: Modeling Relations between Data and Tasks [84.41620970886483]
A key assumption in multi-task learning is that at inference time the model only has access to a given data point, not to that data point's labels from other tasks.
Here we introduce a novel relational multi-task learning setting where we leverage data point labels from auxiliary tasks to make more accurate predictions.
We develop MetaLink, where our key innovation is to build a knowledge graph that connects data points and tasks.
arXiv Detail & Related papers (2023-03-14T07:15:41Z)
- Self-Supervised Graph Neural Network for Multi-Source Domain Adaptation [51.21190751266442]
Domain adaptation (DA) tackles scenarios in which the test data does not fully follow the distribution of the training data.
By learning from large-scale unlabeled samples, self-supervised learning has now become a new trend in deep learning.
We propose a novel Self-Supervised Graph Neural Network (SSG) to enable more effective inter-task information exchange and knowledge sharing.
arXiv Detail & Related papers (2022-04-08T03:37:56Z)
- A Novel Multi-Task Learning Method for Symbolic Music Emotion Recognition [76.65908232134203]
Symbolic Music Emotion Recognition (SMER) predicts music emotion from symbolic data, such as MIDI and MusicXML.
In this paper, we present a simple multi-task framework for SMER, which incorporates the emotion recognition task with other emotion-related auxiliary tasks.
arXiv Detail & Related papers (2022-01-15T07:45:10Z)
- MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition [118.73025093045652]
We propose a pre-training model, MEmoBERT, for multimodal emotion recognition.
Unlike the conventional "pre-train, finetune" paradigm, we propose a prompt-based method that reformulates the downstream emotion classification task as a masked text prediction.
Our proposed MEmoBERT significantly enhances emotion recognition performance.
arXiv Detail & Related papers (2021-10-27T09:57:00Z)
- Multitask Multi-database Emotion Recognition [1.52292571922932]
We introduce our submission to the 2nd Affective Behavior Analysis in-the-wild (ABAW) 2021 competition.
We train a unified deep learning model on multiple databases to perform two tasks.
Experimental results show that the network achieves promising results on the validation set of the AffWild2 database.
arXiv Detail & Related papers (2021-07-08T21:57:58Z)
- Weakly Supervised Multi-task Learning for Concept-based Explainability [3.441021278275805]
We leverage multi-task learning to train a neural network that jointly learns to predict a decision task and its associated concepts.
There are two main challenges to overcome: concept label scarcity and joint learning.
We show it is possible to improve performance at both tasks by combining labels of heterogeneous quality.
arXiv Detail & Related papers (2021-04-26T10:42:19Z)
- Adaptive Self-training for Few-shot Neural Sequence Labeling [55.43109437200101]
We develop techniques to address the label scarcity challenge for neural sequence labeling models.
Self-training serves as an effective mechanism to learn from large amounts of unlabeled data.
Meta-learning helps in adaptive sample re-weighting to mitigate error propagation from noisy pseudo-labels.
arXiv Detail & Related papers (2020-10-07T22:29:05Z)
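The self-training entry above describes re-weighting samples to limit the damage from noisy pseudo-labels. The paper's meta-learned weights are beyond a short sketch, but a simpler confidence-based stand-in conveys the idea of down-weighting uncertain pseudo-labels (all names and the `temperature` parameter here are illustrative, and this heuristic is not the paper's meta-learning method):

```python
import numpy as np

def pseudo_label_weights(probs, temperature=1.0):
    """Turn a teacher model's class probabilities for unlabeled samples
    into hard pseudo-labels plus per-sample weights, so low-confidence
    (likely noisy) pseudo-labels contribute less to the student's loss."""
    labels = probs.argmax(axis=1)                # hard pseudo-labels
    confidence = probs.max(axis=1)               # peak class probability
    weights = confidence ** (1.0 / temperature)  # sharpen or soften
    return labels, weights / weights.sum()       # normalize over the batch
```

For example, a sample labeled with probability 0.9 receives nearly twice the training weight of one labeled at 0.5, which approximates the error-mitigation effect the blurb describes.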
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.