Meta Transfer Learning for Emotion Recognition
- URL: http://arxiv.org/abs/2006.13211v1
- Date: Tue, 23 Jun 2020 00:25:28 GMT
- Title: Meta Transfer Learning for Emotion Recognition
- Authors: Dung Nguyen, Sridha Sridharan, Duc Thanh Nguyen, Simon Denman, David
Dean, Clinton Fookes
- Abstract summary: We propose a PathNet-based transfer learning method that can transfer emotional knowledge learned from one visual/audio emotion domain to another.
Our proposed system improves the performance of emotion recognition, making it substantially superior to recently proposed transfer learning methods based on fine-tuning pre-trained models.
- Score: 42.61707533351803
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has been widely adopted in automatic emotion recognition and
has led to significant progress in the field. However, due to insufficient
annotated emotion datasets, pre-trained models are limited in their
generalization capability and thus perform poorly on novel test sets. To
mitigate this challenge, transfer learning, in which pre-trained models are
fine-tuned, has been applied. However, the fine-tuned knowledge may overwrite
and/or discard important knowledge learned by the pre-trained models. In this
paper, we address this issue by proposing a PathNet-based transfer learning
method that is able to transfer emotional knowledge learned from one
visual/audio emotion domain to another, and to transfer the emotional
knowledge learned from multiple audio emotion domains into one another to
improve overall emotion recognition accuracy. To show the robustness of our
proposed system, we carried out various experiments on facial expression
recognition and speech emotion recognition tasks using three emotion datasets:
SAVEE, EMODB, and eNTERFACE. The experimental results indicate that our
proposed system improves the performance of emotion recognition, making it
substantially superior to recently proposed transfer learning methods based on
fine-tuning pre-trained models.
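For intuition, here is a minimal, hypothetical PyTorch sketch of the PathNet-style idea described above: a modular network in which a "path" selects one module per layer, and the modules on the path trained for the source emotion domain are frozen so target-domain training cannot overwrite them. All sizes, names, and the random stand-ins for evolved paths are illustrative assumptions, not the paper's actual implementation.

```python
import random
import torch
import torch.nn as nn

N_LAYERS, N_MODULES, DIM = 3, 4, 64  # illustrative sizes, not from the paper

class ModularNet(nn.Module):
    """A grid of modules; a path picks one module per layer."""
    def __init__(self, in_dim, n_classes):
        super().__init__()
        self.layers = nn.ModuleList([
            nn.ModuleList([nn.Linear(in_dim if l == 0 else DIM, DIM)
                           for _ in range(N_MODULES)])
            for l in range(N_LAYERS)
        ])
        self.head = nn.Linear(DIM, n_classes)

    def forward(self, x, path):
        # path[l] is the index of the module used at layer l
        for l, m in enumerate(path):
            x = torch.relu(self.layers[l][m](x))
        return self.head(x)

def random_path():
    return [random.randrange(N_MODULES) for _ in range(N_LAYERS)]

def freeze_path(net, path):
    # Lock the source domain's knowledge into the winning path's modules,
    # so later target-domain training cannot overwrite or discard it.
    for l, m in enumerate(path):
        for p in net.layers[l][m].parameters():
            p.requires_grad = False

# Usage: train/evolve a path on the source domain (omitted here), freeze it,
# then evolve new paths for the target domain that may reuse frozen modules.
net = ModularNet(in_dim=40, n_classes=7)  # e.g. 40-d audio features, 7 emotions
source_path = random_path()               # stand-in for the evolved best path
freeze_path(net, source_path)
target_path = random_path()               # re-evolved on the target domain
logits = net(torch.randn(8, 40), target_path)
```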
Related papers
- Learning Emotion Representations from Verbal and Nonverbal Communication [7.747924294389427]
We present EmotionCLIP, the first pre-training paradigm to extract visual emotion representations from verbal and nonverbal communication.
We guide EmotionCLIP to attend to nonverbal emotion cues through subject-aware context encoding and verbal emotion cues using sentiment-guided contrastive learning.
EmotionCLIP addresses the prevailing issue of data scarcity in emotion understanding, thereby fostering progress in related domains.
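As a rough illustration of the contrastive component named above, the sketch below shows a plain CLIP-style symmetric InfoNCE loss between video and text embeddings; the sentiment-guided weighting EmotionCLIP adds on top is omitted, and all names and sizes are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(video_emb, text_emb, temperature=0.07):
    # Pull matching video/text pairs together, push mismatched pairs apart.
    v = F.normalize(video_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    logits = v @ t.T / temperature   # pairwise cosine similarities
    labels = torch.arange(len(v))    # the i-th video matches the i-th text
    return (F.cross_entropy(logits, labels) +
            F.cross_entropy(logits.T, labels)) / 2

loss = contrastive_loss(torch.randn(8, 512), torch.randn(8, 512))
```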
arXiv Detail & Related papers (2023-05-22T21:36:55Z)
- A cross-corpus study on speech emotion recognition [29.582678406878568]
This study investigates whether information learnt from acted emotions is useful for detecting natural emotions.
Four adult English datasets covering acted, elicited and natural emotions are considered.
A state-of-the-art model is proposed to accurately measure this degradation in performance.
arXiv Detail & Related papers (2022-07-05T15:15:22Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset.
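To make the late fusion step concrete, here is a minimal, hypothetical PyTorch sketch: two modality-specific heads (stand-ins for the transfer-learned speaker model and the fine-tuned BERT model) each score an utterance, and their class probabilities are averaged at the decision level. Dimensions and the fusion weight are illustrative assumptions.

```python
import torch
import torch.nn as nn

class LateFusion(nn.Module):
    def __init__(self, speech_dim, text_dim, n_classes):
        super().__init__()
        self.speech_head = nn.Linear(speech_dim, n_classes)  # speech branch
        self.text_head = nn.Linear(text_dim, n_classes)      # text branch

    def forward(self, speech_feat, text_feat, w=0.5):
        # Fuse at the decision level: a weighted average of per-modality
        # class probabilities, not of the raw features themselves.
        p_speech = torch.softmax(self.speech_head(speech_feat), dim=-1)
        p_text = torch.softmax(self.text_head(text_feat), dim=-1)
        return w * p_speech + (1 - w) * p_text

model = LateFusion(speech_dim=192, text_dim=768, n_classes=4)
probs = model(torch.randn(2, 192), torch.randn(2, 768))
```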
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition [118.73025093045652]
We propose a pre-training model, MEmoBERT, for multimodal emotion recognition.
Unlike the conventional "pre-train, fine-tune" paradigm, we propose a prompt-based method that reformulates the downstream emotion classification task as a masked text prediction task.
Our proposed MEmoBERT significantly enhances emotion recognition performance.
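The prompt-based reformulation can be illustrated with an ordinary masked language model rather than the actual MEmoBERT weights: append a prompt containing a mask token to the utterance, then compare the model's scores for a small set of emotion label words. The backbone, prompt, and label words below are illustrative assumptions.

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

name = "bert-base-uncased"                       # stand-in backbone, not MEmoBERT
tok = AutoTokenizer.from_pretrained(name)
mlm = AutoModelForMaskedLM.from_pretrained(name).eval()
emotions = ["happy", "sad", "angry", "neutral"]  # illustrative label words

def classify(utterance: str) -> str:
    # Reformulate emotion classification as masked text prediction.
    prompt = f"{utterance} I am {tok.mask_token}."
    inputs = tok(prompt, return_tensors="pt")
    mask_pos = (inputs.input_ids == tok.mask_token_id).nonzero()[0, 1]
    with torch.no_grad():
        logits = mlm(**inputs).logits[0, mask_pos]
    label_ids = [tok.convert_tokens_to_ids(w) for w in emotions]
    return emotions[int(logits[label_ids].argmax())]

print(classify("What a wonderful surprise!"))    # likely "happy"
```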
arXiv Detail & Related papers (2021-10-27T09:57:00Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first work to introduce a stimuli selection process into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z)
- Using Knowledge-Embedded Attention to Augment Pre-trained Language Models for Fine-Grained Emotion Recognition [0.0]
We focus on improving fine-grained emotion recognition by introducing external knowledge into a pre-trained self-attention model.
Our results and error analyses show that our approach outperforms previous models on several datasets.
arXiv Detail & Related papers (2021-07-31T09:41:44Z)
- Acted vs. Improvised: Domain Adaptation for Elicitation Approaches in Audio-Visual Emotion Recognition [29.916609743097215]
Key challenges in developing generalized automatic emotion recognition systems include scarcity of labeled data and lack of gold-standard references.
In this work, we regard the emotion elicitation approach as domain knowledge, and explore domain transfer learning techniques on emotional utterances.
arXiv Detail & Related papers (2021-04-05T15:59:31Z)
- Emotion Recognition From Gait Analyses: Current Research and Future Directions [48.93172413752614]
Gait conveys information about the walker's emotion.
The mapping between various emotions and gait patterns provides a new source for automated emotion recognition.
Gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject.
arXiv Detail & Related papers (2020-03-13T08:22:33Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that machines be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.