Computer Vision Estimation of Emotion Reaction Intensity in the Wild
- URL: http://arxiv.org/abs/2303.10741v2
- Date: Thu, 3 Aug 2023 00:21:03 GMT
- Title: Computer Vision Estimation of Emotion Reaction Intensity in the Wild
- Authors: Yang Qian, Ali Kargarandehkordi, Onur Cezmi Mutlu, Saimourya Surabhi,
Mohammadmahdi Honarmand, Dennis Paul Wall, Peter Washington
- Abstract summary: We describe our submission to the newly introduced Emotional Reaction Intensity (ERI) Estimation challenge.
We developed four deep neural networks trained in the visual domain and a multimodal model trained with both visual and audio features to predict emotion reaction intensity.
- Score: 1.5481864635049696
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emotions play an essential role in human communication. Developing computer
vision models for automatic recognition of emotion expression can aid in a
variety of domains, including robotics, digital behavioral healthcare, and
media analytics. There are three types of emotional representations which are
traditionally modeled in affective computing research: Action Units, Valence
Arousal (VA), and Categorical Emotions. As part of an effort to move beyond
these representations towards more fine-grained labels, we describe our
submission to the newly introduced Emotional Reaction Intensity (ERI)
Estimation challenge in the 5th competition for Affective Behavior Analysis
in-the-Wild (ABAW). We developed four deep neural networks trained in the
visual domain and a multimodal model trained with both visual and audio
features to predict emotion reaction intensity. Our best performing model on
the Hume-Reaction dataset achieved an average Pearson correlation coefficient
of 0.4080 on the test set using a pre-trained ResNet50 model. This work
provides a first step towards the development of production-grade models which
predict emotion reaction intensities rather than discrete emotion categories.
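Since the abstract names both the backbone (a pre-trained ResNet50) and the metric (average Pearson correlation across emotions), a minimal sketch of the two may help. This is not the authors' code: the regression head, the sigmoid output range, and the seven-emotion output size (matching the Hume-Reaction labels) are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_EMOTIONS = 7  # assumption: Hume-Reaction annotates seven emotional reactions

class IntensityRegressor(nn.Module):
    """ResNet50 backbone with a per-emotion intensity regression head."""
    def __init__(self, num_emotions: int = NUM_EMOTIONS):
        super().__init__()
        backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
        backbone.fc = nn.Identity()  # expose the 2048-d pooled features
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(2048, num_emotions),
            nn.Sigmoid(),  # assumption: intensities are scaled to [0, 1]
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.head(self.backbone(x))

def mean_pearson(preds: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Average Pearson correlation across emotion columns (the ERI metric)."""
    p = preds - preds.mean(dim=0)
    t = targets - targets.mean(dim=0)
    r = (p * t).sum(dim=0) / (p.norm(dim=0) * t.norm(dim=0) + 1e-8)
    return r.mean()
```

One plausible use is clip-level scoring: stack per-clip predictions and labels into (N, 7) tensors and call mean_pearson(preds, targets); the paper's exact pipeline may differ.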
Related papers
- Emotion Detection through Body Gesture and Face [0.0]
The project addresses the challenge of emotion recognition by focusing on non-facial cues, specifically hand and body gestures.
Traditional emotion recognition systems mainly rely on facial expression analysis and often ignore the rich emotional information conveyed through body language.
The project aims to contribute to the field of affective computing by enhancing the ability of machines to interpret and respond to human emotions in a more comprehensive and nuanced way.
arXiv Detail & Related papers (2024-07-13T15:15:50Z)
- Language Models (Mostly) Do Not Consider Emotion Triggers When Predicting Emotion [87.18073195745914]
We investigate how well human-annotated emotion triggers correlate with the features that models deem salient when predicting emotions.
Using EmoTrigger, we evaluate the ability of large language models to identify emotion triggers.
Our analysis reveals that emotion triggers are largely not considered salient features by emotion prediction models; instead, there is an intricate interplay between various features and the task of emotion detection.
arXiv Detail & Related papers (2023-11-16T06:20:13Z)
- Implicit Design Choices and Their Impact on Emotion Recognition Model Development and Evaluation [5.534160116442057]
The subjectivity of emotions poses significant challenges in developing accurate and robust computational models.
This thesis examines critical facets of emotion recognition, beginning with the collection of diverse datasets.
To handle the challenge of non-representative training data, this work collects the Multimodal Stressed Emotion dataset.
arXiv Detail & Related papers (2023-09-06T02:45:42Z)
- Multimodal Feature Extraction and Attention-based Fusion for Emotion Estimation in Videos [16.28109151595872]
We introduce our submission to the CVPR 2023 Competition on Affective Behavior Analysis in-the-wild (ABAW).
We exploited multimodal features extracted from videos of different lengths in the competition dataset, including audio, pose, and images.
Our system achieves a performance of 0.361 on the validation dataset; a sketch of this kind of attention-based fusion follows this entry.
arXiv Detail & Related papers (2023-03-18T14:08:06Z)
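The entry above describes attention-based fusion of audio, pose, and image features. A hedged sketch of one common form of such fusion follows; the dimensions, the scoring layer, and the softmax weighting are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Learn a softmax weight per modality embedding and take the weighted sum."""
    def __init__(self, dim: int = 256):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # scalar relevance score per modality

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (batch, num_modalities, dim), e.g. stacked audio/pose/image features
        weights = torch.softmax(self.score(feats), dim=1)  # (batch, M, 1)
        return (weights * feats).sum(dim=1)                # (batch, dim)
```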
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and achieves state-of-the-art results in classifying 32 emotions; a probing sketch follows this entry.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
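The entry above pairs a contextualized embedding encoder with a multi-head probing model. The sketch below shows one plausible reading, several independent linear probes over frozen embeddings whose logits are averaged; the head count, embedding size, and averaging are assumptions, not the paper's design.

```python
import torch
import torch.nn as nn

class MultiHeadProbe(nn.Module):
    """Independent linear probes over a frozen embedding, averaged into one logit set."""
    def __init__(self, dim: int = 768, num_heads: int = 4, num_classes: int = 32):
        super().__init__()
        self.heads = nn.ModuleList(nn.Linear(dim, num_classes) for _ in range(num_heads))

    def forward(self, emb: torch.Tensor) -> torch.Tensor:
        # emb: (batch, dim) frozen sentence embedding from the encoder
        return torch.stack([head(emb) for head in self.heads]).mean(dim=0)
```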
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of facial muscle movements.
We determine whether there are time-related differences in expressions among emotional groups using a functional F-test; a simplified pointwise sketch follows this entry.
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
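As a rough stand-in for the functional F-test named above (a pointwise one-way ANOVA per time step, not a true functional test), one could do the following; group counts and array shapes are purely illustrative.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Illustrative data: 100-step muscle-activation curves for three emotion groups.
groups = [rng.normal(size=(30, 100)) for _ in range(3)]

# Pointwise approximation: run a one-way ANOVA at every time step.
p_values = np.array(
    [f_oneway(*(g[:, t] for g in groups)).pvalue for t in range(100)]
)
significant = p_values < 0.05  # time steps where the group means differ
```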
- Modality-Transferable Emotion Embeddings for Low-Resource Multimodal Emotion Recognition [55.44502358463217]
We propose a modality-transferable model with emotion embeddings to tackle the aforementioned issues.
Our model achieves state-of-the-art performance on most of the emotion categories.
Our model also outperforms existing baselines in the zero-shot and few-shot scenarios for unseen emotions; an embedding-similarity sketch follows this entry.
arXiv Detail & Related papers (2020-09-21T06:10:39Z)
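One common mechanism behind zero-shot results like those above, chosen here as an illustrative assumption rather than the paper's method, is nearest-neighbor matching between a predicted embedding and emotion-word embeddings:

```python
import torch
import torch.nn.functional as F

emotion_names = ["joy", "anger", "fear", "surprise"]  # illustrative label set
emotion_embs = torch.randn(len(emotion_names), 300)   # stand-in word vectors
predicted = torch.randn(1, 300)                       # model output for one sample

# Pick the emotion whose word embedding is closest in cosine similarity.
sims = F.cosine_similarity(predicted, emotion_embs)   # (4,)
print(emotion_names[sims.argmax().item()])
```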
- The BIRAFFE2 Experiment. Study in Bio-Reactions and Faces for Emotion-based Personalization for AI Systems [0.0]
We present a unified paradigm for capturing the emotional responses of different people.
We provide a framework that can be easily used and extended for machine learning purposes.
arXiv Detail & Related papers (2020-07-29T18:35:34Z)
- Facial Expression Editing with Continuous Emotion Labels [76.36392210528105]
Deep generative models have achieved impressive results in the field of automated facial expression editing.
We propose a model that manipulates facial expressions in images according to continuous two-dimensional emotion labels.
arXiv Detail & Related papers (2020-06-22T13:03:02Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that machines be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
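Finally, the last entry combines a deep convolutional autoencoder with a support vector regressor. Below is a hedged sketch of the second stage only, with random arrays standing in for encoder features and continuous targets; feature sizes, the kernel, and C are illustrative choices, not the paper's settings.

```python
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

rng = np.random.default_rng(0)
encoder_features = rng.normal(size=(500, 128))  # stand-in for autoencoder bottleneck features
targets = rng.uniform(-1, 1, size=(500, 2))     # e.g. continuous valence and arousal

# One SVR per output dimension; a separately trained encoder would supply the features.
model = MultiOutputRegressor(SVR(kernel="rbf", C=1.0))
model.fit(encoder_features, targets)
preds = model.predict(encoder_features[:5])     # (5, 2) predicted values
```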