Impact of multiple modalities on emotion recognition: investigation into 3d facial landmarks, action units, and physiological data
- URL: http://arxiv.org/abs/2005.08341v1
- Date: Sun, 17 May 2020 18:59:57 GMT
- Title: Impact of multiple modalities on emotion recognition: investigation into 3d facial landmarks, action units, and physiological data
- Authors: Diego Fabiano, Manikandan Jaishanker, and Shaun Canavan
- Abstract summary: We analyze 3D facial data, action units, and physiological data in terms of their impact on emotion recognition.
Our analysis indicates that both 3D facial landmarks and physiological data are encouraging for expression/emotion recognition.
On the other hand, while action units can positively impact emotion recognition when fused with other modalities, the results suggest it is difficult to detect emotion using them in a unimodal fashion.
- Score: 4.617405932149653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To fully understand the complexities of human emotion, the integration of
multiple physical features from different modalities can be advantageous.
Considering this, we present an analysis of 3D facial data, action units, and
physiological data in terms of their impact on emotion recognition. We analyze
each modality independently, as well as their fusion, for recognizing human
emotion. This analysis includes identifying which features are most important
for specific emotions (e.g., happy). Our analysis indicates that both
3D facial landmarks and physiological data are encouraging for
expression/emotion recognition. On the other hand, while action units can
positively impact emotion recognition when fused with other modalities, the
results suggest it is difficult to detect emotion using them in a unimodal
fashion.
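The unimodal-versus-fused comparison the abstract describes can be illustrated with a minimal sketch. Everything below is a placeholder under stated assumptions, not the authors' pipeline: the feature dimensionalities, the six-class emotion labels, and the random-forest classifier are all hypothetical stand-ins; only the general feature-level concatenation pattern is shown.

```python
# Minimal sketch of unimodal vs. feature-level fused emotion recognition.
# All data here is random placeholder; shapes are illustrative only, not
# the paper's actual feature dimensions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_samples, n_classes = 400, 6  # e.g. six basic emotions (assumption)

# Hypothetical per-modality feature matrices.
landmarks_3d = rng.normal(size=(n_samples, 83 * 3))  # flattened 3D landmarks
action_units = rng.normal(size=(n_samples, 35))      # AU occurrence/intensity
physio       = rng.normal(size=(n_samples, 8))       # e.g. heart-rate statistics
labels = rng.integers(0, n_classes, size=n_samples)

def evaluate(features, name):
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    acc = cross_val_score(clf, features, labels, cv=5).mean()
    print(f"{name:>12s}: {acc:.3f} mean CV accuracy")

# Each modality on its own ...
evaluate(landmarks_3d, "landmarks")
evaluate(action_units, "AUs")
evaluate(physio, "physio")

# ... versus simple feature-level fusion by concatenation.
evaluate(np.hstack([landmarks_3d, action_units, physio]), "fused")
```

With real data in place of the random arrays, this pattern makes the abstract's claim directly testable: if the modalities carry complementary information, the fused score should exceed the best unimodal score, while a weak unimodal score (as reported for action units) can still coexist with a fusion gain.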
Related papers
- Smile upon the Face but Sadness in the Eyes: Emotion Recognition based on Facial Expressions and Eye Behaviors [63.194053817609024]
We introduce eye behaviors as an important emotional cue in the creation of a new Eye-behavior-aided Multimodal Emotion Recognition (EMER) dataset.
For the first time, we provide annotations for both Emotion Recognition (ER) and Facial Expression Recognition (FER) in the EMER dataset.
We specifically design a new EMERT architecture to concurrently enhance performance in both ER and FER.
arXiv Detail & Related papers (2024-11-08T04:53:55Z)
- Exploring Emotions in Multi-componential Space using Interactive VR Games [1.1510009152620668]
We operationalised a data-driven approach using interactive Virtual Reality (VR) games.
We used Machine Learning (ML) methods to identify the unique contributions of each component to emotion differentiation.
These findings also have implications for using VR environments in emotion research.
arXiv Detail & Related papers (2024-04-04T06:54:44Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first time a stimuli-selection process has been introduced into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z)
- Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER).
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of face muscle movements.
We determine whether there are time-related differences in expressions among emotional groups by using a functional F-test (a minimal sketch of this idea appears after this list).
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- A Multi-Componential Approach to Emotion Recognition and the Effect of Personality [0.0]
This paper applies a componential framework with a data-driven approach to characterize emotional experiences evoked during movie watching.
The results suggest that differences between various emotions can be captured by a few (at least 6) latent dimensions.
Results show that a componential model with a limited number of descriptors is still able to predict the level of experienced discrete emotion.
arXiv Detail & Related papers (2020-10-22T01:27:23Z)
- Emotion Recognition From Gait Analyses: Current Research and Future Directions [48.93172413752614]
Gait conveys information about the walker's emotion.
The mapping between various emotions and gait patterns provides a new source for automated emotion recognition.
Gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject.
arXiv Detail & Related papers (2020-03-13T08:22:33Z)
- Emotion Recognition System from Speech and Visual Information based on Convolutional Neural Networks [6.676572642463495]
We propose a system able to recognize emotions with high accuracy and in real time.
To increase the accuracy of the recognition system, we also analyze the speech data and fuse the information from both sources.
arXiv Detail & Related papers (2020-02-29T22:09:46Z)
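As referenced in the facial-video entry above, a pointwise F-statistic is one simple way to realize the functional F-test idea: compute a one-way ANOVA F-statistic at every time point of the expression trajectories, grouped by emotion. The sketch below uses synthetic placeholder trajectories and SciPy's standard ANOVA routine; it illustrates the general idea only, not the authors' Functional ANOVA procedure.

```python
# Pointwise one-way ANOVA over time: a simple stand-in for a functional
# F-test. Trajectories are synthetic placeholders (e.g. one facial
# muscle-movement signal per video, grouped by emotion).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(1)
n_per_group, n_timepoints = 30, 100
t = np.linspace(0, 1, n_timepoints)

# Three hypothetical emotion groups; "happy" diverges mid-sequence.
happy   = rng.normal(size=(n_per_group, n_timepoints)) + np.sin(np.pi * t)
sad     = rng.normal(size=(n_per_group, n_timepoints))
neutral = rng.normal(size=(n_per_group, n_timepoints))

# F-statistic and p-value at every time point (axis=0 vectorizes over time).
f_stat, p_val = f_oneway(happy, sad, neutral, axis=0)

sig = t[p_val < 0.01]
if sig.size:
    print(f"group differences (p<0.01) between t={sig.min():.2f} and t={sig.max():.2f}")
else:
    print("no significant time-related differences")
```

A true Functional ANOVA treats each trajectory as a single function and tests the group effect globally with appropriate multiple-comparison handling; the pointwise version shown here is only the intuition behind that test.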