Multi-Cue Adaptive Emotion Recognition Network
- URL: http://arxiv.org/abs/2111.02273v1
- Date: Wed, 3 Nov 2021 15:08:55 GMT
- Title: Multi-Cue Adaptive Emotion Recognition Network
- Authors: Willams Costa, David Macêdo, Cleber Zanchettin, Lucas S. Figueiredo
and Veronica Teichrieb
- Abstract summary: We propose a new deep learning approach for emotion recognition based on adaptive multi-cues.
We compare the proposed approach with state-of-the-art approaches on the CAER-S dataset.
- Score: 4.570705738465714
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Expressing and identifying emotions through facial and physical expressions
is a significant part of social interaction. Emotion recognition is an
essential task in computer vision due to its various applications and mainly
for allowing a more natural interaction between humans and machines. The common
approaches for emotion recognition focus on analyzing facial expressions and
require the automatic localization of the face in the image. Although these
methods can correctly classify emotion in controlled scenarios, such techniques
are limited when dealing with unconstrained daily interactions. We propose a
new deep learning approach for emotion recognition based on adaptive multi-cues
that extract information from context and body poses, which humans commonly use
in social interaction and communication. We compare the proposed approach with
the state-of-the-art approaches on the CAER-S dataset, evaluating different
components in a pipeline that reached an accuracy of 89.30%.
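The adaptive multi-cue idea, combining a context stream and a body-pose stream through a learned gate before classification, can be sketched roughly as follows. The function, feature values, and gate weights are illustrative stand-ins, not the paper's actual architecture:

```python
import math

def adaptive_multicue_fusion(context_feat, body_feat, w_context, w_body):
    """Illustrative adaptive fusion of two cue streams.

    Each cue yields a feature vector; a gate (here two scalars standing
    in for a trained attention module) is softmax-normalized and used to
    weight the cues. All names here are hypothetical.
    """
    z = math.exp(w_context) + math.exp(w_body)
    g_ctx, g_body = math.exp(w_context) / z, math.exp(w_body) / z  # softmax gate
    return [g_ctx * c + g_body * b for c, b in zip(context_feat, body_feat)]

# Toy 4-dimensional features for each cue
context = [0.2, 0.8, 0.1, 0.5]
body = [0.6, 0.1, 0.9, 0.3]
fused = adaptive_multicue_fusion(context, body, w_context=1.0, w_body=0.0)
```

With `w_context=1.0` and `w_body=0.0`, the context cue dominates the fused vector, which is the point of the adaptivity: the network can lean on whichever cue is informative for a given image.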
Related papers
- In-Depth Analysis of Emotion Recognition through Knowledge-Based Large Language Models [3.8153944233011385]
This paper contributes to the emerging field of context-based emotion recognition.
We propose an approach that combines emotion recognition methods with Bayesian Cue Integration.
We test this approach in the context of interpreting facial expressions during a social task, the prisoner's dilemma.
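Bayesian Cue Integration typically combines independent Gaussian estimates of the same quantity by weighting each cue with its reliability (inverse variance). A minimal sketch with hypothetical numbers, not the paper's implementation:

```python
def integrate_cues(mu1, var1, mu2, var2):
    """Precision-weighted (Bayesian) integration of two Gaussian cue
    estimates of the same quantity (e.g. perceived emotion intensity).
    A cue's reliability is the inverse of its variance."""
    p1, p2 = 1.0 / var1, 1.0 / var2            # precisions
    mu = (p1 * mu1 + p2 * mu2) / (p1 + p2)     # reliability-weighted mean
    var = 1.0 / (p1 + p2)                      # combined estimate is more precise
    return mu, var

# A noisy face cue says intensity 0.8; a reliable context cue says 0.4
mu, var = integrate_cues(0.8, 0.04, 0.4, 0.01)  # mu ≈ 0.48, pulled toward the reliable cue
```

The combined variance is always smaller than either input variance, which is why integrating cues helps in ambiguous scenes.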
arXiv Detail & Related papers (2024-07-17T06:39:51Z)
- Exploring Emotions in Multi-componential Space using Interactive VR Games [1.1510009152620668]
We operationalised a data-driven approach using interactive Virtual Reality (VR) games.
We used Machine Learning (ML) methods to identify the unique contributions of each component to emotion differentiation.
These findings also have implications for using VR environments in emotion research.
arXiv Detail & Related papers (2024-04-04T06:54:44Z)
- Leveraging Previous Facial Action Units Knowledge for Emotion Recognition on Faces [2.4158349218144393]
We propose using Facial Action Unit (AU) recognition techniques to recognize emotions.
This recognition will be based on the Facial Action Coding System (FACS) and computed by a machine learning system.
arXiv Detail & Related papers (2023-11-20T18:14:53Z)
- Face Emotion Recognization Using Dataset Augmentation Based on Neural Network [0.0]
Facial expression is one of the most external indications of a person's feelings and emotions.
It plays an important role in coordinating interpersonal relationships.
As a branch of sentiment analysis, facial expression recognition offers broad application prospects.
arXiv Detail & Related papers (2022-10-23T10:21:45Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
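A scene-based attention mechanism of this kind, in which scene features guide the weighting of object features, can be approximated by a dot-product attention sketch. The names, shapes, and toy values below are illustrative assumptions, not the SOLVER implementation:

```python
import math

def scene_guided_attention(scene_feat, object_feats):
    """Hypothetical sketch of scene-guided attention: the scene vector
    scores each object by dot product, scores are softmax-normalized,
    and object features are pooled with those weights."""
    scores = [sum(s * o for s, o in zip(scene_feat, obj)) for obj in object_feats]
    exps = [math.exp(x) for x in scores]
    total = sum(exps)
    weights = [e / total for e in exps]        # attention over objects
    dim = len(scene_feat)
    pooled = [sum(w * obj[d] for w, obj in zip(weights, object_feats))
              for d in range(dim)]             # scene-weighted object summary
    return weights, pooled

scene = [1.0, 0.0]                             # toy scene embedding
objects = [[1.0, 0.0], [0.0, 1.0]]             # two toy object embeddings
weights, pooled = scene_guided_attention(scene, objects)
```

The object most aligned with the scene vector receives the larger weight, so scene context steers which objects dominate the fused emotion representation.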
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first time a stimuli selection process has been introduced into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z)
- Emotion Recognition from Multiple Modalities: Fundamentals and Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER).
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z)
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of face muscles movements.
We determine if there are time-related differences on expressions among emotional groups by using a functional F-test.
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
- Temporal aggregation of audio-visual modalities for emotion recognition [0.5352699766206808]
We propose a multimodal fusion technique for emotion recognition based on combining audio-visual modalities from a temporal window with different temporal offsets for each modality.
Our proposed method outperforms other methods from the literature as well as human accuracy ratings.
arXiv Detail & Related papers (2020-07-08T18:44:15Z)
- Emotion Recognition From Gait Analyses: Current Research and Future Directions [48.93172413752614]
Gait conveys information about the walker's emotion.
The mapping between various emotions and gait patterns provides a new source for automated emotion recognition.
Gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject.
arXiv Detail & Related papers (2020-03-13T08:22:33Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.