Emotion pattern detection on facial videos using functional statistics
- URL: http://arxiv.org/abs/2103.00844v1
- Date: Mon, 1 Mar 2021 08:31:08 GMT
- Title: Emotion pattern detection on facial videos using functional statistics
- Authors: Rongjiao Ji, Alessandra Micheletti, Natasa Krklec Jerinkic, Zoranka
Desnica
- Abstract summary: We propose a technique based on Functional ANOVA to extract significant patterns of face muscles movements.
We determine if there are time-related differences in expressions among emotional groups by using a functional F-test.
- Score: 62.997667081978825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: There is an increasing scientific interest in automatically analysing and
understanding human behavior, with particular reference to the evolution of
facial expressions and the recognition of the corresponding emotions. In this
paper we propose a technique based on Functional ANOVA to extract significant
patterns of face muscles movements, in order to identify the emotions expressed
by actors in recorded videos. We determine if there are time-related
differences on expressions among emotional groups by using a functional F-test.
Such results are the first step towards the construction of a reliable
automatic emotion recognition system
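The core idea of the paper's method, testing whether mean movement curves differ across emotion groups over time, can be sketched with a pointwise one-way ANOVA F-statistic aggregated over time and calibrated by permutation. This is a minimal illustration on synthetic curves, not the authors' implementation: the synthetic data, the integrated-F statistic, and the permutation scheme are all assumptions, since functional ANOVA has several variants.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic "muscle movement" curves: 3 emotion groups, 20 actors each,
# observed at 50 time points (a hypothetical stand-in for real landmark data).
t = np.linspace(0.0, 1.0, 50)
groups = [np.sin(2 * np.pi * t) + shift * t + 0.2 * rng.standard_normal((20, t.size))
          for shift in (0.0, 0.3, 0.6)]

def pointwise_f(groups):
    """One-way ANOVA F-statistic computed separately at every time point."""
    return np.array([stats.f_oneway(*[g[:, j] for g in groups]).statistic
                     for j in range(groups[0].shape[1])])

def functional_f_test(groups, n_perm=200, seed=1):
    """Global functional F-test: average F(t) over time, with a permutation
    null obtained by shuffling group labels across the pooled curves."""
    obs = pointwise_f(groups).mean()
    pooled = np.vstack(groups)
    cuts = np.cumsum([g.shape[0] for g in groups])[:-1]
    perm_rng = np.random.default_rng(seed)
    exceed = sum(pointwise_f(np.split(perm_rng.permutation(pooled), cuts)).mean() >= obs
                 for _ in range(n_perm))
    return obs, (exceed + 1) / (n_perm + 1)

f_stat, p_value = functional_f_test(groups)
print(f"mean F over time: {f_stat:.2f}, permutation p-value: {p_value:.4f}")
```

Because the three synthetic group means drift apart over time, the observed integrated F greatly exceeds the permutation null and the test rejects equality of the mean curves.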
Related papers
- Leveraging Previous Facial Action Units Knowledge for Emotion
Recognition on Faces [2.4158349218144393]
We propose the use of Facial Action Unit (AU) recognition techniques to recognize emotions.
This recognition is based on the Facial Action Coding System (FACS) and computed by a machine learning system.
arXiv Detail & Related papers (2023-11-20T18:14:53Z)
- Multi-Cue Adaptive Emotion Recognition Network [4.570705738465714]
We propose a new deep learning approach for emotion recognition based on adaptive multi-cues.
We compare the proposed approach with state-of-the-art approaches on the CAER-S dataset.
arXiv Detail & Related papers (2021-11-03T15:08:55Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first time a stimuli selection process has been introduced into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z)
- Detection of Genuine and Posed Facial Expressions of Emotion: A Review [14.017423779272617]
Discrimination of genuine (spontaneous) expressions from posed (deliberate/volitional/deceptive) ones is a crucial yet challenging task in facial expression understanding.
This paper presents a general review of the relevant research, including several spontaneous vs. posed (SVP) facial expression databases and various computer vision based detection methods.
arXiv Detail & Related papers (2020-08-26T02:49:32Z)
- Facial Expression Editing with Continuous Emotion Labels [76.36392210528105]
Deep generative models have achieved impressive results in the field of automated facial expression editing.
We propose a model that can be used to manipulate facial expressions in facial images according to continuous two-dimensional emotion labels.
arXiv Detail & Related papers (2020-06-22T13:03:02Z)
- Impact of multiple modalities on emotion recognition: investigation into
3d facial landmarks, action units, and physiological data [4.617405932149653]
We analyze 3D facial data, action units, and physiological data with respect to their impact on emotion recognition.
Our analysis indicates that both 3D facial landmarks and physiological data are encouraging for expression/emotion recognition.
On the other hand, while action units can positively impact emotion recognition when fused with other modalities, the results suggest it is difficult to detect emotion using them in a unimodal fashion.
arXiv Detail & Related papers (2020-05-17T18:59:57Z)
- Emotion Recognition From Gait Analyses: Current Research and Future
Directions [48.93172413752614]
Gait conveys information about the walker's emotion.
The mapping between various emotions and gait patterns provides a new source for automated emotion recognition.
Gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject.
arXiv Detail & Related papers (2020-03-13T08:22:33Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the user's emotional state with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.