A Naturalistic Database of Thermal Emotional Facial Expressions and
Effects of Induced Emotions on Memory
- URL: http://arxiv.org/abs/2203.15443v1
- Date: Tue, 29 Mar 2022 11:17:35 GMT
- Title: A Naturalistic Database of Thermal Emotional Facial Expressions and
Effects of Induced Emotions on Memory
- Authors: Anna Esposito, Vincenzo Capuano, Jiri Mekyska, Marcos Faundez-Zanuy
- Abstract summary: This work defines a procedure for collecting naturally induced emotional facial expressions through the viewing of movie excerpts with highly emotional content.
The induced emotional states include the four basic emotions of sadness, disgust, happiness, and surprise.
The resulting database contains both thermal and visible emotional facial expressions.
- Score: 1.8352113484137629
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This work defines a procedure for collecting naturally induced
emotional facial expressions through the viewing of movie excerpts with highly
emotional content, and reports experimental data assessing the effects of
emotions on memory word recognition tasks. The induced emotional states include
the four basic emotions of sadness, disgust, happiness, and surprise, as well
as the neutral emotional state. The resulting database contains both thermal
and visible emotional facial expressions, portrayed by forty Italian subjects
and acquired simultaneously with a thermal and a standard visible camera,
appropriately synchronized. Each subject's recording session lasted 45 minutes,
allowing at least 2000 facial expressions to be collected per mode (thermal or
visible), from which at least 400 were selected as highly expressive of each
emotion category. The database is available to the scientific community and can
be obtained by contacting one of the authors. In this pilot study, emotions
and/or emotion categories were found not to affect individual performance on
memory word recognition tasks, and temperature changes in the face or in
specific facial regions did not discriminate among emotional states.
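The abstract describes acquiring thermal and visible recordings simultaneously by synchronizing the two cameras. As a minimal sketch of how such paired streams can be aligned offline, assuming per-frame capture timestamps are available (the frame rates, names, and skew tolerance below are illustrative assumptions, not details from the paper):

```python
import numpy as np

def pair_frames(thermal_ts, visible_ts, max_skew=0.02):
    """Pair each thermal frame with the nearest visible frame in time.

    thermal_ts, visible_ts: 1-D arrays of capture timestamps in seconds.
    max_skew: maximum allowed time difference (s) for a valid pair.
    Returns a list of (thermal_index, visible_index) tuples.
    """
    visible_ts = np.asarray(visible_ts)
    pairs = []
    for i, t in enumerate(np.asarray(thermal_ts)):
        j = int(np.argmin(np.abs(visible_ts - t)))  # nearest visible frame
        if abs(visible_ts[j] - t) <= max_skew:       # reject badly skewed pairs
            pairs.append((i, j))
    return pairs

# Illustrative use: a 30 fps thermal stream and a 25 fps visible stream.
thermal_ts = np.arange(0, 10, 1 / 30)
visible_ts = np.arange(0, 10, 1 / 25)
matches = pair_frames(thermal_ts, visible_ts)
print(f"{len(matches)} synchronized frame pairs")
```

Nearest-timestamp matching with a skew tolerance is a common fallback when hardware-triggered synchronization between the two cameras is not available.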
Related papers
- Disentangle Identity, Cooperate Emotion: Correlation-Aware Emotional Talking Portrait Generation [63.94836524433559]
DICE-Talk is a framework for disentangling identity from emotion and cooperating emotions that share similar characteristics.
First, we develop a disentangled emotion embedder that jointly models audio-visual emotional cues through cross-modal attention.
Second, we introduce a correlation-enhanced emotion conditioning module with learnable Emotion Banks.
Third, we design an emotion discrimination objective that enforces affective consistency during the diffusion process.
arXiv Detail & Related papers (2025-04-25T05:28:21Z)
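DICE-Talk's disentangled emotion embedder is described as jointly modelling audio-visual emotional cues through cross-modal attention. A minimal sketch of that generic pattern, with visual tokens attending to audio tokens, is shown below; the dimensions, pooling step, and module names are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class CrossModalEmotionEmbedder(nn.Module):
    """Toy cross-modal attention block: visual tokens query audio tokens."""

    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, dim)  # pooled joint emotion embedding

    def forward(self, visual_feats, audio_feats):
        # visual_feats: (batch, n_frames, dim); audio_feats: (batch, n_audio, dim)
        fused, _ = self.attn(query=visual_feats, key=audio_feats, value=audio_feats)
        return self.proj(fused.mean(dim=1))  # (batch, dim) emotion embedding

# Illustrative shapes: 8 video frames and 20 audio tokens per clip.
embedder = CrossModalEmotionEmbedder()
v = torch.randn(2, 8, 256)
a = torch.randn(2, 20, 256)
print(embedder(v, a).shape)  # torch.Size([2, 256])
```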
- Daisy-TTS: Simulating Wider Spectrum of Emotions via Prosody Embedding Decomposition [12.605375307094416]
We propose an emotional text-to-speech design that simulates a wider spectrum of emotions, grounded in the structural model of emotions.
Our proposed design, Daisy-TTS, incorporates a prosody encoder to learn emotionally separable prosody embeddings as a proxy for emotion.
arXiv Detail & Related papers (2024-02-22T13:15:49Z)
- emotion2vec: Self-Supervised Pre-Training for Speech Emotion Representation [42.29118614670941]
We propose emotion2vec, a universal speech emotion representation model.
emotion2vec is pre-trained on unlabeled emotion data through self-supervised online distillation.
It outperforms state-of-the-art pre-trained universal models and emotion specialist models.
arXiv Detail & Related papers (2023-12-23T07:46:55Z)
- Face Emotion Recognization Using Dataset Augmentation Based on Neural Network [0.0]
Facial expression is one of the most direct external indications of a person's feelings and emotions.
It plays an important role in coordinating interpersonal relationships.
As a branch of sentiment analysis, facial expression recognition offers broad application prospects.
arXiv Detail & Related papers (2022-10-23T10:21:45Z)
- Speech Synthesis with Mixed Emotions [77.05097999561298]
We propose a novel formulation that measures the relative difference between the speech samples of different emotions.
We then incorporate our formulation into a sequence-to-sequence emotional text-to-speech framework.
At run-time, we control the model to produce the desired emotion mixture by manually defining an emotion attribute vector.
arXiv Detail & Related papers (2022-08-11T15:45:58Z)
- Emotion Intensity and its Control for Emotional Voice Conversion [77.05097999561298]
Emotional voice conversion (EVC) seeks to convert the emotional state of an utterance while preserving the linguistic content and speaker identity.
In this paper, we aim to explicitly characterize and control the intensity of emotion.
We propose to disentangle the speaker style from linguistic content and encode the speaker style into a style embedding in a continuous space that forms the prototype of emotion embedding.
arXiv Detail & Related papers (2022-01-10T02:11:25Z)
- Emotion Recognition under Consideration of the Emotion Component Process Model [9.595357496779394]
We use the emotion component process model (CPM) by Scherer (2005) to explain emotion communication.
The CPM states that emotions are a coordinated process of various subcomponents in reaction to an event, namely the subjective feeling, the cognitive appraisal, the expression, a physiological bodily reaction, and a motivational action tendency.
We find that emotions on Twitter are predominantly expressed by event descriptions or subjective reports of the feeling, while in literature, authors prefer to describe what characters do, and leave the interpretation to the reader.
arXiv Detail & Related papers (2021-07-27T15:53:25Z)
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and achieves state-of-the-art results for classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
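The entry above pairs a contextualized embedding encoder with a multi-head probing model over the 32 Empathetic Dialogue emotion labels. One plausible reading of that setup is sketched below: several independent linear probes over a frozen sentence embedding, with their logits averaged. The embedding size and number of probes are assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class MultiProbeEmotionClassifier(nn.Module):
    """Several linear probes over a frozen sentence embedding; logits averaged."""

    def __init__(self, embed_dim=768, num_probes=4, num_emotions=32):
        super().__init__()
        self.probes = nn.ModuleList(
            [nn.Linear(embed_dim, num_emotions) for _ in range(num_probes)]
        )

    def forward(self, sentence_embedding):
        # sentence_embedding: (batch, embed_dim), e.g. from a frozen encoder
        logits = torch.stack([probe(sentence_embedding) for probe in self.probes])
        return logits.mean(dim=0)  # (batch, num_emotions)

probe = MultiProbeEmotionClassifier()
emb = torch.randn(4, 768)         # stand-in for frozen encoder output
print(probe(emb).argmax(dim=-1))  # predicted emotion index per sentence
```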
- Emotion pattern detection on facial videos using functional statistics [62.997667081978825]
We propose a technique based on Functional ANOVA to extract significant patterns of face muscles movements.
We determine whether there are time-related differences in expressions among emotional groups by using a functional F-test.
arXiv Detail & Related papers (2021-03-01T08:31:08Z)
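Functional ANOVA compares whole curves rather than scalar summaries. The snippet below is a rough, pointwise simplification of the functional F-test mentioned above: it computes a one-way ANOVA F statistic at every time point of a facial-movement signal across emotion groups. The data shapes and group values are synthetic placeholders, and a full functional F-test would additionally aggregate evidence across time (e.g., via a permutation scheme).

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
n_frames = 120  # length of each facial-movement curve (illustrative)

# One array of curves per emotion group: shape (n_subjects, n_frames).
groups = {
    "happiness": rng.normal(0.0, 1.0, (10, n_frames)),
    "sadness":   rng.normal(0.2, 1.0, (10, n_frames)),
    "disgust":   rng.normal(0.4, 1.0, (10, n_frames)),
}

# Pointwise one-way ANOVA: one F statistic and p-value per time point.
f_stats, p_vals = zip(
    *(f_oneway(*(curves[:, t] for curves in groups.values()))
      for t in range(n_frames))
)
print("frames with p < 0.05:", int(np.sum(np.array(p_vals) < 0.05)))
```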
- Facial Expression Editing with Continuous Emotion Labels [76.36392210528105]
Deep generative models have achieved impressive results in the field of automated facial expression editing.
We propose a model that can be used to manipulate facial expressions in facial images according to continuous two-dimensional emotion labels.
arXiv Detail & Related papers (2020-06-22T13:03:02Z)
- Impact of multiple modalities on emotion recognition: investigation into 3d facial landmarks, action units, and physiological data [4.617405932149653]
We analyze 3D facial data, action units, and physiological data as they relate to their impact on emotion recognition.
Our analysis indicates that both 3D facial landmarks and physiological data are encouraging for expression/emotion recognition.
On the other hand, while action units can positively impact emotion recognition when fused with other modalities, the results suggest it is difficult to detect emotion using them in a unimodal fashion.
arXiv Detail & Related papers (2020-05-17T18:59:57Z)
- Emotion Recognition From Gait Analyses: Current Research and Future Directions [48.93172413752614]
Gait conveys information about the walker's emotion.
The mapping between various emotions and gait patterns provides a new source for automated emotion recognition.
Gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject.
arXiv Detail & Related papers (2020-03-13T08:22:33Z)