Can we truly transfer an actor's genuine happiness to avatars? An
investigation into virtual, real, posed and spontaneous faces
- URL: http://arxiv.org/abs/2312.02128v1
- Date: Mon, 4 Dec 2023 18:53:42 GMT
- Title: Can we truly transfer an actor's genuine happiness to avatars? An
investigation into virtual, real, posed and spontaneous faces
- Authors: Vitor Miguel Xavier Peres, Greice Pinho Dal Molin and Soraia Raupp
Musse
- Abstract summary: This study aims to evaluate Ekman's action units in datasets of real human faces, posed and spontaneous, and virtual human faces.
We also conducted a case study with specific movie characters, such as SheHulk and Genius.
This investigation can help several areas of knowledge, whether using real or virtual human beings, in education, health, entertainment, games, security, and even legal matters.
- Score: 0.7182245711235297
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: "A look is worth a thousand words" is a popular phrase. Why is a
simple look enough to portray our feelings about something or someone? Behind
this question lie the theoretical foundations of social cognition in
psychology and the studies of psychologist Paul Ekman. Facial
expressions, as a form of non-verbal communication, are the primary way to
transmit emotions between human beings. The set of movements and expressions of
facial muscles that convey some emotional state of the individual to their
observers are targets of studies in many areas. Our research aims to evaluate
Ekman's action units in datasets of real human faces, posed and spontaneous,
and virtual human faces resulting from transferring real faces into Computer
Graphics (CG) faces. In addition, we conducted a case study with specific movie
characters, such as SheHulk and Genius. We intend to find differences and
similarities in facial expressions between real and CG datasets, posed and
spontaneous faces, and also to consider the actors' genders in the videos. This
investigation can help several areas of knowledge, whether using real or
virtual human beings, in education, health, entertainment, games, security, and
even legal matters. Our results indicate that AU intensities are greater for
posed than spontaneous datasets, regardless of gender. Furthermore, there is a
smoothing of intensity up to 80 percent for AU6 and 45 percent for AU12 when a
real face is transformed into CG.
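The reported smoothing can be read as the relative drop in mean action-unit (AU) intensity when a real performance is transferred to a CG face. A minimal sketch of that computation, assuming AU intensity time series have already been extracted with a FACS-based tool; the arrays and numbers below are illustrative, not the paper's data:

```python
def au_smoothing(real_intensities, cg_intensities):
    """Relative reduction (0..1) in mean AU intensity from real to CG face."""
    real_mean = sum(real_intensities) / len(real_intensities)
    cg_mean = sum(cg_intensities) / len(cg_intensities)
    if real_mean == 0:
        return 0.0
    return (real_mean - cg_mean) / real_mean

# Illustrative frame-by-frame AU6 (cheek raiser) intensities, 0-5 scale:
au6_real = [2.5, 3.0, 3.5, 3.0, 2.0]
au6_cg   = [0.5, 0.6, 0.7, 0.6, 0.4]

print(f"AU6 smoothing: {au_smoothing(au6_real, au6_cg):.0%}")  # → AU6 smoothing: 80%
```

With these made-up series the CG face retains only 20 percent of the real face's mean AU6 intensity, i.e. an 80 percent smoothing of the kind the abstract reports.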
Related papers
- EmoFace: Audio-driven Emotional 3D Face Animation [3.573880705052592]
EmoFace is a novel audio-driven methodology for creating facial animations with vivid emotional dynamics.
Our approach can generate facial expressions with multiple emotions, and has the ability to generate random yet natural blinks and eye movements.
The proposed methodology can be applied to producing dialogue animations for non-playable characters in video games and to driving avatars in virtual reality environments.
arXiv Detail & Related papers (2024-07-17T11:32:16Z) - Emotion Recognition for Challenged People Facial Appearance in Social
using Neural Network [0.0]
Facial expressions are fed to a CNN to categorize the acquired picture into different emotion categories.
This paper proposes an approach to face- and illumination-invariant recognition of facial expressions from images.
arXiv Detail & Related papers (2023-05-11T14:38:27Z) - Imitator: Personalized Speech-driven 3D Facial Animation [63.57811510502906]
State-of-the-art methods deform the face topology of the target actor to sync the input audio without considering the identity-specific speaking style and facial idiosyncrasies of the target actor.
We present Imitator, a speech-driven facial expression synthesis method, which learns identity-specific details from a short input video.
We show that our approach produces temporally coherent facial expressions from input audio while preserving the speaking style of the target actors.
arXiv Detail & Related papers (2022-12-30T19:00:02Z) - Interpretable Explainability in Facial Emotion Recognition and
Gamification for Data Collection [0.0]
Training facial emotion recognition models requires large sets of data and costly annotation processes.
We developed a gamified method of acquiring annotated facial emotion data without an explicit labeling effort by humans.
We observed significant improvements in the facial emotion perception and expression skills of the players through repeated game play.
arXiv Detail & Related papers (2022-11-09T09:53:48Z) - Face Emotion Recognization Using Dataset Augmentation Based on Neural
Network [0.0]
Facial expression is one of the most direct external indications of a person's feelings and emotions.
It plays an important role in coordinating interpersonal relationships.
As a branch of sentiment analysis, facial expression recognition offers broad application prospects.
arXiv Detail & Related papers (2022-10-23T10:21:45Z) - Emotion Separation and Recognition from a Facial Expression by Generating the Poker Face with Vision Transformers [57.1091606948826]
We propose a novel FER model, named Poker Face Vision Transformer or PF-ViT, to address these challenges.
PF-ViT aims to separate and recognize the disturbance-agnostic emotion from a static facial image via generating its corresponding poker face.
PF-ViT utilizes vanilla Vision Transformers, and its components are pre-trained as Masked Autoencoders on a large facial expression dataset.
arXiv Detail & Related papers (2022-07-22T13:39:06Z) - Neural Emotion Director: Speech-preserving semantic control of facial
expressions in "in-the-wild" videos [31.746152261362777]
We introduce a novel deep learning method for photo-realistic manipulation of the emotional state of actors in "in-the-wild" videos.
The proposed method is based on a parametric 3D face representation of the actor in the input scene that offers a reliable disentanglement of the facial identity from the head pose and facial expressions.
It then uses a novel deep domain translation framework that alters the facial expressions in a consistent and plausible manner, taking into account their dynamics.
arXiv Detail & Related papers (2021-12-01T15:55:04Z) - I Only Have Eyes for You: The Impact of Masks On Convolutional-Based
Facial Expression Recognition [78.07239208222599]
We evaluate how the recently proposed FaceChannel adapts towards recognizing facial expressions from persons with masks.
We also perform specific feature-level visualization to demonstrate how the inherent capabilities of the FaceChannel to learn and combine facial features change when in a constrained social interaction scenario.
arXiv Detail & Related papers (2021-04-16T20:03:30Z) - Learning Emotional-Blinded Face Representations [77.7653702071127]
We propose two face representations that are blind to the facial expressions associated with emotional responses.
This work is motivated by new international regulations for personal data protection.
arXiv Detail & Related papers (2020-09-18T09:24:10Z) - Audio- and Gaze-driven Facial Animation of Codec Avatars [149.0094713268313]
We describe the first approach to animate Codec Avatars in real-time using audio and/or eye tracking.
Our goal is to display expressive conversations between individuals that exhibit important social signals.
arXiv Detail & Related papers (2020-08-11T22:28:48Z) - Audio-driven Talking Face Video Generation with Learning-based
Personalized Head Pose [67.31838207805573]
We propose a deep neural network model that takes an audio signal A of a source person and a short video V of a target person as input.
The model outputs a synthesized high-quality talking face video with a personalized head pose.
Our method can generate high-quality talking face videos with more distinguishing head movement effects than state-of-the-art methods.
arXiv Detail & Related papers (2020-02-24T10:02:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.