Facial Emotion Recognition in VR Games
- URL: http://arxiv.org/abs/2312.06925v1
- Date: Tue, 12 Dec 2023 01:40:14 GMT
- Title: Facial Emotion Recognition in VR Games
- Authors: Fatemeh Dehghani, Loutfouz Zaman
- Abstract summary: We train a Convolutional Neural Network (CNN) model to predict emotions in full-face images in which the eyes and eyebrows are covered.
The model trained on these images can accurately recognize seven emotions: anger, happiness, disgust, fear, neutrality, sadness, and surprise.
We analyzed the data collected from our experiment to understand which emotions players experience during gameplay.
- Score: 2.5382095320488665
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Emotion detection is a crucial component of Games User Research (GUR), as it
allows game developers to gain insights into players' emotional experiences and
tailor their games accordingly. However, detecting emotions in Virtual Reality
(VR) games is challenging due to the Head-Mounted Display (HMD) that covers the
top part of the player's face, namely, their eyes and eyebrows, which provide
crucial information for recognizing emotions. To address this, we trained a
Convolutional Neural Network (CNN) model to predict emotions from full-face
images in which the eyes and eyebrows are covered. We used the FER2013
dataset, which we modified so that the eyes and eyebrows are covered in each
image. The model trained on these images can accurately recognize seven
emotions: anger, happiness, disgust, fear, neutrality, sadness, and surprise.
We assessed the model's performance by testing it on two VR games and using
it to detect players' emotions. We collected self-reported emotion data from
the players after the gameplay sessions. We analyzed the data collected from
our experiment to understand which emotions players experience during
gameplay. We found that our approach has the potential to enhance gameplay
analysis by enabling the detection of players' emotions in VR games, which can
help game developers create more engaging and immersive game experiences.
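As a rough illustration of the data-modification step described above, the sketch below occludes the eye-and-eyebrow band of FER2013-style 48x48 grayscale faces and trains a small CNN on the result. The masked row range, network layout, and training settings are illustrative assumptions, not the paper's reported configuration.

```python
# Minimal sketch: simulate HMD occlusion on FER2013-style 48x48 grayscale
# faces and train a small CNN classifier. The masked row range, architecture,
# and hyperparameters are assumptions, not the paper's reported setup.
import numpy as np
from tensorflow.keras import layers, models

NUM_CLASSES = 7  # anger, happiness, disgust, fear, neutrality, sadness, surprise

def mask_upper_face(images: np.ndarray, top: int = 8, bottom: int = 26) -> np.ndarray:
    """Black out the eye/eyebrow band to approximate an HMD (rows assumed)."""
    occluded = images.copy()
    occluded[:, top:bottom, :, :] = 0.0
    return occluded

def build_cnn(input_shape=(48, 48, 1)) -> models.Sequential:
    return models.Sequential([
        layers.Input(shape=input_shape),
        layers.Conv2D(32, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.Flatten(),
        layers.Dropout(0.5),
        layers.Dense(128, activation="relu"),
        layers.Dense(NUM_CLASSES, activation="softmax"),
    ])

model = build_cnn()
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# x_train: (N, 48, 48, 1) float32 scaled to [0, 1]; y_train: int labels in [0, 7).
# Loading FER2013 itself is omitted here.
# model.fit(mask_upper_face(x_train), y_train, epochs=30, batch_size=64)
```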
Related papers
- EmojiHeroVR: A Study on Facial Expression Recognition under Partial Occlusion from Head-Mounted Displays [4.095418032380801]
EmoHeVRDB (EmojiHeroVR Database) includes 3,556 labeled facial images of 1,778 reenacted emotions.
EmoHeVRDB also includes data on the activations of 63 facial expressions captured via the Meta Quest Pro VR headset.
The best model achieved an accuracy of 69.84% on the test set.
arXiv Detail & Related papers (2024-10-04T11:29:04Z) - EmoFace: Audio-driven Emotional 3D Face Animation [3.573880705052592]
EmoFace is a novel audio-driven methodology for creating facial animations with vivid emotional dynamics.
Our approach can generate facial expressions with multiple emotions, and has the ability to generate random yet natural blinks and eye movements.
Our proposed methodology can be applied in producing dialogues animations of non-playable characters in video games, and driving avatars in virtual reality environments.
arXiv Detail & Related papers (2024-07-17T11:32:16Z) - Fuzzy Approach for Audio-Video Emotion Recognition in Computer Games for
Children [0.0]
We propose a novel framework that integrates a fuzzy approach for the recognition of emotions through the analysis of audio and video data.
We use the FER dataset to detect facial emotions in video frames recorded from the screen during the game.
For emotion recognition from the sounds a child produces during the game, we use the CREMA-D, TESS, RAVDESS, and SAVEE datasets.
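A minimal sketch of what decision-level fuzzy fusion of audio and video emotion scores could look like, assuming per-modality class probabilities are available; the t-norm/t-conorm blend and weighting are illustrative, not the paper's actual rule base.

```python
# Hypothetical sketch of fuzzy decision-level fusion: per-modality emotion
# probabilities are treated as fuzzy membership degrees and combined with a
# t-norm / t-conorm pair. Not the paper's actual fuzzy rule base.
import numpy as np

EMOTIONS = ["anger", "happiness", "disgust", "fear", "neutrality", "sadness", "surprise"]

def fuzzy_fuse(video_probs, audio_probs, alpha=0.5):
    """Blend a pessimistic t-norm (min) with an optimistic t-conorm (max)."""
    v, a = np.asarray(video_probs), np.asarray(audio_probs)
    t_norm = np.minimum(v, a)    # both modalities must agree
    t_conorm = np.maximum(v, a)  # either modality suffices
    fused = alpha * t_norm + (1 - alpha) * t_conorm
    return EMOTIONS[int(np.argmax(fused))], fused

label, memberships = fuzzy_fuse(
    video_probs=[0.10, 0.60, 0.02, 0.08, 0.10, 0.05, 0.05],
    audio_probs=[0.05, 0.50, 0.05, 0.10, 0.20, 0.05, 0.05],
)
print(label)  # "happiness" for these example scores
```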
arXiv Detail & Related papers (2023-08-31T21:22:00Z) - EmoSet: A Large-scale Visual Emotion Dataset with Rich Attributes [53.95428298229396]
We introduce EmoSet, the first large-scale visual emotion dataset annotated with rich attributes.
EmoSet comprises 3.3 million images in total, with 118,102 of these images carefully labeled by human annotators.
Motivated by psychological studies, in addition to emotion category, each image is also annotated with a set of describable emotion attributes.
arXiv Detail & Related papers (2023-07-16T06:42:46Z) - Interpretable Explainability in Facial Emotion Recognition and
Gamification for Data Collection [0.0]
Training facial emotion recognition models requires large sets of data and costly annotation processes.
We developed a gamified method of acquiring annotated facial emotion data without an explicit labeling effort by humans.
We observed significant improvements in the facial emotion perception and expression skills of the players through repeated game play.
arXiv Detail & Related papers (2022-11-09T09:53:48Z) - How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios [73.24092762346095]
We introduce two large-scale datasets with over 60,000 videos annotated for emotional response and subjective wellbeing.
The Video Cognitive Empathy dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states.
The Video to Valence dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing.
arXiv Detail & Related papers (2022-10-18T17:58:25Z) - SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
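An illustrative sketch of scene-guided attention over object features, in the spirit of the Scene-Object Fusion Module summarized above; the dot-product attention, feature dimensions, and concatenation-based fusion are assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the SOLVER code): the scene feature acts as the
# query, object features as keys/values, and the attention-weighted object
# summary is fused back with the scene by concatenation.
import numpy as np

def scene_object_attention(scene_feat, object_feats, temperature=1.0):
    """scene_feat: (d,), object_feats: (n_objects, d)."""
    scores = object_feats @ scene_feat / (np.sqrt(scene_feat.size) * temperature)
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                       # softmax over objects
    attended = weights @ object_feats              # (d,) weighted object summary
    return np.concatenate([scene_feat, attended])  # simple fusion by concat

rng = np.random.default_rng(0)
fused = scene_object_attention(rng.normal(size=256), rng.normal(size=(5, 256)))
print(fused.shape)  # (512,)
```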
arXiv Detail & Related papers (2021-10-24T02:41:41Z) - Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual
Reality [68.18446501943585]
Social presence will fuel the next generation of communication systems driven by digital humans in virtual reality (VR).
The best 3D video-realistic VR avatars that minimize the uncanny effect rely on person-specific (PS) models.
This paper makes progress in overcoming these limitations by proposing an end-to-end multi-identity architecture.
arXiv Detail & Related papers (2021-04-10T15:48:53Z) - Unmasking Communication Partners: A Low-Cost AI Solution for Digitally
Removing Head-Mounted Displays in VR-Based Telepresence [62.997667081978825]
Face-to-face conversation in Virtual Reality (VR) is a challenge when participants wear head-mounted displays (HMDs).
Past research has shown that high-fidelity face reconstruction with personal avatars in VR is possible under laboratory conditions with high-cost hardware.
We propose one of the first low-cost systems for this task which uses only open source, free software and affordable hardware.
arXiv Detail & Related papers (2020-11-06T23:17:12Z) - Emotion Recognition From Gait Analyses: Current Research and Future
Directions [48.93172413752614]
Gait conveys information about the walker's emotion.
The mapping between various emotions and gait patterns provides a new source for automated emotion recognition.
Gait is remotely observable, more difficult to imitate, and requires less cooperation from the subject.
arXiv Detail & Related papers (2020-03-13T08:22:33Z) - I Feel I Feel You: A Theory of Mind Experiment in Games [1.857766632829209]
We focus on the perception of frustration as it is a prevalent affective experience in human-computer interaction.
We present a testbed game tailored towards this end, in which a player competes against an agent with a frustration model based on theory.
We examine the collected data through correlation analysis and predictive machine learning models, and find that the player's observable emotions are not correlated highly with the perceived frustration of the agent.
arXiv Detail & Related papers (2020-01-23T16:49:39Z)