Fuzzy Approach for Audio-Video Emotion Recognition in Computer Games for Children
- URL: http://arxiv.org/abs/2309.00138v1
- Date: Thu, 31 Aug 2023 21:22:00 GMT
- Title: Fuzzy Approach for Audio-Video Emotion Recognition in Computer Games for Children
- Authors: Pavel Kozlov, Alisher Akram, Pakizar Shamoi
- Abstract summary: We propose a novel framework that integrates a fuzzy approach for the recognition of emotions through the analysis of audio and video data.
We use the FER dataset to detect facial emotions in video frames recorded from the screen during the game.
For audio emotion recognition of the sounds a kid produces during the game, we use the CREMA-D, TESS, RAVDESS, and SAVEE datasets.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Computer games are widespread nowadays and enjoyed by people of all
ages. But for kids, playing these games can be more than just fun: it is a way
to develop important skills and build emotional intelligence. Facial
expressions and sounds that kids produce during gameplay reflect their
feelings, thoughts, and moods. In this paper, we propose a novel framework
that integrates a fuzzy approach for recognizing emotions through the analysis
of audio and video data. Our focus lies within the specific context of
computer games tailored for children, aiming to enhance their overall user
experience. We use the FER dataset to detect facial emotions in video frames
recorded from the screen during the game. For audio emotion recognition of the
sounds a kid produces during the game, we use the CREMA-D, TESS, RAVDESS, and
SAVEE datasets. Next, a fuzzy inference system fuses the per-modality results.
Beyond this, our system can measure emotion stability and emotion diversity
during gameplay, which, together with a prevailing-emotion report, can serve
as valuable information for parents worried about the effect of certain games
on their kids. The proposed approach showed promising results in preliminary
experiments involving three different video games (fighting, racing, and
logic), providing emotion-tracking results for kids in each game. Our study
can contribute to the advancement of child-oriented game development that is
not only engaging but also accounts for children's cognitive and emotional
states.
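The abstract names the audio training corpora but not the acoustic features. Below is a minimal sketch of a common choice for CREMA-D/TESS/RAVDESS/SAVEE-style speech emotion work, MFCC summary vectors via librosa; the file path, sample rate, and feature settings are illustrative assumptions, not the paper's stated pipeline.

```python
# Hedged sketch: MFCC features for the audio emotion branch. The paper's
# abstract does not specify features; MFCCs are a common choice for these
# corpora. Path, sample rate, and n_mfcc are illustrative assumptions.
import numpy as np
import librosa

def mfcc_features(path: str, sr: int = 16000, n_mfcc: int = 40) -> np.ndarray:
    """Load a clip and return a fixed-size MFCC summary vector."""
    y, sr = librosa.load(path, sr=sr)                       # mono waveform
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.mean(axis=1)                                # time-averaged

# features = mfcc_features("kid_clip.wav")  # feed to any 7-class classifier
```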
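The fusion step is a fuzzy inference system, but the abstract gives no rule base. Here is a minimal Mamdani-style sketch using scikit-fuzzy for a single emotion channel ("happy"); the variable names, membership functions, and rules are assumptions chosen for illustration.

```python
# Hedged sketch: fuzzy fusion of per-modality scores for one emotion.
# Membership functions and rules are illustrative, not the paper's design.
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

video = ctrl.Antecedent(np.linspace(0, 1, 101), 'video_happy')
audio = ctrl.Antecedent(np.linspace(0, 1, 101), 'audio_happy')
fused = ctrl.Consequent(np.linspace(0, 1, 101), 'fused_happy')

for var in (video, audio, fused):
    var['low'] = fuzz.trimf(var.universe, [0.0, 0.0, 0.5])
    var['medium'] = fuzz.trimf(var.universe, [0.2, 0.5, 0.8])
    var['high'] = fuzz.trimf(var.universe, [0.5, 1.0, 1.0])

rules = [
    ctrl.Rule(video['high'] & audio['high'], fused['high']),
    ctrl.Rule(video['high'] | audio['high'], fused['medium']),
    ctrl.Rule(video['medium'] | audio['medium'], fused['medium']),
    ctrl.Rule(video['low'] & audio['low'], fused['low']),
]

sim = ctrl.ControlSystemSimulation(ctrl.ControlSystem(rules))
sim.input['video_happy'] = 0.8   # e.g., FER-based frame score
sim.input['audio_happy'] = 0.6   # e.g., speech-emotion score
sim.compute()
print(round(sim.output['fused_happy'], 3))
```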
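The abstract also mentions emotion stability, emotion diversity, and a prevailing-emotion report without defining them. The sketch below uses plausible stand-ins: stability as the share of unchanged consecutive labels, diversity as normalized Shannon entropy over the label distribution, and the prevailing emotion as the mode. These formulas are assumptions, not the paper's definitions.

```python
# Hedged sketch: session-level emotion metrics over a timeline of labels.
# The exact definitions are not in the abstract; these are stand-ins.
from collections import Counter
from math import log2

EMOTIONS = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def stability(timeline: list[str]) -> float:
    """Share of consecutive frame pairs whose emotion label is unchanged."""
    if len(timeline) < 2:
        return 1.0
    same = sum(a == b for a, b in zip(timeline, timeline[1:]))
    return same / (len(timeline) - 1)

def diversity(timeline: list[str]) -> float:
    """Shannon entropy of the label distribution, normalized to [0, 1]."""
    total = len(timeline)
    h = -sum(c / total * log2(c / total) for c in Counter(timeline).values())
    return h / log2(len(EMOTIONS))

def prevailing(timeline: list[str]) -> str:
    """Most frequent emotion in the session."""
    return Counter(timeline).most_common(1)[0][0]

session = ["happy", "happy", "surprise", "happy", "angry", "happy"]
print(stability(session), diversity(session), prevailing(session))
```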
Related papers
- Audio-Driven Emotional 3D Talking-Head Generation [47.6666060652434]
We present a novel system for synthesizing high-fidelity, audio-driven video portraits with accurate emotional expressions.
We propose a pose sampling method that generates natural idle-state (non-speaking) videos in response to silent audio inputs.
arXiv Detail & Related papers (2024-10-07T08:23:05Z)
- EmoFace: Audio-driven Emotional 3D Face Animation [3.573880705052592]
EmoFace is a novel audio-driven methodology for creating facial animations with vivid emotional dynamics.
Our approach can generate facial expressions with multiple emotions, and has the ability to generate random yet natural blinks and eye movements.
Our proposed methodology can be applied to producing dialogue animations of non-playable characters in video games and to driving avatars in virtual reality environments.
arXiv Detail & Related papers (2024-07-17T11:32:16Z)
- Think out Loud: Emotion Deducing Explanation in Dialogues [57.90554323226896]
We propose a new task, "Emotion Deducing Explanation in Dialogues" (EDEN).
EDEN recognizes emotions and their causes in an explicit, think-out-loud way.
It can help Large Language Models (LLMs) achieve better recognition of emotions and causes.
arXiv Detail & Related papers (2024-06-07T08:58:29Z)
- The Emotional Impact of Game Duration: A Framework for Understanding Player Emotions in Extended Gameplay Sessions [3.082802504891278]
This study examines how a player's emotions are affected by the duration of a gameplay session.
The experiment found that, compared with shorter sessions, extended gameplay sessions significantly affected players' emotions.
arXiv Detail & Related papers (2024-03-31T02:01:05Z)
- Facial Emotion Recognition in VR Games [2.5382095320488665]
We use a Convolutional Neural Network (CNN) to train a model that predicts emotions in full-face images in which the eyes and eyebrows are covered.
The model can accurately recognize seven different emotions in these images: anger, happiness, disgust, fear, neutrality, sadness, and surprise.
We analyzed the data collected from our experiment to understand which emotions players experience during the gameplay.
arXiv Detail & Related papers (2023-12-12T01:40:14Z)
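To make the CNN setup in the VR paper above concrete, here is a minimal 7-class facial-emotion classifier in PyTorch; the architecture, the 48x48 grayscale FER-style input, and the layer sizes are assumptions for illustration, not the authors' actual network.

```python
# Hedged sketch: a small CNN for 7-class facial emotion recognition.
# Architecture and input size are illustrative assumptions.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (N, 1, 48, 48) grayscale face crops, FER-style
        return self.classifier(self.features(x))

logits = EmotionCNN()(torch.randn(1, 1, 48, 48))
print(logits.shape)  # torch.Size([1, 7])
```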
- How Would The Viewer Feel? Estimating Wellbeing From Video Scenarios [73.24092762346095]
We introduce two large-scale datasets with over 60,000 videos annotated for emotional response and subjective wellbeing.
The Video Cognitive Empathy dataset contains annotations for distributions of fine-grained emotional responses, allowing models to gain a detailed understanding of affective states.
The Video to Valence dataset contains annotations of relative pleasantness between videos, which enables predicting a continuous spectrum of wellbeing.
arXiv Detail & Related papers (2022-10-18T17:58:25Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- Audio-Driven Emotional Video Portraits [79.95687903497354]
We present Emotional Video Portraits (EVP), a system for synthesizing high-quality video portraits with vivid emotional dynamics driven by audios.
Specifically, we propose the Cross-Reconstructed Emotion Disentanglement technique to decompose speech into two decoupled spaces.
With the disentangled features, dynamic 2D emotional facial landmarks can be deduced.
Then we propose the Target-Adaptive Face Synthesis technique to generate the final high-quality video portraits.
arXiv Detail & Related papers (2021-04-15T13:37:13Z)
- Affect2MM: Affective Analysis of Multimedia Content Using Emotion Causality [84.69595956853908]
We present Affect2MM, a learning method for time-series emotion prediction for multimedia content.
Our goal is to automatically capture the varying emotions depicted by characters in real-life human-centric situations and behaviors.
arXiv Detail & Related papers (2021-03-11T09:07:25Z)
- Improved Digital Therapy for Developmental Pediatrics Using Domain-Specific Artificial Intelligence: Machine Learning Study [5.258326585054865]
Automated emotion classification could aid those who struggle to recognize emotions, including children with developmental behavioral conditions such as autism.
Most computer vision emotion recognition models are trained on adult emotion and therefore underperform when applied to child faces.
We designed a strategy to gamify the collection and labeling of child emotion-enriched images to boost the performance of automatic child emotion recognition models.
arXiv Detail & Related papers (2020-12-16T00:08:51Z)