Towards an Emotion-Aware Metaverse: A Human-Centric Shipboard Fire Drill Simulator
- URL: http://arxiv.org/abs/2503.03570v1
- Date: Wed, 05 Mar 2025 14:58:53 GMT
- Title: Towards an Emotion-Aware Metaverse: A Human-Centric Shipboard Fire Drill Simulator
- Authors: Musaab H. Hamed-Ahmed, Diego Ramil-López, Paula Fraga-Lamas, Tiago M. Fernández-Caramés
- Abstract summary: This article presents an emotion-aware Metaverse application: a Virtual Reality (VR) fire drill simulator designed to prepare crews for shipboard emergencies. The simulator detects emotions in real time, assessing trainees' responses under stress to improve learning outcomes.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional XR and Metaverse applications prioritize user experience (UX) for adoption and success but often overlook a crucial aspect of user interaction: emotions. This article addresses this gap by presenting an emotion-aware Metaverse application: a Virtual Reality (VR) fire drill simulator designed to prepare crews for shipboard emergencies. The simulator detects emotions in real time, assessing trainees' responses under stress to improve learning outcomes. Its architecture incorporates eye-tracking and facial expression analysis via Meta Quest Pro headsets. The system features four levels of progressively increasing difficulty to evaluate user decision-making and emotional resilience. The system was evaluated in two experimental phases. The first phase identified challenges, such as navigation issues and a lack of visual guidance. These insights led to an improved second version with a better user interface, visual cues and a real-time task tracker. Performance metrics like completion times, task efficiency and emotional responses were analyzed. The results show that trainees with prior VR or gaming experience navigated the scenarios more efficiently. Moreover, the addition of task-tracking visuals and navigation guidance significantly improved user performance, reducing task completion times by between 14.18% and 32.72%. Emotional responses were also captured, revealing that some participants were engaged while others acted indifferently, indicating the need for more immersive elements. Overall, this article provides useful guidelines for creating the next generation of emotion-aware Metaverse applications.
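The reported 14.18% to 32.72% improvement is a per-task percentage reduction in completion time between the two simulator versions. A minimal sketch of how such a figure can be computed; the function name and the timings below are illustrative, not taken from the paper:

```python
# Hypothetical helper showing how a completion-time reduction such as the
# 14.18%-32.72% range reported above could be computed per task.
# All names and timing values here are illustrative, not from the paper.

def completion_time_reduction(v1_seconds: float, v2_seconds: float) -> float:
    """Percentage reduction in task completion time from version 1 to version 2."""
    return 100.0 * (v1_seconds - v2_seconds) / v1_seconds

# Example with made-up timings for two drill tasks:
for task, (t_v1, t_v2) in {"locate_extinguisher": (120.0, 98.0),
                           "reach_muster_station": (250.0, 171.0)}.items():
    print(f"{task}: {completion_time_reduction(t_v1, t_v2):.2f}% faster")
```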
Related papers
- Emotion Recognition Using Convolutional Neural Networks [11.243571725357823]
We develop a deep learning-based emotion recognition system that can be applied to both still images and real-time video.
The proposed system is tested on two different datasets and achieves an accuracy of over 80%.
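As a concrete reference for this kind of system, below is a minimal PyTorch sketch of a small CNN emotion classifier; the architecture and the 48x48 grayscale input (FER-2013 style) are assumptions, not the paper's exact model.

```python
# A minimal sketch of a CNN emotion classifier; layer sizes and the
# 48x48 grayscale input are assumptions, not the paper's exact model.
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48 -> 24
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 24 -> 12
        )
        self.classifier = nn.Linear(64 * 12 * 12, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

logits = EmotionCNN()(torch.randn(1, 1, 48, 48))  # -> shape [1, 7]
```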
arXiv Detail & Related papers (2025-04-03T20:08:32Z)
- Identifying User Goals from UI Trajectories [19.492331502146886]
We propose a new task: goal identification from observed UI trajectories.
We also introduce a novel evaluation methodology designed to assess whether two intent descriptions can be considered paraphrases.
To benchmark this task, we compare the performance of humans and state-of-the-art models, specifically GPT-4 and Gemini 1.5 Pro.
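As a rough illustration of checking whether two intent descriptions are paraphrases, a crude embedding-similarity proxy is sketched below; the model name and threshold are assumptions, and the paper's actual methodology relies on stronger (human and LLM) judgments.

```python
# A crude embedding-similarity proxy for paraphrase checking between intent
# descriptions. Model choice and threshold are assumptions; this is not the
# paper's evaluation methodology.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")
a = "Book a table for two tonight"
b = "Reserve a dinner table for 2 people today"
sim = util.cos_sim(model.encode(a), model.encode(b)).item()
print(f"cosine similarity = {sim:.2f}; paraphrase? {sim > 0.7}")
```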
arXiv Detail & Related papers (2024-06-20T13:46:10Z)
- Facial Emotion Recognition in VR Games [2.5382095320488665]
We use a Convolutional Neural Network (CNN) to train a model to predict emotions in full-face images where the eyes and eyebrows are covered.
The model can accurately recognize seven different emotions in these images: anger, happiness, disgust, fear, neutrality, sadness and surprise.
We analyzed the data collected from our experiment to understand which emotions players experience during gameplay.
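One simple way to emulate headset occlusion when preparing training data is to mask the eye and eyebrow band of each face image; the NumPy sketch below illustrates the idea, with band boundaries chosen arbitrarily rather than taken from the paper.

```python
# A small sketch of emulating HMD occlusion: zero out the eye/eyebrow band of
# a face image before training. Band boundaries are illustrative assumptions.
import numpy as np

def mask_upper_face(img: np.ndarray, top: float = 0.15, bottom: float = 0.55) -> np.ndarray:
    """Zero out the horizontal band covering eyes and eyebrows."""
    out = img.copy()
    h = img.shape[0]
    out[int(top * h):int(bottom * h), :] = 0
    return out

face = np.random.rand(48, 48)     # stand-in for a grayscale face crop
occluded = mask_upper_face(face)  # training input resembling a headset wearer
```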
arXiv Detail & Related papers (2023-12-12T01:40:14Z)
- An Approach for Improving Automatic Mouth Emotion Recognition [1.5293427903448025]
The study proposes and tests a technique for automated emotion recognition through mouth detection via Convolutional Neural Networks (CNNs).
The technique is intended to support people whose health disorders impair their communication skills.
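A hedged OpenCV sketch of the mouth-detection step is shown below: detect a face with a stock Haar cascade and crop its lower third as the mouth region of interest. The paper's actual detection pipeline may differ.

```python
# A sketch of mouth-region extraction before CNN classification: detect the
# face with a stock Haar cascade, then take its lower third as the mouth ROI.
# The lower-third heuristic is an assumption, not the paper's method.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def mouth_roi(gray_image):
    """Return the lower-third crop of the first detected face, or None."""
    faces = cascade.detectMultiScale(gray_image, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = faces[0]
    return gray_image[y + 2 * h // 3 : y + h, x : x + w]  # mouth region crop
```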
arXiv Detail & Related papers (2022-12-12T16:17:21Z)
- SS-VAERR: Self-Supervised Apparent Emotional Reaction Recognition from Video [61.21388780334379]
This work focuses on apparent emotional reaction recognition from video-only input, conducted in a self-supervised fashion.
The network is first pre-trained on different self-supervised pretext tasks and later fine-tuned on the downstream target task.
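The pretrain-then-finetune recipe can be summarized with a generic PyTorch skeleton like the one below; the pretext and downstream heads are placeholders, not SS-VAERR's actual objectives.

```python
# A generic skeleton of the pretrain-then-finetune recipe described above.
# Heads and dimensions are placeholders, not SS-VAERR's actual objectives.
import torch.nn as nn

backbone = nn.Sequential(nn.Flatten(), nn.Linear(512, 256), nn.ReLU())

# Stage 1: self-supervised pretext task (placeholder head).
pretext_model = nn.Sequential(backbone, nn.Linear(256, 128))
# ... train pretext_model on unlabeled video features here ...

# Stage 2: reuse the pretrained backbone, attach a reaction-recognition head.
downstream_model = nn.Sequential(backbone, nn.Linear(256, 7))
# Optionally freeze the backbone for linear probing:
for p in backbone.parameters():
    p.requires_grad = False
# ... fine-tune downstream_model on labeled reactions here ...
```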
arXiv Detail & Related papers (2022-10-20T15:21:51Z)
- Towards self-attention based visual navigation in the real world [0.0]
Vision-guided navigation requires processing complex visual information to inform task-oriented decisions.
Deep Reinforcement Learning agents trained in simulation often exhibit unsatisfactory results when deployed in the real-world.
This is the first demonstration of a self-attention based agent successfully trained in navigating a 3D action space, using less than 4000 parameters.
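To see how a useful self-attention module can fit in such a small parameter budget, here is a minimal single-head block over 16-dimensional tokens; the dimensions are illustrative, not the paper's architecture.

```python
# A minimal single-head self-attention block over 16-dim tokens, showing how
# little capacity is needed to stay under a ~4000-parameter budget.
# Dimensions are illustrative, not the paper's.
import torch
import torch.nn as nn

class TinySelfAttention(nn.Module):
    def __init__(self, dim: int = 16):
        super().__init__()
        self.qkv = nn.Linear(dim, 3 * dim)  # joint Q, K, V projection
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                   # x: [batch, tokens, dim]
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        attn = torch.softmax(q @ k.transpose(-2, -1) / x.shape[-1] ** 0.5, dim=-1)
        return self.out(attn @ v)

m = TinySelfAttention()
print(sum(p.numel() for p in m.parameters()))  # 1088, well under 4000
```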
arXiv Detail & Related papers (2022-09-15T04:51:42Z)
- First Contact: Unsupervised Human-Machine Co-Adaptation via Mutual Information Maximization [112.40598205054994]
We formalize this idea as a completely unsupervised objective for optimizing interfaces.
We conduct an observational study on 540K examples of users operating various keyboard and eye gaze interfaces for typing, controlling simulated robots, and playing video games.
The results show that our mutual information scores are predictive of the ground-truth task completion metrics in a variety of domains.
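The underlying score can be illustrated with a toy estimate of mutual information between discretized user commands and interface outputs; sklearn's `mutual_info_score` is used here as a stand-in for the paper's estimator.

```python
# A toy mutual-information score between discretized user commands and
# interface outputs; sklearn's estimator stands in for the paper's objective.
import numpy as np
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(0)
user_inputs = rng.integers(0, 4, size=1000)   # discretized gaze/key commands
interface_outputs = (user_inputs + rng.integers(0, 2, size=1000)) % 4  # noisy mapping

print(f"MI = {mutual_info_score(user_inputs, interface_outputs):.3f} nats")
```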
arXiv Detail & Related papers (2022-05-24T21:57:18Z)
- Real-time Emotion and Gender Classification using Ensemble CNN [0.0]
This paper presents the implementation of an ensemble CNN for building a real-time system that can detect the emotion and gender of a person.
Our work can predict emotion and gender on single face images as well as multiple face images.
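The ensembling itself typically amounts to soft voting, i.e., averaging the softmax outputs of independently trained networks; the sketch below shows the mechanism with stand-in models rather than the paper's trained CNNs.

```python
# A minimal soft-voting ensemble: average the softmax outputs of several
# independently trained networks. The models here are stand-ins, not the
# paper's trained CNNs.
import torch
import torch.nn as nn

models = [nn.Sequential(nn.Flatten(), nn.Linear(48 * 48, 7)) for _ in range(3)]

def ensemble_predict(x: torch.Tensor) -> torch.Tensor:
    probs = torch.stack([m(x).softmax(dim=-1) for m in models])
    return probs.mean(dim=0).argmax(dim=-1)  # averaged class decision

print(ensemble_predict(torch.randn(2, 1, 48, 48)))  # predictions for 2 faces
```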
arXiv Detail & Related papers (2021-11-15T13:51:35Z)
- Explore before Moving: A Feasible Path Estimation and Memory Recalling Framework for Embodied Navigation [117.26891277593205]
We focus on navigation and address the problem that existing navigation algorithms lack experience and common sense.
Inspired by the human ability to think twice before moving and to conceive several feasible paths towards a goal in unfamiliar scenes, we present a route planning method named the Path Estimation and Memory Recalling (PEMR) framework.
We show strong experimental results of PEMR on the EmbodiedQA navigation task.
arXiv Detail & Related papers (2021-10-16T13:30:55Z)
- Computational Emotion Analysis From Images: Recent Advances and Future Directions [79.05003998727103]
In this chapter, we aim to introduce image emotion analysis (IEA) from a computational perspective.
We begin with commonly used emotion representation models from psychology.
We then define the key computational problems that the researchers have been trying to solve.
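The two emotion representation families from psychology, categorical labels and continuous valence-arousal-dominance (VAD) coordinates, can be contrasted with a small sketch; the VAD values below are illustrative placeholders, not measured data.

```python
# A small sketch contrasting the two emotion representation models commonly
# used in image emotion analysis: categorical labels versus continuous
# valence-arousal-dominance (VAD) coordinates. VAD values are placeholders.
from dataclasses import dataclass

CATEGORIES = ("anger", "disgust", "fear", "happiness", "sadness", "surprise")

@dataclass
class DimensionalEmotion:
    valence: float    # unpleasant (-1) .. pleasant (+1)
    arousal: float    # calm (-1) .. excited (+1)
    dominance: float  # controlled (-1) .. in-control (+1)

happy = DimensionalEmotion(valence=0.8, arousal=0.5, dominance=0.4)  # placeholder
```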
arXiv Detail & Related papers (2021-03-19T13:33:34Z)
- Assisted Perception: Optimizing Observations to Communicate State [112.40598205054994]
We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairments.
We synthesize new observations that lead to more accurate internal state estimates when processed by the user.
arXiv Detail & Related papers (2020-08-06T19:08:05Z)
- ProxEmo: Gait-based Emotion Learning and Multi-view Proxemic Fusion for Socially-Aware Robot Navigation [65.11858854040543]
We present ProxEmo, a novel end-to-end emotion prediction algorithm for robot navigation among pedestrians.
Our approach predicts the perceived emotions of a pedestrian from walking gaits, which is then used for emotion-guided navigation.
arXiv Detail & Related papers (2020-03-02T17:47:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.