Thelxinoë: Recognizing Human Emotions Using Pupillometry and Machine Learning
- URL: http://arxiv.org/abs/2403.19014v1
- Date: Wed, 27 Mar 2024 21:14:17 GMT
- Title: Thelxinoë: Recognizing Human Emotions Using Pupillometry and Machine Learning
- Authors: Darlene Barker, Haim Levkowitz
- Abstract summary: This research contributes significantly to the Thelxinoë framework, aiming to enhance VR experiences by integrating multiple sensor data for realistic and emotionally resonant touch interactions.
Our findings open new avenues for developing more immersive and interactive VR environments, paving the way for future advancements in virtual touch technology.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this study, we present a method for emotion recognition in Virtual Reality (VR) using pupillometry. We analyze pupil diameter responses to both visual and auditory stimuli via a VR headset and focus on extracting key features in the time-domain, frequency-domain, and time-frequency domain from VR generated data. Our approach utilizes feature selection to identify the most impactful features using Maximum Relevance Minimum Redundancy (mRMR). By applying a Gradient Boosting model, an ensemble learning technique using stacked decision trees, we achieve an accuracy of 98.8% with feature engineering, compared to 84.9% without it. This research contributes significantly to the Thelxinoë framework, aiming to enhance VR experiences by integrating multiple sensor data for realistic and emotionally resonant touch interactions. Our findings open new avenues for developing more immersive and interactive VR environments, paving the way for future advancements in virtual touch technology.
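To make the pipeline concrete, here is a minimal sketch of greedy mRMR feature selection followed by a Gradient Boosting classifier, assuming a mutual-information formulation of relevance and redundancy. The synthetic data and the feature budget k=10 are placeholders, not the authors' pupillometry features or settings.

```python
# Sketch: greedy mRMR selection + Gradient Boosting, on placeholder data.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.feature_selection import mutual_info_classif, mutual_info_regression
from sklearn.model_selection import train_test_split

def mrmr_select(X, y, k):
    """Greedily pick features with high relevance to y (mutual information)
    and low redundancy with the features already selected."""
    relevance = mutual_info_classif(X, y, random_state=0)
    selected, remaining = [], list(range(X.shape[1]))
    while len(selected) < k and remaining:
        best, best_score = None, -np.inf
        for j in remaining:
            redundancy = (np.mean([mutual_info_regression(
                X[:, [j]], X[:, s], random_state=0)[0] for s in selected])
                if selected else 0.0)
            score = relevance[j] - redundancy
            if score > best_score:
                best, best_score = j, score
        selected.append(best)
        remaining.remove(best)
    return selected

# Placeholder stand-in for pupil-diameter features extracted in the
# time, frequency, and time-frequency domains.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 30))
y = (X[:, 0] + 0.5 * X[:, 3] > 0).astype(int)

cols = mrmr_select(X, y, k=10)
X_tr, X_te, y_tr, y_te = train_test_split(X[:, cols], y, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print(f"held-out accuracy: {clf.score(X_te, y_te):.3f}")
```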
Related papers
- Exploring Eye Tracking to Detect Cognitive Load in Complex Virtual Reality Training [11.83314968015781]
We present an ongoing study to detect users' cognitive load using an eye-tracking-based machine learning approach.
We developed a VR training system for cold spray and tested it with 22 participants.
Preliminary analysis demonstrates the feasibility of using eye-tracking to detect cognitive load in complex VR experiences.
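As a rough illustration of an eye-tracking-based cognitive load classifier, the sketch below trains an SVM on hypothetical gaze features; the feature set and labels are assumptions for illustration, not the study's published design.

```python
# Sketch: binary cognitive-load classifier over assumed eye-tracking features.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
# Assumed columns: mean fixation duration, mean pupil diameter,
# blink rate, saccade rate (placeholder values).
X = rng.normal(size=(120, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # 0 = low load, 1 = high load

model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
print("5-fold CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```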
arXiv Detail & Related papers (2024-11-18T16:44:19Z)
- Tremor Reduction for Accessible Ray Based Interaction in VR Applications [0.0]
Many traditional 2D interface interaction methods have been directly converted to work in a VR space with little alteration to the input mechanism.
In this paper we propose the use of a low pass filter, to normalize user input noise, alleviating fine motor requirements during ray-based interaction.
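A minimal sketch of the idea, assuming a first-order exponential smoother as the low-pass filter; the alpha value is illustrative, not the paper's tuned parameter.

```python
# Sketch: exponential low-pass filter damping tremor in a pointing ray.
import numpy as np

class LowPassFilter:
    def __init__(self, alpha: float = 0.15):
        self.alpha = alpha   # 0 < alpha <= 1; smaller = stronger smoothing
        self.state = None

    def __call__(self, sample: np.ndarray) -> np.ndarray:
        if self.state is None:
            self.state = sample.astype(float)
        else:
            self.state = self.alpha * sample + (1.0 - self.alpha) * self.state
        return self.state

# Smooth a noisy stream of 3D ray directions, frame by frame.
f = LowPassFilter(alpha=0.15)
rng = np.random.default_rng(2)
for _ in range(5):
    raw = np.array([0.0, 0.0, 1.0]) + rng.normal(scale=0.05, size=3)
    smoothed = f(raw)
    print(smoothed / np.linalg.norm(smoothed))  # renormalize the direction
```

Note the trade-off a smaller alpha implies: stronger tremor suppression but more lag between hand motion and the rendered ray.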
arXiv Detail & Related papers (2024-05-12T17:07:16Z)
- VR interaction for efficient virtual manufacturing: mini map for multi-user VR navigation platform [0.0]
This paper focuses on interactive positioning maps for virtual factory layout planning and explores the user interaction design of digital maps as a navigation aid.
Five prototypes of interactive maps were tested, evaluated, and graded by 20 participants, yielding 40 validated data streams.
The study then analyzes and discusses the most efficient interaction design for interactive maps.
arXiv Detail & Related papers (2023-12-29T13:09:54Z)
- Emotion Based Prediction in the Context of Optimized Trajectory Planning for Immersive Learning [0.0]
Within the virtual elements of immersive learning, the use of Google Expedition and touch-screen-based emotion detection is examined.
Pedagogical application, affordances, and cognitive load are the measures involved.
arXiv Detail & Related papers (2023-12-18T09:24:35Z)
- Multisensory extended reality applications offer benefits for volumetric biomedical image analysis in research and medicine [2.46537907738351]
3D data from high-resolution volumetric imaging is a central resource for diagnosis and treatment in modern medicine.
Recent research has used extended reality (XR) to perceive 3D images with visual depth perception and touch, but relied on restrictive haptic devices.
In this study, 24 experts in biomedical imaging from research and medicine explored 3D medical shapes with three applications.
arXiv Detail & Related papers (2023-11-07T13:37:47Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings pushing research toward more realistic physicality in future VR/AR.
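As a hedged illustration of mapping EMG windows to per-finger forces, the sketch below fits a simple multi-output ridge regression on synthetic data; the paper's actual model is a learned neural interface, which this stand-in does not reproduce.

```python
# Sketch: regressing 5 finger forces from 64 assumed EMG window features.
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(3)
# 500 windows x 64 features (e.g., RMS per EMG channel) -> 5 finger forces.
X = rng.normal(size=(500, 64))
W = rng.normal(size=(64, 5))
y = X @ W + 0.1 * rng.normal(size=(500, 5))  # synthetic linear relationship

model = Ridge(alpha=1.0).fit(X[:400], y[:400])
pred = model.predict(X[400:])
print("mean absolute error:", np.abs(pred - y[400:]).mean())
```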
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- The Gesture Authoring Space: Authoring Customised Hand Gestures for Grasping Virtual Objects in Immersive Virtual Environments [81.5101473684021]
This work proposes a hand gesture authoring tool for object-specific grab gestures, allowing virtual objects to be grabbed as in the real world.
The presented solution uses template matching for gesture recognition and requires no technical knowledge to design and create custom tailored hand gestures.
The study showed that gestures created with the proposed approach are perceived by users as a more natural input modality than the others.
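A toy sketch of template-matching gesture recognition, assuming hand poses are summarized as per-finger flexion vectors compared by cosine similarity; the pose representation and threshold are illustrative assumptions, not the tool's implementation.

```python
# Sketch: nearest-template gesture matching via cosine similarity.
import numpy as np

def match_gesture(pose, templates, threshold=0.9):
    """Return the best-matching template name, or None if nothing
    is similar enough to the current pose."""
    best_name, best_sim = None, threshold
    for name, template in templates.items():
        sim = pose @ template / (np.linalg.norm(pose) * np.linalg.norm(template))
        if sim > best_sim:
            best_name, best_sim = name, sim
    return best_name

templates = {
    "pinch": np.array([0.9, 0.8, 0.1, 0.1, 0.1]),  # per-finger flexion
    "fist":  np.array([0.9, 0.9, 0.9, 0.9, 0.9]),
}
print(match_gesture(np.array([0.85, 0.75, 0.15, 0.1, 0.05]), templates))
```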
arXiv Detail & Related papers (2022-07-03T18:33:33Z)
- Learning Effect of Lay People in Gesture-Based Locomotion in Virtual Reality [81.5101473684021]
Some of the most promising methods are gesture-based and do not require additional handheld hardware.
Recent work focused mostly on user preference and performance of the different locomotion techniques.
This work investigates whether and how quickly users can adapt to a hand gesture-based locomotion system in VR.
arXiv Detail & Related papers (2022-06-16T10:44:16Z)
- Wireless Edge-Empowered Metaverse: A Learning-Based Incentive Mechanism for Virtual Reality [102.4151387131726]
We propose a learning-based Incentive Mechanism framework for VR services in the Metaverse.
First, we propose the quality of perception as the metric for VR users in the virtual world.
Second, for quick trading of VR services between VR users (i.e., buyers) and VR SPs (i.e., sellers), we design a double Dutch auction mechanism.
Third, for auction communication reduction, we design a deep reinforcement learning-based auctioneer to accelerate this auction process.
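The sketch below is a heavily simplified toy of a double Dutch auction's two price clocks: a descending buyer clock and an ascending seller clock that trade until the clocks cross. It is an assumption about the general mechanism and omits the paper's DRL-based auctioneer entirely.

```python
# Toy sketch: two alternating price clocks of a double Dutch auction.
def double_dutch(buyer_values, seller_costs, high=100.0, low=0.0, step=5.0):
    buyer_clock, seller_clock = high, low
    trades = []
    buyers = sorted(buyer_values, reverse=True)  # highest valuation accepts first
    sellers = sorted(seller_costs)               # lowest cost accepts first
    while buyer_clock > seller_clock and buyers and sellers:
        buyer_clock -= step                      # descending clock for buyers
        if buyers and buyers[0] >= buyer_clock:
            trades.append(("buy", buyers.pop(0), buyer_clock))
        seller_clock += step                     # ascending clock for sellers
        if sellers and sellers[0] <= seller_clock:
            trades.append(("sell", sellers.pop(0), seller_clock))
    return trades  # auction ends when the two clocks cross

print(double_dutch([90, 70, 40], [20, 50, 80]))
```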
arXiv Detail & Related papers (2021-11-07T13:02:52Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as the student modality.
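As a minimal sketch of the underlying cross-modal distillation idea, the snippet below combines a softened teacher-matching term with a hard-label term; the temperature and weighting are assumptions, and SAKDN's semantics-aware, adaptive components are not shown.

```python
# Sketch: vanilla knowledge-distillation loss between a sensor teacher
# and a video student (placeholder logits).
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=1),
        F.softmax(teacher_logits / T, dim=1),
        reduction="batchmean",
    ) * (T * T)                      # rescale so gradients stay comparable
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1.0 - alpha) * hard

student = torch.randn(8, 10, requires_grad=True)  # video-branch logits
teacher = torch.randn(8, 10)                      # sensor-branch logits
labels = torch.randint(0, 10, (8,))
print(distillation_loss(student, teacher, labels))
```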
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
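A rough sketch of the two-stage idea: reduce face images to a compact representation, then regress continuous emotion values with a support vector regressor. PCA stands in for the paper's deep convolutional autoencoder purely for brevity, and all data below are placeholders.

```python
# Sketch: dimensionality reduction + SVR for continuous emotion values.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVR

rng = np.random.default_rng(4)
X = rng.normal(size=(300, 64 * 64))  # flattened face crops (placeholder)
y = rng.uniform(-1, 1, size=300)     # continuous valence labels (placeholder)

model = make_pipeline(PCA(n_components=32), SVR(kernel="rbf"))
model.fit(X[:250], y[:250])
print("MAE:", np.abs(model.predict(X[250:]) - y[250:]).mean())
```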
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.