Toward Optimized VR/AR Ergonomics: Modeling and Predicting User Neck Muscle Contraction
- URL: http://arxiv.org/abs/2308.14841v1
- Date: Mon, 28 Aug 2023 18:58:01 GMT
- Title: Toward Optimized VR/AR Ergonomics: Modeling and Predicting User Neck Muscle Contraction
- Authors: Yunxiang Zhang, Kenneth Chen, Qi Sun
- Abstract summary: We measure, model, and predict VR users' neck muscle contraction levels (MCL) while they move their heads to interact with the virtual environment.
We develop a bio-physically inspired computational model to predict neck MCL under diverse head kinematic states.
We hope this research will motivate new ergonomic-centered designs for VR/AR and interactive graphics applications.
- Score: 21.654553113159665
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Ergonomic efficiency is essential to the mass and prolonged adoption of VR/AR
experiences. While VR/AR head-mounted displays unlock users' natural wide-range
head movements during viewing, their neck muscle comfort is inevitably
compromised by the added hardware weight. Unfortunately, little quantitative
knowledge is available so far for understanding and addressing this issue.
Leveraging electromyography devices, we measure, model, and predict VR users'
neck muscle contraction levels (MCL) while they move their heads to interact
with the virtual environment. Specifically, by learning from collected
physiological data, we develop a bio-physically inspired computational model to
predict neck MCL under diverse head kinematic states. Beyond quantifying the
cumulative MCL of completed head movements, our model can also predict
potential MCL requirements with target head poses only. A series of objective
evaluations and user studies demonstrate its prediction accuracy and
generality, as well as its ability to reduce users' neck discomfort by
optimizing the layout of visual targets. We hope this research will motivate
new ergonomic-centered designs for VR/AR and interactive graphics applications.
Source code is released at:
https://github.com/NYU-ICL/xr-ergonomics-neck-comfort.
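As a purely illustrative sketch of the kind of model the abstract describes (the functional form, weights, and function names below are assumptions for exposition, not the paper's actual model or repository code), one might map head kinematics to an MCL proxy and rank candidate visual-target poses by it:

```python
import numpy as np

# Hypothetical sketch: coefficients and formulas are illustrative assumptions,
# not taken from the paper or its released source code.

def mcl_rate(pitch, yaw, pitch_vel, yaw_vel, w_pose=1.0, w_vel=0.3):
    """Instantaneous MCL proxy: grows with deviation from the neutral head
    pose (static holding effort) and with angular velocity (dynamic effort)."""
    return w_pose * (pitch**2 + yaw**2) + w_vel * (pitch_vel**2 + yaw_vel**2)

def cumulative_mcl(poses, dt=0.01):
    """Integrate the MCL proxy over a completed head-pose trajectory.

    poses: array of shape (T, 2) holding (pitch, yaw) in radians per frame.
    """
    poses = np.asarray(poses, dtype=float)
    vels = np.gradient(poses, dt, axis=0)
    rates = mcl_rate(poses[:, 0], poses[:, 1], vels[:, 0], vels[:, 1])
    return float(rates.sum() * dt)

def best_target(candidate_poses):
    """Rank candidate target head poses by predicted static MCL alone,
    mirroring the abstract's idea of optimizing visual-target layout
    from target poses only."""
    return min(candidate_poses, key=lambda p: mcl_rate(p[0], p[1], 0.0, 0.0))
```

For example, `best_target([(0.0, 0.5), (0.1, 0.1)])` prefers the pose closer to neutral, which is the intuition behind layout optimization.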
Related papers
- Real-time Cross-modal Cybersickness Prediction in Virtual Reality [2.865152517440773]
Cybersickness remains a significant barrier to the widespread adoption of immersive virtual reality (VR) experiences.
We propose a lightweight model that processes bio-signal features, combined with a PP-TSN network for video feature extraction.
Our model, trained with a lightweight framework, was validated on a public dataset containing eye and head tracking data, physiological data, and VR video, and demonstrated state-of-the-art performance in cybersickness prediction.
arXiv Detail & Related papers (2025-01-02T11:41:43Z)
- Scaling Up Dynamic Human-Scene Interaction Modeling [58.032368564071895]
TRUMANS is the most comprehensive motion-captured HSI dataset currently available.
It intricately captures whole-body human motions and part-level object dynamics.
We devise a diffusion-based autoregressive model that efficiently generates HSI sequences of any length.
arXiv Detail & Related papers (2024-03-13T15:45:04Z)
- Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z)
- Modelling Human Visual Motion Processing with Trainable Motion Energy Sensing and a Self-attention Network [1.9458156037869137]
We propose an image-computable model of human motion perception by bridging the gap between biological and computer vision models.
This model architecture aims to capture the computations in V1-MT, the core structure for motion perception in the biological visual system.
In silico neurophysiology reveals that our model's unit responses are similar to mammalian neural recordings regarding motion pooling and speed tuning.
arXiv Detail & Related papers (2023-05-16T04:16:07Z)
- VR-LENS: Super Learning-based Cybersickness Detection and Explainable AI-Guided Deployment in Virtual Reality [1.9642496463491053]
This work presents an explainable artificial intelligence (XAI)-based framework VR-LENS for developing cybersickness detection ML models.
We first develop a novel super learning-based ensemble ML model for cybersickness detection.
Our proposed method identified eye tracking, player position, and galvanic skin/heart rate response as the most dominant features for the integrated sensor, gameplay, and bio-physiological datasets.
arXiv Detail & Related papers (2023-02-03T20:15:51Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
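The force-decoding idea in this entry can be sketched, purely illustratively, as a regression from smoothed EMG envelopes to per-finger forces. The paper's actual interface is a learned neural model; the closed-form ridge regression and all names and shapes below are assumptions standing in for that idea:

```python
import numpy as np

# Illustrative only: this ridge-regression decoder is NOT the paper's method;
# it merely shows the EMG-envelope-to-force mapping concept.

def emg_envelope(emg, win=50):
    """Rectify raw EMG and smooth each channel (column) with a moving average."""
    kernel = np.ones(win) / win
    rect = np.abs(np.asarray(emg, dtype=float))
    return np.apply_along_axis(lambda ch: np.convolve(ch, kernel, mode="same"), 0, rect)

def fit_decoder(envelopes, forces, lam=1e-3):
    """Closed-form ridge regression mapping envelope features to forces."""
    X, Y = np.asarray(envelopes), np.asarray(forces)
    A = X.T @ X + lam * np.eye(X.shape[1])
    return np.linalg.solve(A, X.T @ Y)  # weights: (channels, fingers)

def decode(envelopes, weights):
    """Predict per-finger forces for new EMG envelope frames."""
    return np.asarray(envelopes) @ weights
```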
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars [80.05743236282564]
Real-time tracking of human body motion is crucial for immersive experiences in AR/VR.
We present a reinforcement learning framework that takes in sparse signals from an HMD and two controllers.
We show that a single policy can be robust to diverse locomotion styles, different body sizes, and novel environments.
arXiv Detail & Related papers (2022-09-20T00:25:54Z)
- Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality [68.18446501943585]
Social presence will fuel the next generation of communication systems driven by digital humans in virtual reality (VR).
The best 3D video-realistic VR avatars that minimize the uncanny effect rely on person-specific (PS) models.
This paper makes progress in overcoming these limitations by proposing an end-to-end multi-identity architecture.
arXiv Detail & Related papers (2021-04-10T15:48:53Z)
- Synthesizing Skeletal Motion and Physiological Signals as a Function of a Virtual Human's Actions and Emotions [10.59409233835301]
We develop for the first time a system of computational models for synchronously synthesizing skeletal motion, electrocardiogram, blood pressure, respiration, and skin conductance signals.
The proposed framework is modular and allows the flexibility to experiment with different models.
In addition to facilitating ML research for round-the-clock monitoring at a reduced cost, the proposed framework will allow reusability of code and data.
arXiv Detail & Related papers (2021-02-08T21:56:15Z)
- Augment Yourself: Mixed Reality Self-Augmentation Using Optical See-through Head-mounted Displays and Physical Mirrors [49.49841698372575]
Optical see-through head-mounted displays (OST HMDs) are one of the key technologies for merging virtual objects and physical scenes to provide an immersive mixed reality (MR) environment to the user.
We propose a novel concept and prototype system that combines OST HMDs and physical mirrors to enable self-augmentation and provide an immersive MR environment centered around the user.
Our system, to the best of our knowledge the first of its kind, estimates the user's pose in the virtual image generated by the mirror using an RGBD camera attached to the HMD and anchors virtual objects to the reflection rather
arXiv Detail & Related papers (2020-07-06T16:53:47Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.