Force-Aware Interface via Electromyography for Natural VR/AR Interaction
- URL: http://arxiv.org/abs/2210.01225v1
- Date: Mon, 3 Oct 2022 20:51:25 GMT
- Title: Force-Aware Interface via Electromyography for Natural VR/AR Interaction
- Authors: Yunxiang Zhang, Benjamin Liang, Boyuan Chen, Paul Torrens, S. Farokh
Atashzar, Dahua Lin, Qi Sun
- Abstract summary: We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings pushing research forward towards more realistic physicality in future VR/AR.
- Score: 69.1332992637271
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While tremendous advances in visual and auditory realism have been made for
virtual and augmented reality (VR/AR), introducing a plausible sense of
physicality into the virtual world remains challenging. Closing the gap between
real-world physicality and immersive virtual experience requires a closed
interaction loop: applying user-exerted physical forces to the virtual
environment and generating haptic sensations back to the users. However,
existing VR/AR solutions either completely ignore the force inputs from the
users or rely on obtrusive sensing devices that compromise user experience.
By identifying users' muscle activation patterns while engaging in VR/AR, we
design a learning-based neural interface for natural and intuitive force
inputs. Specifically, we show that lightweight electromyography sensors,
resting non-invasively on users' forearm skin, inform and establish a robust
understanding of their complex hand activities. Fuelled by a
neural-network-based model, our interface can decode finger-wise forces in
real time with 3.3% mean error, and generalize to new users with little
calibration. Through an interactive psychophysical study, we show that human
perception of virtual objects' physical properties, such as stiffness, can be
significantly enhanced by our interface. We further demonstrate that our
interface enables ubiquitous control via finger tapping. Ultimately, we
envision our findings pushing research forward towards more realistic
physicality in future VR/AR.
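The decoder architecture is not spelled out in this abstract, so the following is only a minimal sketch of the decoding task it describes: a small network mapping a window of multichannel forearm EMG to per-finger force estimates. The channel count, window length, and layer sizes are illustrative assumptions, not the authors' design.

```python
import torch
import torch.nn as nn

# Hypothetical dimensions: 8 EMG channels, 200-sample windows, 5 fingers.
N_CHANNELS, WINDOW, N_FINGERS = 8, 200, 5

class EMGForceDecoder(nn.Module):
    """Maps a window of multichannel EMG to per-finger force estimates."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(N_CHANNELS, 32, kernel_size=9, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> one feature vector
        )
        self.head = nn.Linear(64, N_FINGERS)

    def forward(self, emg):  # emg: (batch, N_CHANNELS, WINDOW)
        z = self.features(emg).squeeze(-1)  # (batch, 64)
        return self.head(z)                 # (batch, N_FINGERS) force estimates

model = EMGForceDecoder()
window = torch.randn(1, N_CHANNELS, WINDOW)  # one window of synthetic EMG
print(model(window).shape)                   # torch.Size([1, 5])
```

Per-user calibration could then amount to briefly fine-tuning the output head on a few labeled force trials, consistent with the abstract's claim of generalizing to new users with little calibration.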
Related papers
- Haptic Repurposing with GenAI [5.424247121310253]
Mixed Reality aims to merge the digital and physical worlds to create immersive human-computer interactions.
This paper introduces Haptic Repurposing with GenAI, an innovative approach to enhancing MR interactions by transforming any physical object into an adaptive haptic interface for AI-generated virtual assets.
arXiv Detail & Related papers (2024-06-11T13:06:28Z)
- Tremor Reduction for Accessible Ray Based Interaction in VR Applications [0.0]
Many traditional 2D interface interaction methods have been directly converted to work in a VR space with little alteration to the input mechanism.
In this paper we propose the use of a low-pass filter to normalize user input noise, alleviating fine motor requirements during ray-based interaction (a minimal filter sketch follows this entry).
arXiv Detail & Related papers (2024-05-12T17:07:16Z)
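The entry above names a low-pass filter over user input but not its exact form; a minimal sketch using a first-order exponential moving average over a 3D ray direction is shown below. The smoothing factor `alpha` is an illustrative assumption, not the paper's parameter.

```python
import numpy as np

class LowPassRayFilter:
    """First-order low-pass (exponential moving average) on a ray direction.

    Smaller `alpha` smooths tremor more aggressively but adds lag; the
    value here is an assumption for illustration.
    """
    def __init__(self, alpha=0.15):
        self.alpha = alpha
        self.state = None

    def update(self, raw_direction):
        d = raw_direction / np.linalg.norm(raw_direction)  # unit input
        if self.state is None:
            self.state = d
        else:
            self.state = self.alpha * d + (1 - self.alpha) * self.state
            self.state /= np.linalg.norm(self.state)  # keep unit length
        return self.state

# Jittery samples around a fixed direction settle toward that direction.
f = LowPassRayFilter()
for _ in range(100):
    noisy = np.array([0.0, 0.0, 1.0]) + np.random.normal(0, 0.05, 3)
    smoothed = f.update(noisy)
print(smoothed)  # close to [0, 0, 1]
```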
- Thelxinoë: Recognizing Human Emotions Using Pupillometry and Machine Learning [0.0]
This research contributes significantly to the Thelxinoë framework, aiming to enhance VR experiences by integrating multiple sensor data for realistic and emotionally resonant touch interactions.
Our findings open new avenues for developing more immersive and interactive VR environments, paving the way for future advancements in virtual touch technology.
arXiv Detail & Related papers (2024-03-27T21:14:17Z)
- VR-GS: A Physical Dynamics-Aware Interactive Gaussian Splatting System in Virtual Reality [39.53150683721031]
Our proposed VR-GS system represents a leap forward in human-centered 3D content interaction.
The components of our Virtual Reality system are designed for high efficiency and effectiveness.
arXiv Detail & Related papers (2024-01-30T01:28:36Z)
- Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation [57.60490773016364]
We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem (a minimal neural-field sketch follows this entry).
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
arXiv Detail & Related papers (2023-12-20T22:36:37Z)
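NeuralFeels' full pipeline (online field learning plus pose-graph tracking) does not fit in a few lines, but the representation the summary names, a coordinate network mapping 3D points to signed distance, can be sketched as follows. The layer widths, depth, and loss are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class NeuralSDF(nn.Module):
    """Coordinate MLP: 3D point -> signed distance to the object surface."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),  # signed distance value
        )

    def forward(self, points):  # points: (batch, 3)
        return self.net(points)

# One illustrative online update: pull the field's zero level set toward
# surface samples observed by vision/touch (stand-in random data here).
field = NeuralSDF()
opt = torch.optim.Adam(field.parameters(), lr=1e-3)
surface_points = torch.randn(256, 3)        # stand-in sensor samples
loss = field(surface_points).pow(2).mean()  # drive SDF to 0 on the surface
opt.zero_grad()
loss.backward()
opt.step()
```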
- Reconfigurable Data Glove for Reconstructing Physical and Virtual Grasps [100.72245315180433]
We present a reconfigurable data glove design to capture different modes of human hand-object interactions.
The glove operates in three modes for various downstream tasks with distinct features.
We evaluate the system's three modes by (i) recording hand gestures and associated forces, (ii) improving manipulation fluency in VR, and (iii) producing realistic simulation effects of various tool uses.
arXiv Detail & Related papers (2023-01-14T05:35:50Z)
- Cross-Reality Re-Rendering: Manipulating between Digital and Physical Realities [2.538209532048867]
We investigate the design of a system that enables users to manipulate the perception of both their physical realities and digital realities.
Users can inspect their view history from either reality, and generate interventions that can be interoperably rendered cross-reality in real-time.
arXiv Detail & Related papers (2022-11-15T09:31:52Z)
- The Gesture Authoring Space: Authoring Customised Hand Gestures for Grasping Virtual Objects in Immersive Virtual Environments [81.5101473684021]
This work proposes a hand gesture authoring tool for object-specific grab gestures, allowing virtual objects to be grabbed as in the real world.
The presented solution uses template matching for gesture recognition (see the sketch after this entry) and requires no technical knowledge to design and create custom-tailored hand gestures.
The study showed that gestures created with the proposed approach are perceived by users as a more natural input modality than the others.
arXiv Detail & Related papers (2022-07-03T18:33:33Z)
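The gesture-authoring entry above attributes recognition to template matching; a minimal sketch of that idea, comparing a live hand pose against authored templates by distance and accepting the nearest match within a threshold, follows. The joint-angle encoding and threshold are illustrative assumptions.

```python
import numpy as np

def recognize_gesture(pose, templates, threshold=0.35):
    """Nearest-template matching over flattened joint-angle vectors.

    `pose` and each template are 1-D arrays of joint angles (radians);
    both the encoding and the threshold are assumptions for illustration.
    """
    best_name, best_dist = None, np.inf
    for name, template in templates.items():
        dist = np.linalg.norm(pose - template)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Authoring step: store one template per custom grab gesture (20 angles).
templates = {
    "pinch": np.random.uniform(0.0, 1.5, 20),
    "fist": np.random.uniform(0.0, 1.5, 20),
}
live_pose = templates["pinch"] + np.random.normal(0, 0.02, 20)
print(recognize_gesture(live_pose, templates))  # "pinch"
```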
- Robust Egocentric Photo-realistic Facial Expression Transfer for Virtual Reality [68.18446501943585]
Social presence will fuel the next generation of communication systems driven by digital humans in virtual reality (VR).
The best 3D video-realistic VR avatars that minimize the uncanny effect rely on person-specific (PS) models.
This paper makes progress in overcoming these limitations by proposing an end-to-end multi-identity architecture.
arXiv Detail & Related papers (2021-04-10T15:48:53Z)
- Physics-Based Dexterous Manipulations with Estimated Hand Poses and Residual Reinforcement Learning [52.37106940303246]
We learn a model that maps noisy input hand poses to target virtual poses.
The agent is trained in a residual setting using a model-free hybrid RL+IL approach (a minimal residual-policy sketch follows this entry).
We test our framework in two applications that use hand pose estimates for dexterous manipulations: hand-object interactions in VR and hand-object motion reconstruction in-the-wild.
arXiv Detail & Related papers (2020-08-07T17:34:28Z)
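The last entry's residual setting, where a learned policy outputs a correction added to a base action derived from the noisy estimated hand pose, can be sketched as below. The observation and action sizes and the residual scale are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class ResidualPolicy(nn.Module):
    """Learns a bounded correction added to a base controller's action."""
    def __init__(self, obs_dim=48, act_dim=22, scale=0.1):
        super().__init__()
        self.scale = scale  # keeps residuals small so the base dominates early
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, act_dim), nn.Tanh(),  # residual in [-1, 1]
        )

    def forward(self, obs, base_action):
        return base_action + self.scale * self.net(obs)

policy = ResidualPolicy()
obs = torch.randn(1, 48)    # stand-in observation (poses, contacts, ...)
base = torch.randn(1, 22)   # action from tracking the estimated hand pose
action = policy(obs, base)  # corrected action sent to the physics simulator
print(action.shape)         # torch.Size([1, 22])
```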
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.