ShareYourReality: Investigating Haptic Feedback and Agency in Virtual
Avatar Co-embodiment
- URL: http://arxiv.org/abs/2403.08363v1
- Date: Wed, 13 Mar 2024 09:23:53 GMT
- Title: ShareYourReality: Investigating Haptic Feedback and Agency in Virtual
Avatar Co-embodiment
- Authors: Karthikeya Puttur Venkatraj, Wo Meijer, Mónica Perusquía-Hernández,
Gijs Huisman and Abdallah El Ali
- Abstract summary: Virtual co-embodiment enables two users to share a single avatar in Virtual Reality (VR).
During such experiences, the illusion of shared motion control can break during joint-action activities.
We explore how haptics can enable non-verbal coordination between co-embodied participants.
- Score: 10.932344446402276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Virtual co-embodiment enables two users to share a single avatar in Virtual
Reality (VR). During such experiences, the illusion of shared motion control
can break during joint-action activities, highlighting the need for
position-aware feedback mechanisms. Drawing on the perceptual crossing
paradigm, we explore how haptics can enable non-verbal coordination between
co-embodied participants. In a within-subjects study (20 participant pairs), we
examined the effects of vibrotactile haptic feedback (None, Present) and avatar
control distribution (25-75%, 50-50%, 75-25%) across two VR reaching tasks
(Targeted, Free-choice) on participants' Sense of Agency (SoA), co-presence,
body ownership, and motion synchrony. We found (a) lower SoA in the free-choice
with haptics than without, (b) higher SoA during the shared targeted task, (c)
co-presence and body ownership were significantly higher in the free-choice
task, and (d) players' hand motions synchronized more in the targeted task. We
provide cautionary considerations when including haptic feedback mechanisms for
avatar co-embodiment experiences.
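To make the study's key manipulations concrete, the sketch below illustrates (in Python) how a shared avatar hand could blend two users' tracked positions under the 25-75/50-50/75-25 control distributions, how a position-aware vibrotactile cue could fire when the partners' inputs diverge, and one simple stand-in for a motion-synchrony measure. This is an illustration only, not the authors' implementation: the blend_control, divergence_cue, and motion_synchrony names, the 5 cm threshold, and the Pearson-correlation synchrony proxy are all assumptions.

    import numpy as np

    def blend_control(pos_a, pos_b, weight_a):
        """Shared avatar hand position as a weighted average of both users' inputs.

        weight_a = 0.25, 0.50, or 0.75 mirrors the 25-75%, 50-50%, and 75-25%
        control-distribution conditions described in the abstract.
        """
        pos_a, pos_b = np.asarray(pos_a), np.asarray(pos_b)
        return weight_a * pos_a + (1.0 - weight_a) * pos_b

    def divergence_cue(pos_a, pos_b, threshold_m=0.05):
        """Position-aware feedback: signal haptics when the two inputs drift apart.

        Returns True when the partners' hands are more than threshold_m metres
        apart; a caller could drive a vibrotactile actuator from this signal.
        The threshold value is an assumption, not taken from the paper.
        """
        return float(np.linalg.norm(np.asarray(pos_a) - np.asarray(pos_b))) > threshold_m

    def motion_synchrony(speed_a, speed_b):
        """Illustrative synchrony proxy: Pearson correlation of two speed profiles.

        The abstract reports motion synchrony as an outcome but does not specify
        the metric, so this is only a stand-in.
        """
        return float(np.corrcoef(speed_a, speed_b)[0, 1])

    # Example: 75-25% condition with the partners' hands 8 cm apart.
    a = np.array([0.30, 1.10, 0.40])
    b = np.array([0.30, 1.10, 0.48])
    print(blend_control(a, b, weight_a=0.75))  # -> [0.3  1.1  0.42]
    print(divergence_cue(a, b))                # -> True

A real system would evaluate these per frame on the tracked controller poses and could map the divergence magnitude to vibration intensity rather than a binary cue.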
Related papers
- POSE: Pose estimation Of virtual Sync Exhibit system [0.0]
The motivation is the inconvenience of using joysticks and sensors when playing with fitness rings.
To replace joysticks and reduce costs, we developed a platform that controls virtual avatars through pose estimation of real people's movements.
arXiv Detail & Related papers (2024-10-20T09:34:15Z)
- Disentangled Interaction Representation for One-Stage Human-Object Interaction Detection [70.96299509159981]
Human-Object Interaction (HOI) detection is a core task for human-centric image understanding.
Recent one-stage methods adopt a transformer decoder to collect image-wide cues that are useful for interaction prediction.
Traditional two-stage methods benefit significantly from their ability to compose interaction features in a disentangled and explainable manner.
arXiv Detail & Related papers (2023-12-04T08:02:59Z)
- ReMoS: 3D Motion-Conditioned Reaction Synthesis for Two-Person Interactions [66.87211993793807]
We present ReMoS, a denoising diffusion-based model that synthesizes the full-body motion of a person in a two-person interaction scenario.
We demonstrate ReMoS across challenging two-person scenarios such as pair dancing, Ninjutsu, kickboxing, and acrobatics.
We also contribute the ReMoCap dataset for two-person interactions containing full-body and finger motions.
arXiv Detail & Related papers (2023-11-28T18:59:52Z)
- Moving Avatars and Agents in Social Extended Reality Environments [16.094148092964264]
We introduce a Smart Avatar system that delivers continuous full-body human representations for noncontinuous locomotion in VR spaces.
We also introduce the concept of Stuttered Locomotion, which can be applied to any continuous locomotion method.
We will discuss the potential of Smart Avatars and Stuttered Locomotion for shared VR experiences.
arXiv Detail & Related papers (2023-06-26T07:51:17Z)
- HOOV: Hand Out-Of-View Tracking for Proprioceptive Interaction using Inertial Sensing [25.34222794274071]
We present HOOV, a wrist-worn sensing method that allows VR users to interact with objects outside their field of view.
Based on the signals of a single wrist-worn inertial sensor, HOOV continuously estimates the user's hand position in 3-space.
Our novel data-driven method predicts hand positions and trajectories from just the continuous estimation of hand orientation (a generic sketch of this idea appears after this list).
arXiv Detail & Related papers (2023-03-13T11:25:32Z)
- Human MotionFormer: Transferring Human Motions with Vision Transformers [73.48118882676276]
Human motion transfer aims to transfer motions from a target dynamic person to a source static one for motion synthesis.
We propose Human MotionFormer, a hierarchical ViT framework that leverages global and local perceptions to capture large and subtle motion matching.
Experiments show that our Human MotionFormer sets the new state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2023-02-22T11:42:44Z)
- IMoS: Intent-Driven Full-Body Motion Synthesis for Human-Object Interactions [69.95820880360345]
We present the first framework to synthesize the full-body motion of virtual human characters with 3D objects placed within their reach.
Our system takes as input textual instructions specifying the objects and the associated intentions of the virtual characters.
We show that our synthesized full-body motions appear more realistic to the participants in more than 80% of scenarios.
arXiv Detail & Related papers (2022-12-14T23:59:24Z)
- QuestSim: Human Motion Tracking from Sparse Sensors with Simulated Avatars [80.05743236282564]
Real-time tracking of human body motion is crucial for immersive experiences in AR/VR.
We present a reinforcement learning framework that takes in sparse signals from an HMD and two controllers.
We show that a single policy can be robust to diverse locomotion styles, different body sizes, and novel environments.
arXiv Detail & Related papers (2022-09-20T00:25:54Z)
- AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing [24.053096294334694]
We present AvatarPoser, the first learning-based method that predicts full-body poses in world coordinates using only motion input from the user's head and hands.
Our method builds on a Transformer encoder to extract deep features from the input signals and decouples global motion from the learned local joint orientations.
In evaluations on large motion capture datasets, AvatarPoser achieved new state-of-the-art results.
arXiv Detail & Related papers (2022-07-27T20:52:39Z)
- Learning Effect of Lay People in Gesture-Based Locomotion in Virtual Reality [81.5101473684021]
Some of the most promising methods are gesture-based and do not require additional handheld hardware.
Recent work focused mostly on user preference and performance of the different locomotion techniques.
This work investigates whether and how quickly users can adapt to a hand gesture-based locomotion system in VR.
arXiv Detail & Related papers (2022-06-16T10:44:16Z)
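Following up on the HOOV entry above: as a generic sketch of predicting hand position from a history of wrist orientations (not HOOV's actual model or parameters), the snippet below flattens a short window of orientation quaternions and maps it to an xyz position with an untrained placeholder linear regressor. The window length and the weights are assumptions; in practice the mapping would be a trained neural network.

    import numpy as np

    WINDOW = 32  # number of past orientation samples per prediction (assumption)

    rng = np.random.default_rng(0)
    W = rng.normal(scale=0.01, size=(3, WINDOW * 4))  # untrained placeholder weights
    b = np.zeros(3)

    def predict_hand_position(quat_history):
        """Map the last WINDOW unit quaternions (shape [WINDOW, 4]) to an xyz position.

        Stand-in for a learned regressor: a real system would replace the
        linear map below with a trained model.
        """
        quat_history = np.asarray(quat_history)
        assert quat_history.shape == (WINDOW, 4)
        features = quat_history.reshape(-1)  # flatten the orientation history
        return W @ features + b

    # Example with a constant identity-orientation stream.
    quats = np.tile(np.array([0.0, 0.0, 0.0, 1.0]), (WINDOW, 1))
    print(predict_hand_position(quats))  # arbitrary xyz from the placeholder weights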
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.