HOOV: Hand Out-Of-View Tracking for Proprioceptive Interaction using
Inertial Sensing
- URL: http://arxiv.org/abs/2303.07016v2
- Date: Sun, 30 Apr 2023 09:19:24 GMT
- Title: HOOV: Hand Out-Of-View Tracking for Proprioceptive Interaction using
Inertial Sensing
- Authors: Paul Streli, Rayan Armani, Yi Fei Cheng and Christian Holz
- Abstract summary: We present HOOV, a wrist-worn sensing method that allows VR users to interact with objects outside their field of view.
Based on the signals of a single wrist-worn inertial sensor, HOOV continuously estimates the user's hand position in 3-space.
Our novel data-driven method predicts hand positions and trajectories from just the continuous estimation of hand orientation.
- Score: 25.34222794274071
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current Virtual Reality systems are designed for interaction under visual
control. Using built-in cameras, headsets track the user's hands or hand-held
controllers while they are inside the field of view. Current systems thus
ignore the user's interaction with off-screen content -- virtual objects that
the user could quickly access through proprioception without requiring
laborious head motions to bring them into focus. In this paper, we present
HOOV, a wrist-worn sensing method that allows VR users to interact with objects
outside their field of view. Based on the signals of a single wrist-worn
inertial sensor, HOOV continuously estimates the user's hand position in
3-space to complement the headset's tracking as the hands leave the tracking
range. Our novel data-driven method predicts hand positions and trajectories
from just the continuous estimation of hand orientation, which by itself is
stable based solely on inertial observations. Our inertial sensing
simultaneously detects finger pinching to register off-screen selection events,
confirms them using a haptic actuator inside our wrist device, and thus allows
users to select, grab, and drop virtual content. We compared HOOV's performance
with a camera-based optical motion capture system in two evaluations. In the first
evaluation, participants interacted based on tracking information from the
motion capture system to assess the accuracy of their proprioceptive input,
whereas in the second, they interacted based on HOOV's real-time estimations.
We found that HOOV's target-agnostic estimations had a mean tracking error of
7.7 cm, which allowed participants to reliably access virtual objects around
their body without first bringing them into focus. We demonstrate several
applications that leverage the larger input space HOOV opens up for quick
proprioceptive interaction, and conclude by discussing the potential of our
technique.
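To make the abstract's pipeline more concrete, below is a minimal, hypothetical sketch (not the authors' implementation): a complementary filter keeps a drift-stable wrist orientation from gyroscope and accelerometer samples, a previously trained regressor (stubbed here) maps a window of orientations to a 3D hand position, and a crude jerk threshold stands in for the pinch detector. Sample rate, filter constant, feature layout, and threshold are all illustrative assumptions.

```python
# Hypothetical sketch of an IMU-only hand-tracking pipeline in the spirit of the
# abstract above; NOT the authors' implementation. Rates, constants, and the
# pinch threshold are illustrative assumptions.
import numpy as np

def estimate_orientation(gyro, accel, dt=1.0 / 200.0, alpha=0.98):
    """Complementary filter: integrate angular rate and correct pitch/roll drift
    with the gravity direction measured by the accelerometer."""
    pitch, roll = 0.0, 0.0
    out = []
    for w, a in zip(gyro, accel):                        # w in rad/s, a in m/s^2
        pitch += w[0] * dt                               # gyro integration
        roll += w[1] * dt
        pitch_acc = np.arctan2(a[1], np.hypot(a[0], a[2]))   # tilt from gravity
        roll_acc = np.arctan2(-a[0], a[2])
        pitch = alpha * pitch + (1.0 - alpha) * pitch_acc
        roll = alpha * roll + (1.0 - alpha) * roll_acc
        out.append((pitch, roll))
    return np.asarray(out)

def predict_hand_position(orientation_window, regressor):
    """Data-driven step: a trained regressor maps a short history of wrist
    orientations to a 3D hand position in the headset frame (stub)."""
    features = np.asarray(orientation_window).reshape(1, -1)
    return regressor(features)                           # expected shape: (1, 3), metres

def detect_pinch(accel, jerk_threshold=3.0):
    """Crude stand-in for pinch detection: flag sharp transients in the
    acceleration magnitude caused by finger contact."""
    magnitude = np.linalg.norm(accel, axis=1)
    return np.abs(np.diff(magnitude)) > jerk_threshold
```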
Related papers
- Tremor Reduction for Accessible Ray Based Interaction in VR Applications [0.0]
Many traditional 2D interface interaction methods have been directly converted to work in a VR space with little alteration to the input mechanism.
In this paper, we propose the use of a low-pass filter to normalize user input noise, alleviating fine motor requirements during ray-based interaction (see the sketch below).
arXiv Detail & Related papers (2024-05-12T17:07:16Z)
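As a rough illustration of the low-pass filtering idea in the tremor-reduction paper above, the snippet below smooths a stream of ray directions with an exponential moving average; the smoothing constant is an assumed value, not one taken from the paper.

```python
# Illustrative exponential-moving-average low-pass filter over ray directions;
# the smoothing constant is an assumption, not a value from the paper above.
import numpy as np

def smooth_ray_directions(raw_directions, alpha=0.15):
    """Smaller alpha -> stronger tremor suppression but more pointing latency."""
    smoothed = np.asarray(raw_directions[0], dtype=float)
    filtered = []
    for direction in raw_directions:
        smoothed = (1.0 - alpha) * smoothed + alpha * np.asarray(direction, dtype=float)
        smoothed = smoothed / np.linalg.norm(smoothed)   # keep it a unit vector
        filtered.append(smoothed.copy())
    return filtered
```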
- Benchmarks and Challenges in Pose Estimation for Egocentric Hand Interactions with Objects [89.95728475983263]
A holistic 3D understanding of such interactions from egocentric views is important for tasks in robotics, AR/VR, action recognition and motion generation.
We design the HANDS23 challenge based on the AssemblyHands and ARCTIC datasets with carefully designed training and testing splits.
Based on the results of the top submitted methods and more recent baselines on the leaderboards, we perform a thorough analysis on 3D hand(-object) reconstruction tasks.
arXiv Detail & Related papers (2024-03-25T05:12:21Z)
- Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation [57.60490773016364]
We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem.
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
arXiv Detail & Related papers (2023-12-20T22:36:37Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
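To make the force-decoding idea above concrete, here is a hypothetical sketch: windowed RMS features per EMG channel feed a ridge regressor that outputs one force value per finger. The window length, feature choice, and model are assumptions rather than the paper's learned neural interface.

```python
# Hypothetical sketch of finger-wise force decoding from multichannel EMG:
# windowed RMS features + a ridge regressor. Window length, features, and model
# are assumptions; the paper above uses a learned neural interface.
import numpy as np
from sklearn.linear_model import Ridge

def rms_features(emg, window=200):
    """emg: (n_samples, n_channels). Returns one RMS value per channel per window."""
    n_windows = emg.shape[0] // window
    trimmed = emg[: n_windows * window].reshape(n_windows, window, emg.shape[1])
    return np.sqrt((trimmed ** 2).mean(axis=1))          # (n_windows, n_channels)

def train_force_decoder(emg, finger_forces, window=200):
    """Train on synchronized EMG windows and ground-truth per-finger forces."""
    X = rms_features(emg, window)                        # (n_windows, n_channels)
    y = finger_forces[: X.shape[0]]                      # (n_windows, 5) forces in N
    return Ridge(alpha=1.0).fit(X, y)                    # model.predict(X_new) -> forces
```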
- AvatarPoser: Articulated Full-Body Pose Tracking from Sparse Motion Sensing [24.053096294334694]
We present AvatarPoser, the first learning-based method that predicts full-body poses in world coordinates using only motion input from the user's head and hands.
Our method builds on a Transformer encoder to extract deep features from the input signals and decouples global motion from the learned local joint orientations.
In our evaluation, AvatarPoser achieved new state-of-the-art results in evaluations on large motion capture datasets.
arXiv Detail & Related papers (2022-07-27T20:52:39Z)
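A minimal PyTorch sketch in the spirit of the AvatarPoser entry above: a Transformer encoder consumes a short window of head- and hand-pose features and a linear head predicts per-joint rotations. The input layout, dimensions, window length, and joint count are assumptions, not the paper's architecture.

```python
# Minimal sketch of a Transformer encoder that maps sparse head+hand signals to
# full-body joint rotations; all sizes are assumed, not taken from the paper.
import torch
import torch.nn as nn

class SparsePoseEncoder(nn.Module):
    def __init__(self, in_dim=3 * 18, d_model=256, n_joints=22):
        super().__init__()
        # Assumed layout: 3 tracked devices (head, two hands) x 18 features each.
        self.embed = nn.Linear(in_dim, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=8, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=4)
        self.head = nn.Linear(d_model, n_joints * 6)     # 6D rotation per joint

    def forward(self, x):                                # x: (batch, window, in_dim)
        h = self.encoder(self.embed(x))                  # (batch, window, d_model)
        return self.head(h[:, -1])                       # pose prediction at the last frame

# Usage: rotations = SparsePoseEncoder()(torch.randn(2, 40, 54))   # -> (2, 22 * 6)
```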
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
- UNOC: Understanding Occlusion for Embodied Presence in Virtual Reality [12.349749717823736]
In this paper, we propose a new data-driven framework for inside-out body tracking.
We first collect a large-scale motion capture dataset with both body and finger motions.
We then simulate the occlusion patterns in head-mounted camera views on the captured ground truth using a ray casting algorithm and learn a deep neural network to infer the occluded body parts.
arXiv Detail & Related papers (2020-11-12T09:31:09Z)
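As a toy illustration of simulating occlusion for a head-mounted camera, as in the UNOC entry above, the sketch below marks a joint as occluded when the ray from the camera to the joint passes through a body proxy sphere. The proxy geometry and sizes are assumptions; the paper casts rays against captured ground-truth bodies.

```python
# Toy occlusion check for a head-mounted camera: a joint counts as occluded if the
# camera-to-joint ray intersects a body proxy sphere. Proxy shape and sizes are
# assumptions; the paper above uses a full ray-casting algorithm.
import numpy as np

def is_occluded(camera, joint, sphere_center, sphere_radius):
    ray = joint - camera
    length = np.linalg.norm(ray)
    direction = ray / length
    # Distance along the ray to the point closest to the sphere center.
    t = np.dot(sphere_center - camera, direction)
    if t <= 0.0 or t >= length:
        return False                                     # sphere is behind the camera or beyond the joint
    closest = camera + t * direction
    return np.linalg.norm(sphere_center - closest) < sphere_radius

# Example: is the left hand hidden behind a torso proxy?
# is_occluded(np.array([0.0, 1.7, 0.1]), np.array([0.2, 1.0, -0.3]),
#             np.array([0.0, 1.2, -0.1]), 0.18)
```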
- Physics-Based Dexterous Manipulations with Estimated Hand Poses and Residual Reinforcement Learning [52.37106940303246]
We learn a model that maps noisy input hand poses to target virtual poses.
The agent is trained in a residual setting by using a model-free hybrid RL+IL approach.
We test our framework in two applications that use hand pose estimates for dexterous manipulations: hand-object interactions in VR and hand-object motion reconstruction in-the-wild.
arXiv Detail & Related papers (2020-08-07T17:34:28Z)
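The residual setting mentioned in the entry above can be summarized in a few lines: the executed action is a base action computed from the (noisy) estimated hand pose plus a learned correction. The controller and policy interfaces below are hypothetical placeholders, not the paper's models.

```python
# Hypothetical sketch of the residual-control idea: track the estimated hand pose
# with a base controller and let a learned policy add a small correction.
# `base_controller` and `residual_policy` are placeholders.
import numpy as np

def residual_action(observation, estimated_hand_pose, base_controller, residual_policy,
                    residual_scale=0.1):
    base = base_controller(estimated_hand_pose)          # e.g., PD targets toward the pose
    correction = residual_policy(observation)            # learned via RL (+ imitation)
    return np.asarray(base) + residual_scale * np.asarray(correction)
```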
- Assisted Perception: Optimizing Observations to Communicate State [112.40598205054994]
We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairments.
We synthesize new observations that lead to more accurate internal state estimates when processed by the user.
arXiv Detail & Related papers (2020-08-06T19:08:05Z)
- When We First Met: Visual-Inertial Person Localization for Co-Robot Rendezvous [29.922954461039698]
We propose a method to learn a visual-inertial feature space in which the motion of a person in video can be easily matched to the motion measured by a wearable inertial measurement unit (IMU).
Our proposed method localizes a target person with 80.7% accuracy using only 5 seconds of IMU data and video.
arXiv Detail & Related papers (2020-06-17T16:15:01Z)
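A toy sketch of the matching step described in the entry above: embed the video motion of each candidate person and the IMU signal into a shared feature space (encoders stubbed here) and pick the person whose embedding is most similar to the IMU embedding. The encoders and the choice of cosine similarity are assumptions.

```python
# Toy sketch of visual-inertial person matching: embed each candidate's video
# motion and the wearer's IMU signal into a shared space (encoders are stubs)
# and pick the most similar candidate by cosine similarity.
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_person(imu_signal, candidate_tracks, imu_encoder, video_encoder):
    imu_feat = imu_encoder(imu_signal)                   # e.g., 5 s of accelerometer data
    scores = [cosine_similarity(imu_feat, video_encoder(track))
              for track in candidate_tracks]
    return int(np.argmax(scores))                        # index of the best-matching person
```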