The Gesture Authoring Space: Authoring Customised Hand Gestures for
Grasping Virtual Objects in Immersive Virtual Environments
- URL: http://arxiv.org/abs/2207.01092v1
- Date: Sun, 3 Jul 2022 18:33:33 GMT
- Title: The Gesture Authoring Space: Authoring Customised Hand Gestures for
Grasping Virtual Objects in Immersive Virtual Environments
- Authors: Alexander Schäfer, Gerd Reis, Didier Stricker
- Abstract summary: This work proposes a hand gesture authoring tool for object-specific grab gestures, allowing virtual objects to be grabbed as in the real world.
The presented solution uses template matching for gesture recognition and requires no technical knowledge to design and create custom-tailored hand gestures.
The study showed that gestures created with the proposed approach are perceived by users as a more natural input modality than the alternatives.
- Score: 81.5101473684021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Natural user interfaces are on the rise. Manufacturers of Augmented,
Virtual, and Mixed Reality head-mounted displays are increasingly integrating
new sensors into their consumer-grade products, allowing gesture recognition
without additional hardware. This offers new possibilities for bare-handed
interaction within virtual environments. This work proposes a hand gesture
authoring tool for object-specific grab gestures, allowing virtual objects to
be grabbed as in the real world. The presented solution uses template matching
for gesture recognition and requires no technical knowledge to design and
create custom-tailored hand gestures. In a user study, the proposed approach is
compared with the pinch gesture and the controller for grasping virtual
objects. The grasping techniques are compared in terms of accuracy,
task completion time, usability, and naturalness. The study showed that
gestures created with the proposed approach are perceived by users as a more
natural input modality than the alternatives.
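The abstract names template matching as the recognition mechanism: an authored grab gesture is stored as a template of hand joint positions and a live pose matches when it is close enough. A minimal sketch of that idea follows; the joint count, coordinate units, and distance threshold are illustrative assumptions, not details from the paper.

```python
# Hedged sketch of template-matching gesture recognition: an authored grab
# gesture is stored as a template of 3D joint positions, and a live hand
# pose matches if its mean joint distance stays under a threshold.
# Joint count and threshold are illustrative assumptions.
import math

def pose_distance(pose_a, pose_b):
    """Mean Euclidean distance between corresponding 3D joint positions."""
    assert len(pose_a) == len(pose_b)
    total = 0.0
    for (ax, ay, az), (bx, by, bz) in zip(pose_a, pose_b):
        total += math.sqrt((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2)
    return total / len(pose_a)

def matches_template(live_pose, template, threshold=0.02):
    """True if the live pose is within `threshold` (metres, assumed) of the template."""
    return pose_distance(live_pose, template) <= threshold

# Toy example: a 3-joint template and a nearly identical live pose.
template = [(0.0, 0.0, 0.0), (0.1, 0.0, 0.0), (0.2, 0.05, 0.0)]
live = [(0.0, 0.0, 0.01), (0.1, 0.01, 0.0), (0.2, 0.05, 0.0)]
print(matches_template(live, template))  # close pose -> True
```

In an authoring tool of this kind, the template would be captured once by the user holding the desired grab pose over the target object; recognition then reduces to the distance test above per frame.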
Related papers
- Systematic Adaptation of Communication-focused Machine Learning Models
from Real to Virtual Environments for Human-Robot Collaboration [1.392250707100996]
This paper presents a systematic framework for real-to-virtual adaptation using a limited-size virtual dataset.
Hand gesture recognition, which has been a topic of much research and subsequent commercialization in the real world, has been possible because of the creation of large, labelled datasets.
arXiv Detail & Related papers (2023-07-21T03:24:55Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- Comparing Controller With the Hand Gestures Pinch and Grab for Picking Up and Placing Virtual Objects [81.5101473684021]
Modern applications usually use a simple pinch gesture for grabbing and moving objects.
The pinch can be an unnatural gesture for picking up objects and prevents the implementation of other gestures.
Different implementations for grabbing and placing virtual objects are proposed and compared.
arXiv Detail & Related papers (2022-02-22T15:12:06Z)
- Evaluating Continual Learning Algorithms by Generating 3D Virtual Environments [66.83839051693695]
Continual learning refers to the ability of humans and animals to incrementally learn over time in a given environment.
We propose to leverage recent advances in 3D virtual environments in order to approach the automatic generation of potentially life-long dynamic scenes with photo-realistic appearance.
A novel element of this paper is that scenes are described in a parametric way, thus allowing the user to fully control the visual complexity of the input stream the agent perceives.
arXiv Detail & Related papers (2021-09-16T10:37:21Z)
- Dynamic Modeling of Hand-Object Interactions via Tactile Sensing [133.52375730875696]
In this work, we employ a high-resolution tactile glove to perform four different interactive activities on a diversified set of objects.
We build our model on a cross-modal learning framework and generate the labels using a visual processing pipeline to supervise the tactile model.
This work takes a step on dynamics modeling in hand-object interactions from dense tactile sensing.
arXiv Detail & Related papers (2021-09-09T16:04:14Z)
- SHREC 2021: Track on Skeleton-based Hand Gesture Recognition in the Wild [62.450907796261646]
Recognition of hand gestures can be performed directly from the stream of hand skeletons estimated by software.
Despite the recent advancements in gesture and action recognition from skeletons, it is unclear how well the current state-of-the-art techniques can perform in a real-world scenario.
This paper presents the results of the SHREC 2021: Track on Skeleton-based Hand Gesture Recognition in the Wild contest.
arXiv Detail & Related papers (2021-06-21T10:57:49Z)
- A Deep Learning Framework for Recognizing both Static and Dynamic Gestures [0.8602553195689513]
We propose a unified framework that recognizes both static and dynamic gestures, using simple RGB vision (without depth sensing).
We employ a pose-driven spatial attention strategy, which guides our proposed Static and Dynamic gestures Network (StaDNet).
In a number of experiments, we show that the proposed approach surpasses the state-of-the-art results on the large-scale Chalearn 2016 dataset.
arXiv Detail & Related papers (2020-06-11T10:39:02Z)
- 3D dynamic hand gestures recognition using the Leap Motion sensor and convolutional neural networks [0.0]
We present a method for the recognition of a set of non-static gestures acquired through the Leap Motion sensor.
The acquired gesture information is converted into color images, in which the variation of hand joint positions during the gesture is projected onto a plane.
The classification of the gestures is performed using a deep Convolutional Neural Network (CNN).
arXiv Detail & Related papers (2020-03-03T11:05:35Z)
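The encoding described in the entry above, projecting joint positions collected over a gesture onto a plane to form an image a CNN can classify, can be sketched as follows. The grid size, the xy-plane projection, and the use of frame index as pixel intensity are illustrative assumptions; the paper's exact mapping may differ.

```python
# Hedged sketch: hand joint trajectories over a gesture are projected onto
# the xy-plane and rasterised into an "image" (a nested list standing in
# for a colour image). Pixel intensity encodes the normalised frame index,
# so the image also carries the timing of the motion. Grid size and the
# intensity scheme are illustrative assumptions.
def gesture_to_image(frames, size=32):
    """frames: list of hand poses; each pose is a list of (x, y, z) joints
    with coordinates normalised to [0, 1]. Returns a size x size grid."""
    img = [[0.0] * size for _ in range(size)]
    for t, pose in enumerate(frames):
        shade = (t + 1) / len(frames)  # later frames drawn brighter
        for x, y, _z in pose:          # project onto the xy-plane
            col = min(int(x * size), size - 1)
            row = min(int(y * size), size - 1)
            img[row][col] = shade
    return img

# Toy gesture: one joint sweeping diagonally across three frames.
frames = [[(0.1, 0.1, 0.5)], [(0.5, 0.5, 0.5)], [(0.9, 0.9, 0.5)]]
img = gesture_to_image(frames)
```

A CNN would then be trained on such rendered images, turning temporal gesture classification into ordinary image classification.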
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.