Comparing Controller With the Hand Gestures Pinch and Grab for Picking Up and Placing Virtual Objects
- URL: http://arxiv.org/abs/2202.10964v1
- Date: Tue, 22 Feb 2022 15:12:06 GMT
- Title: Comparing Controller With the Hand Gestures Pinch and Grab for Picking Up and Placing Virtual Objects
- Authors: Alexander Schäfer, Gerd Reis, Didier Stricker
- Abstract summary: Modern applications usually use a simple pinch gesture for grabbing and moving objects.
Pinching can be an unnatural gesture for picking up objects and prevents the implementation of other thumb-and-index gestures.
Different implementations for grabbing and placing virtual objects are proposed and compared.
- Score: 81.5101473684021
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Grabbing virtual objects is one of the essential tasks for Augmented,
Virtual, and Mixed Reality applications. Modern applications usually use a
simple pinch gesture for grabbing and moving objects. However, picking up
objects by pinching has disadvantages: it can be an unnatural gesture, and it
prevents the implementation of other gestures that would be performed with the
thumb and index finger. It is therefore not the optimal choice for many
applications. In this work, different implementations for grabbing and placing
virtual objects are proposed, and their performance and accuracy are measured
and compared.
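For intuition, here is a minimal sketch of how the two gesture classes could be detected from hand-tracking joint positions. It is not the paper's implementation; the thresholds, curl metric, and function names are assumptions for illustration only.

```python
# Hypothetical pinch/grab detection from tracked hand joints; thresholds and
# the curl metric are assumed values, not taken from the paper.
import numpy as np

PINCH_THRESHOLD_M = 0.02  # thumb-index fingertip distance in metres (assumed)
GRAB_THRESHOLD = 0.6      # mean finger curl, 0.0 = open hand, 1.0 = fist (assumed)

def is_pinching(thumb_tip: np.ndarray, index_tip: np.ndarray) -> bool:
    """Pinch: thumb and index fingertips brought close together."""
    return float(np.linalg.norm(thumb_tip - index_tip)) < PINCH_THRESHOLD_M

def is_grabbing(finger_curls: np.ndarray) -> bool:
    """Grab: all fingers curled toward the palm, which leaves the thumb and
    index finger free for other gestures."""
    return float(np.mean(finger_curls)) > GRAB_THRESHOLD
```

A grab detector of this kind is what lets an application reserve the thumb and index finger for other interactions, which is the trade-off the paper measures.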
Related papers
- AnyRotate: Gravity-Invariant In-Hand Object Rotation with Sim-to-Real Touch [9.606323817785114]
We present AnyRotate, a system for gravity-invariant multi-axis in-hand object rotation using dense featured sim-to-real touch.
Our formulation allows the training of a unified policy to rotate unseen objects about arbitrary rotation axes in any hand direction.
Rich multi-fingered tactile sensing can detect unstable grasps and provide a reactive behavior that improves the robustness of the policy.
arXiv Detail & Related papers (2024-05-12T22:51:35Z)
- DragAnything: Motion Control for Anything using Entity Representation [32.2017791506088]
DragAnything achieves motion control for any object in controllable video generation.
Our method surpasses the previous methods (e.g., DragNUWA) by 26% in human voting.
arXiv Detail & Related papers (2024-03-12T08:57:29Z)
- Neural feels with neural fields: Visuo-tactile perception for in-hand manipulation [57.60490773016364]
We combine vision and touch sensing on a multi-fingered hand to estimate an object's pose and shape during in-hand manipulation.
Our method, NeuralFeels, encodes object geometry by learning a neural field online and jointly tracks it by optimizing a pose graph problem.
Our results demonstrate that touch, at the very least, refines and, at the very best, disambiguates visual estimates during in-hand manipulation.
arXiv Detail & Related papers (2023-12-20T22:36:37Z)
- ROAM: Robust and Object-Aware Motion Generation Using Neural Pose Descriptors [73.26004792375556]
This paper shows that robustness and generalisation to novel scene objects in 3D object-aware character synthesis can be achieved by training a motion model with as few as one reference object.
We leverage an implicit feature representation trained on object-only datasets, which encodes an SE(3)-equivariant descriptor field around the object.
We demonstrate substantial improvements in 3D virtual character motion and interaction quality and robustness to scenarios with unseen objects.
arXiv Detail & Related papers (2023-08-24T17:59:51Z)
- InstMove: Instance Motion for Object-centric Video Segmentation [70.16915119724757]
In this work, we study the instance-level motion and present InstMove, which stands for Instance Motion for Object-centric Video.
In comparison to pixel-wise motion, InstMove mainly relies on instance-level motion information that is free from image feature embeddings.
With only a few lines of code, InstMove can be integrated into current SOTA methods for three different video segmentation tasks.
arXiv Detail & Related papers (2023-03-14T17:58:44Z)
- Unsupervised Multi-object Segmentation by Predicting Probable Motion Patterns [92.80981308407098]
We propose a new approach to learn to segment multiple image objects without manual supervision.
The method can extract objects from still images, but uses videos for supervision.
We show state-of-the-art unsupervised object segmentation performance on simulated and real-world benchmarks.
arXiv Detail & Related papers (2022-10-21T17:57:05Z)
- The Gesture Authoring Space: Authoring Customised Hand Gestures for Grasping Virtual Objects in Immersive Virtual Environments [81.5101473684021]
This work proposes a hand gesture authoring tool for object-specific grab gestures, allowing virtual objects to be grabbed as in the real world.
The presented solution uses template matching for gesture recognition (a minimal sketch follows this entry) and requires no technical knowledge to design and create custom-tailored hand gestures.
The study showed that gestures created with the proposed approach are perceived by users as a more natural input modality than the others.
arXiv Detail & Related papers (2022-07-03T18:33:33Z)
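The template-matching step mentioned above could look like a nearest-template lookup over hand poses. Everything below (the pose encoding as a joint-angle vector, the distance threshold, and the function name) is a hypothetical illustration, not the authors' code.

```python
# Hypothetical sketch of template matching for gesture recognition; assumes
# each hand pose is a fixed-length vector of joint angles (not the paper's
# actual representation or threshold).
import numpy as np

def match_gesture(pose: np.ndarray,
                  templates: dict[str, np.ndarray],
                  max_distance: float = 0.5) -> str | None:
    """Return the name of the closest stored gesture template, or None if
    no template lies within max_distance of the observed pose."""
    best_name, best_dist = None, max_distance
    for name, template in templates.items():
        dist = float(np.linalg.norm(pose - template))
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name
```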
- Design and Control of Roller Grasper V2 for In-Hand Manipulation [6.064252790182275]
We present a novel non-anthropomorphic robot grasper with the ability to manipulate objects by means of active surfaces at the fingertips.
Active surfaces are achieved by spherical rolling fingertips with two degrees of freedom (DoF).
A further DoF is in the base of each finger, allowing the fingers to grasp objects over a range of sizes and shapes.
arXiv Detail & Related papers (2020-04-18T00:54:09Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.