Investigating Input Modality and Task Geometry on Precision-first 3D
Drawing in Virtual Reality
- URL: http://arxiv.org/abs/2210.12270v1
- Date: Fri, 21 Oct 2022 21:56:43 GMT
- Title: Investigating Input Modality and Task Geometry on Precision-first 3D
Drawing in Virtual Reality
- Authors: Chen Chen, Matin Yarmand, Zhuoqun Xu, Varun Singh, Yang Zhang, Nadir
Weibel
- Abstract summary: We investigated how task geometric shapes and input modalities affect precision-first drawing performance.
We found that, compared to using bare hands, VR controllers and pens yield a precision gain of nearly 30%.
- Score: 16.795850221628033
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Accurately drawing non-planar 3D curves in immersive Virtual Reality (VR) is
indispensable for many precise 3D tasks. However, due to lack of physical
support, limited depth perception, and the non-planar nature of 3D curves, it
is challenging to adjust mid-air strokes to achieve high precision. Instead of
creating new interaction techniques, we investigated how task geometric shapes
and input modalities affect precision-first drawing performance in a
within-subject study (n = 12) focusing on 3D target tracing in commercially
available VR headsets. We found that, compared to using bare hands, VR
controllers and pens yield a precision gain of nearly 30%, and that tasks
with large curvature and forward-backward or left-right orientations perform best.
We finally discuss opportunities for designing novel interaction techniques for
precise 3D drawing. We believe that our work will benefit future research
aiming to create usable toolboxes for precise 3D drawing.
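The study measures precision on a 3D target-tracing task. As a hypothetical illustration only (this summary does not state the paper's exact metric, and `tracing_error` is an assumed name), tracing precision could be quantified as the mean distance from sampled stroke points to the target curve, approximated as a polyline:

```python
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to the line segment from a to b."""
    ab = b - a
    # Project p onto the segment, clamping to the endpoints.
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def tracing_error(stroke, target):
    """Mean distance from each stroke sample to the target polyline.

    stroke: (N, 3) array of drawn points; target: (M, 3) array of
    polyline vertices approximating the 3D target curve.
    """
    dists = [min(point_to_segment(p, target[i], target[i + 1])
                 for i in range(len(target) - 1))
             for p in stroke]
    return float(np.mean(dists))
```

A lower value means a more precise trace; a stroke lying exactly on the target yields zero error.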
Related papers
- Diff3DS: Generating View-Consistent 3D Sketch via Differentiable Curve Rendering [17.918603435615335]
3D sketches are widely used for visually representing the 3D shape and structure of objects or scenes.
We propose Diff3DS, a novel differentiable framework for generating view-consistent 3D sketches.
Our framework bridges the domains of 3D sketches and customized images, achieving end-to-end optimization of 3D sketches.
arXiv Detail & Related papers (2024-05-24T07:48:14Z)
- PonderV2: Pave the Way for 3D Foundation Model with A Universal Pre-training Paradigm [114.47216525866435]
We introduce a novel universal 3D pre-training framework designed to facilitate the acquisition of efficient 3D representation.
For the first time, PonderV2 achieves state-of-the-art performance on 11 indoor and outdoor benchmarks, demonstrating its effectiveness.
arXiv Detail & Related papers (2023-10-12T17:59:57Z)
- Act3D: 3D Feature Field Transformers for Multi-Task Robotic Manipulation [18.964403296437027]
Act3D represents the robot's workspace using a 3D feature field with adaptive resolutions dependent on the task at hand.
It samples 3D point grids in a coarse to fine manner, featurizes them using relative-position attention, and selects where to focus the next round of point sampling.
arXiv Detail & Related papers (2023-06-30T17:34:06Z)
- SepicNet: Sharp Edges Recovery by Parametric Inference of Curves in 3D Shapes [16.355677959323426]
We introduce SepicNet, a novel deep network for the detection and parametrization of sharp edges in 3D shapes as primitive curves.
We develop an adaptive point cloud sampling technique that captures the sharp features better than uniform sampling.
arXiv Detail & Related papers (2023-04-13T13:37:21Z)
- VRContour: Bringing Contour Delineations of Medical Structures Into Virtual Reality [16.726748230138696]
Contouring is an indispensable step in Radiotherapy (RT) treatment planning.
Today's contouring software is constrained to only work with a 2D display, which is less intuitive and requires high task loads.
We present VRContour and investigate how to effectively bring contouring for radiation oncology into VR.
arXiv Detail & Related papers (2022-10-21T23:22:21Z)
- Towards 3D VR-Sketch to 3D Shape Retrieval [128.47604316459905]
We study the use of 3D sketches as an input modality and advocate a VR-scenario where retrieval is conducted.
As a first stab at this new 3D VR-sketch to 3D shape retrieval problem, we make four contributions.
arXiv Detail & Related papers (2022-09-20T22:04:31Z)
- Fine-Grained VR Sketching: Dataset and Insights [140.0579567561475]
We present the first fine-grained dataset of 1,497 3D VR sketch and 3D shape pairs of a chair category with large shape diversity.
Our dataset supports the recent trend in the sketch community on fine-grained data analysis.
arXiv Detail & Related papers (2022-09-20T21:30:54Z)
- Structure-Aware 3D VR Sketch to 3D Shape Retrieval [113.20120789493217]
We focus on the challenge caused by inherent inaccuracies in 3D VR sketches.
We propose to use a triplet loss with an adaptive margin value driven by a "fitting gap".
We introduce a dataset of 202 VR sketches for 202 3D shapes drawn from memory rather than from observation.
arXiv Detail & Related papers (2022-09-19T14:29:26Z)
- TANDEM3D: Active Tactile Exploration for 3D Object Recognition [16.548376556543015]
We propose TANDEM3D, a method that applies a co-training framework for 3D object recognition with tactile signals.
TANDEM3D is based on a novel encoder that builds 3D object representation from contact positions and normals using PointNet++.
Our method is trained entirely in simulation and validated with real-world experiments.
arXiv Detail & Related papers (2022-09-19T05:54:26Z)
- Can We Solve 3D Vision Tasks Starting from A 2D Vision Transformer? [111.11502241431286]
Vision Transformers (ViTs) have proven to be effective in solving 2D image understanding tasks.
ViTs for 2D and 3D tasks have so far adopted vastly different architecture designs that are hardly transferable.
This paper demonstrates the appealing promise to understand the 3D visual world, using a standard 2D ViT architecture.
arXiv Detail & Related papers (2022-09-15T03:34:58Z)
- 3D-to-2D Distillation for Indoor Scene Parsing [78.36781565047656]
We present a new approach that enables us to leverage 3D features extracted from large-scale 3D data repository to enhance 2D features extracted from RGB images.
First, we distill 3D knowledge from a pretrained 3D network to supervise a 2D network to learn simulated 3D features from 2D features during the training.
Second, we design a two-stage dimension normalization scheme to calibrate the 2D and 3D features for better integration.
Third, we design a semantic-aware adversarial training model to extend our framework for training with unpaired 3D data.
arXiv Detail & Related papers (2021-04-06T02:22:24Z)
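Among the related papers, the structure-aware 3D VR sketch retrieval work uses a triplet loss whose margin adapts per sample, driven by a "fitting gap". As a minimal sketch of that idea (the actual fitting-gap computation is a detail of the cited paper; here the per-sample margin is simply supplied, and `adaptive_triplet_loss` is an assumed name):

```python
import numpy as np

def adaptive_triplet_loss(anchor, positive, negative, margin):
    """Triplet loss with a per-sample margin.

    anchor, positive, negative: (N, D) embedding arrays;
    margin: (N,) array of per-sample margins (in the cited paper this
    would be derived from a sketch-to-shape "fitting gap").
    """
    d_pos = np.linalg.norm(anchor - positive, axis=-1)  # anchor-positive distances
    d_neg = np.linalg.norm(anchor - negative, axis=-1)  # anchor-negative distances
    # Hinge: penalize only when the negative is not at least `margin` farther.
    return float(np.maximum(d_pos - d_neg + margin, 0.0).mean())
```

A larger margin for an inaccurate sketch tolerates less slack between positive and negative distances, which is the intuition behind fitting-gap-driven margins.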
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.