VRContour: Bringing Contour Delineations of Medical Structures Into
Virtual Reality
- URL: http://arxiv.org/abs/2210.12298v2
- Date: Tue, 8 Nov 2022 04:47:20 GMT
- Title: VRContour: Bringing Contour Delineations of Medical Structures Into
Virtual Reality
- Authors: Chen Chen, Matin Yarmand, Varun Singh, Michael V. Sherer, James D.
Murphy, Yang Zhang, Nadir Weibel
- Abstract summary: Contouring is an indispensable step in Radiotherapy (RT) treatment planning.
Today's contouring software is constrained to work only with a 2D display, which is less intuitive and imposes high task loads.
We present VRContour and investigate how to effectively bring contouring for radiation oncology into VR.
- Score: 16.726748230138696
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Contouring is an indispensable step in Radiotherapy (RT) treatment planning.
However, today's contouring software is constrained to work only with a 2D
display, which is less intuitive and imposes high task loads. Virtual Reality
(VR) has shown great potential in various specialties of healthcare and health
sciences education due to the unique advantages of intuitive and natural
interactions in immersive spaces. VR-based radiation oncology integration has
also been advocated as a target healthcare application, allowing providers to
directly interact with 3D medical structures. We present VRContour and
investigate how to effectively bring contouring for radiation oncology into VR.
Through an autobiographical iterative design, we defined three design spaces
focused on contouring in VR, supported by a tracked tablet and VR stylus, and
investigated the dimensionality of information consumption and input (either
2D or 2D + 3D). Through a within-subject study (n = 8), we found that
visualizations of 3D medical structures significantly increase precision and
reduce mental load, frustration, and overall contouring effort. Participants
also agreed that such metaphors are beneficial for learning purposes.
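To make the gap between slice-wise (2D) contouring and the 3D structures being delineated concrete, the sketch below stores a contour as closed polygons on axial slices, rasterizes them into a binary volume, and scores agreement with a reference structure using a Dice coefficient. This is a minimal, hypothetical illustration only: the function names, grid sizes, and the choice of Dice as the precision measure are assumptions, not taken from VRContour.

```python
import numpy as np

def rasterize_slice(polygon_xy, grid_shape):
    """Fill a closed 2D polygon (list of (x, y) vertices) on one axial slice.

    Uses a simple even-odd (ray casting) point-in-polygon test per pixel;
    real planning systems use far more efficient rasterizers.
    """
    pts = np.asarray(polygon_xy, dtype=float)
    ys, xs = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    inside = np.zeros(grid_shape, dtype=bool)
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        crosses = ((y0 > ys) != (y1 > ys)) & (
            xs < (x1 - x0) * (ys - y0) / (y1 - y0 + 1e-12) + x0)
        inside ^= crosses
    return inside

def contours_to_volume(slice_contours, volume_shape):
    """Stack per-slice contours (dict: slice index -> polygon) into a 3D mask."""
    volume = np.zeros(volume_shape, dtype=bool)
    for z, polygon in slice_contours.items():
        volume[z] = rasterize_slice(polygon, volume_shape[1:])
    return volume

def dice(a, b):
    """Dice overlap between two binary volumes; 1.0 means perfect agreement."""
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + 1e-12)

# Toy comparison of a drawn contour against a reference on a tiny 3-slice volume.
drawn = contours_to_volume({1: [(2, 2), (7, 2), (7, 7), (2, 7)]}, (3, 10, 10))
reference = contours_to_volume({1: [(3, 3), (8, 3), (8, 8), (3, 8)]}, (3, 10, 10))
print(f"Dice = {dice(drawn, reference):.2f}")
```

In this framing, a 2D-only workflow edits the per-slice polygons, while a 2D + 3D workflow would additionally render the stacked volume as a mesh in the headset; the study's precision results could then be reported with an overlap measure of this kind.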
Related papers
- Advanced XR-Based 6-DOF Catheter Tracking System for Immersive Cardiac Intervention Training [37.69303106863453]
This paper presents a novel system for real-time 3D tracking and visualization of intracardiac echocardiography (ICE) catheters.
A custom 3D-printed setup captures biplane video of the catheter, while a specialized computer vision algorithm reconstructs its 3D trajectory.
The system's data is integrated into an interactive Unity-based environment, rendered through the Meta Quest 3 XR headset.
arXiv Detail & Related papers (2024-11-04T21:05:40Z)
- SUGAR: Pre-training 3D Visual Representations for Robotics [85.55534363501131]
We introduce a novel 3D pre-training framework for robotics named SUGAR.
SUGAR captures semantic, geometric and affordance properties of objects through 3D point clouds.
We show that SUGAR's 3D representation outperforms state-of-the-art 2D and 3D representations.
arXiv Detail & Related papers (2024-04-01T21:23:03Z)
- Volumetric Environment Representation for Vision-Language Navigation [66.04379819772764]
Vision-language navigation (VLN) requires an agent to navigate through a 3D environment based on visual observations and natural language instructions.
We introduce a Volumetric Environment Representation (VER), which voxelizes the physical world into structured 3D cells.
VER predicts 3D occupancy, 3D room layout, and 3D bounding boxes jointly.
arXiv Detail & Related papers (2024-03-21T06:14:46Z)
- VR-GS: A Physical Dynamics-Aware Interactive Gaussian Splatting System in Virtual Reality [39.53150683721031]
Our proposed VR-GS system represents a leap forward in human-centered 3D content interaction.
The components of our Virtual Reality system are designed for high efficiency and effectiveness.
arXiv Detail & Related papers (2024-01-30T01:28:36Z)
- Multisensory extended reality applications offer benefits for volumetric biomedical image analysis in research and medicine [2.46537907738351]
3D data from high-resolution volumetric imaging is a central resource for diagnosis and treatment in modern medicine.
Recent research has used extended reality (XR) to perceive 3D images with visual depth and touch, but relied on restrictive haptic devices.
In this study, 24 experts in biomedical imaging from research and medicine explored 3D medical shapes with three applications.
arXiv Detail & Related papers (2023-11-07T13:37:47Z)
- Investigating Input Modality and Task Geometry on Precision-first 3D Drawing in Virtual Reality [16.795850221628033]
We investigated how task geometry and input modalities affect precision-first drawing performance.
We found that, compared to using bare hands, VR controllers and pens yield nearly a 30% gain in precision.
arXiv Detail & Related papers (2022-10-21T21:56:43Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real time with 3.3% mean error and generalize to new users with little calibration.
We envision our findings pushing research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- Towards 3D VR-Sketch to 3D Shape Retrieval [128.47604316459905]
We study the use of 3D sketches as an input modality and advocate a VR scenario in which retrieval is conducted.
As a first stab at this new 3D VR-sketch to 3D shape retrieval problem, we make four contributions.
arXiv Detail & Related papers (2022-09-20T22:04:31Z)
- Structure-Aware 3D VR Sketch to 3D Shape Retrieval [113.20120789493217]
We focus on the challenge caused by inherent inaccuracies in 3D VR sketches.
We propose to use a triplet loss with an adaptive margin value driven by a "fitting gap" (an illustrative sketch of such an adaptive-margin loss appears after this list).
We introduce a dataset of 202 VR sketches for 202 3D shapes drawn from memory rather than from observation.
arXiv Detail & Related papers (2022-09-19T14:29:26Z)
- Pixel Codec Avatars [99.36561532588831]
Pixel Codec Avatars (PiCA) is a deep generative model of 3D human faces.
On a single Oculus Quest 2 mobile VR headset, 5 avatars are rendered in real time in the same scene.
arXiv Detail & Related papers (2021-04-09T23:17:36Z)
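For the adaptive-margin triplet loss mentioned in the Structure-Aware 3D VR Sketch entry above, the following is a minimal numpy sketch of the general idea only: each triplet's margin is shifted by a per-sample "fitting gap" score. The exact mapping from fitting gap to margin (including its sign) in the cited paper may differ, and `adaptive_margin_triplet_loss`, `base_margin`, and `scale` are illustrative names rather than the authors' API.

```python
import numpy as np

def adaptive_margin_triplet_loss(anchor, positive, negative, fitting_gap,
                                 base_margin=0.2, scale=1.0):
    """Triplet loss with a per-triplet margin.

    anchor/positive/negative: (N, D) embedding arrays; fitting_gap: (N,) scores.
    loss_i = max(0, d(a_i, p_i) - d(a_i, n_i) + margin_i),
    with margin_i = base_margin + scale * fitting_gap_i (one plausible mapping).
    """
    d_pos = np.linalg.norm(anchor - positive, axis=1)
    d_neg = np.linalg.norm(anchor - negative, axis=1)
    margin = base_margin + scale * fitting_gap
    return np.maximum(0.0, d_pos - d_neg + margin).mean()

# Toy usage with random embeddings and made-up fitting-gap scores.
rng = np.random.default_rng(0)
a, p, n = (rng.normal(size=(4, 8)) for _ in range(3))
gap = np.array([0.0, 0.1, 0.3, 0.6])
print(f"loss = {adaptive_margin_triplet_loss(a, p, n, gap):.3f}")
```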