3D Gaze Vis: Sharing Eye Tracking Data Visualization for Collaborative
Work in VR Environment
- URL: http://arxiv.org/abs/2303.10635v1
- Date: Sun, 19 Mar 2023 12:00:53 GMT
- Title: 3D Gaze Vis: Sharing Eye Tracking Data Visualization for Collaborative
Work in VR Environment
- Authors: Song Zhao, Shiwei Cheng, Chenshuang Zhu
- Abstract summary: We designed three eye tracking data visualizations, gaze cursor, gaze spotlight, and gaze trajectory, in a VR scene for a course on the human heart.
We found that the gaze cursor from doctors could help students learn complex 3D heart models more effectively.
This indicated that sharing eye tracking data visualizations could improve the quality and efficiency of collaborative work in the VR environment.
- Score: 3.3130410344903325
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Conducting collaborative tasks, e.g., multi-user games, in virtual reality
(VR) could enable more immersive and effective experiences. However, in current VR
systems users cannot communicate properly with each other via their gaze points,
which interferes with their mutual understanding of each other's intentions. In this
study, we aimed to find the optimal eye tracking data visualization, one that
minimized cognitive interference and improved the understanding of visual attention
and intention between users. We designed three eye tracking data visualizations,
gaze cursor, gaze spotlight, and gaze trajectory, in a VR scene for a course on the
human heart, and found that the gaze cursor from doctors could help students learn
complex 3D heart models more effectively. To explore further, pairs of students were
asked to complete a quiz in the VR environment while sharing gaze cursors with each
other, and they achieved higher efficiency and scores. This indicated that sharing
eye tracking data visualizations could improve the quality and efficiency of
collaborative work in the VR environment.
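As a rough illustration of the gaze cursor described above (the paper itself does not publish code), the sketch below shows one way a client could turn a raw gaze sample into a shared cursor: cast a ray from the eye along the gaze direction, intersect it with the scene mesh, and broadcast the hit point so the partner's client can render a cursor there. All names (GazeSample, gaze_cursor_position, send_to_peer) and the choice of a plain Moller-Trumbore ray/triangle test are assumptions for illustration, not details from the paper.

```python
# Minimal sketch (not from the paper): turning a raw gaze sample into a shared cursor.
# GazeSample, gaze_cursor_position, and send_to_peer are hypothetical names.
from dataclasses import dataclass
from typing import Callable, Optional

import numpy as np

@dataclass
class GazeSample:
    origin: np.ndarray     # eye position in world coordinates, shape (3,)
    direction: np.ndarray  # unit gaze direction in world coordinates, shape (3,)

def gaze_cursor_position(sample: GazeSample, triangles: np.ndarray) -> Optional[np.ndarray]:
    """Return the closest gaze/scene intersection, or None if the gaze ray misses.

    `triangles` is an (N, 3, 3) array of scene triangles (e.g. the 3D heart model),
    tested with the standard Moller-Trumbore ray/triangle intersection.
    """
    best_t = np.inf
    for v0, v1, v2 in triangles:
        e1, e2 = v1 - v0, v2 - v0
        p = np.cross(sample.direction, e2)
        det = e1.dot(p)
        if abs(det) < 1e-9:                 # gaze ray parallel to this triangle
            continue
        inv_det = 1.0 / det
        s = sample.origin - v0
        u = s.dot(p) * inv_det
        if u < 0.0 or u > 1.0:
            continue
        q = np.cross(s, e1)
        v = sample.direction.dot(q) * inv_det
        if v < 0.0 or u + v > 1.0:
            continue
        t = e2.dot(q) * inv_det
        if 1e-6 < t < best_t:               # keep the nearest hit in front of the eye
            best_t = t
    if np.isinf(best_t):
        return None
    return sample.origin + best_t * sample.direction

def share_gaze(sample: GazeSample, triangles: np.ndarray,
               send_to_peer: Callable[[dict], None]) -> None:
    """Compute the cursor position for this frame and broadcast it to the partner."""
    hit = gaze_cursor_position(sample, triangles)
    if hit is not None:
        send_to_peer({"type": "gaze_cursor", "position": hit.tolist()})
```

Under the same assumptions, the gaze spotlight and gaze trajectory variants could reuse the same hit point: the spotlight as a soft highlight projected around it, and the trajectory as a polyline through the hit points of consecutive frames.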
Related papers
- Exploring Eye Tracking to Detect Cognitive Load in Complex Virtual Reality Training [11.83314968015781]
We present an ongoing study to detect users' cognitive load using an eye-tracking-based machine learning approach.
We developed a VR training system for cold spray and tested it with 22 participants.
Preliminary analysis demonstrates the feasibility of using eye-tracking to detect cognitive load in complex VR experiences (an illustrative sketch of such a gaze-feature classifier appears at the end of this page).
arXiv Detail & Related papers (2024-11-18T16:44:19Z)
- The Trail Making Test in Virtual Reality (TMT-VR): The Effects of Interaction Modes and Gaming Skills on Cognitive Performance of Young Adults [0.7916635054977068]
This study developed and evaluated the Trail Making Test in VR (TMT-VR).
It investigated the effects of different interaction modes and gaming skills on cognitive performance.
arXiv Detail & Related papers (2024-10-30T22:06:14Z)
- Learning High-Quality Navigation and Zooming on Omnidirectional Images in Virtual Reality [37.564863636844905]
We present a novel system, called OmniVR, designed to enhance visual clarity during VR navigation.
Our system enables users to effortlessly locate and zoom in on the objects of interest in VR.
arXiv Detail & Related papers (2024-05-01T07:08:24Z)
- Human-oriented Representation Learning for Robotic Manipulation [64.59499047836637]
Humans inherently possess generalizable visual representations that empower them to efficiently explore and interact with their environments in manipulation tasks.
We formalize this idea through the lens of human-oriented multi-task fine-tuning on top of pre-trained visual encoders.
Our Task Fusion Decoder consistently improves the representation of three state-of-the-art visual encoders for downstream manipulation policy-learning.
arXiv Detail & Related papers (2023-10-04T17:59:38Z)
- What do we learn from a large-scale study of pre-trained visual representations in sim and real environments? [48.75469525877328]
We present a large empirical investigation on the use of pre-trained visual representations (PVRs) for training downstream policies that execute real-world tasks.
Among the insights: 1) the performance trends of PVRs in simulation are generally indicative of their trends in the real world, and 2) the use of PVRs enables a first-of-its-kind result with indoor ImageNav.
arXiv Detail & Related papers (2023-10-03T17:27:10Z)
- Eye-tracked Virtual Reality: A Comprehensive Survey on Methods and
Privacy Challenges [33.50215933003216]
This survey focuses on eye tracking in virtual reality (VR) and the privacy implications of its capabilities.
We first cover major works in eye tracking, VR, and privacy areas between the years 2012 and 2022.
We focus on eye-based authentication as well as computational methods to preserve the privacy of individuals and their eye-tracking data in VR.
arXiv Detail & Related papers (2023-05-23T14:02:38Z)
- VRContour: Bringing Contour Delineations of Medical Structures Into
Virtual Reality [16.726748230138696]
Contouring is an indispensable step in Radiotherapy (RT) treatment planning.
Today's contouring software is constrained to only work with a 2D display, which is less intuitive and requires high task loads.
We present VRContour and investigate how to effectively bring contouring for radiation oncology into VR.
arXiv Detail & Related papers (2022-10-21T23:22:21Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- Learning Effect of Lay People in Gesture-Based Locomotion in Virtual
Reality [81.5101473684021]
Some of the most promising VR locomotion methods are gesture-based and do not require additional handheld hardware.
Recent work focused mostly on user preference and performance of the different locomotion techniques.
This work investigates whether and how quickly users can adapt to a hand gesture-based locomotion system in VR.
arXiv Detail & Related papers (2022-06-16T10:44:16Z)
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual
Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
- What Can You Learn from Your Muscles? Learning Visual Representation
from Human Interactions [50.435861435121915]
We use human interaction and attention cues to investigate whether we can learn better representations compared to visual-only representations.
Our experiments show that our "muscly-supervised" representation outperforms the visual-only state-of-the-art method MoCo.
arXiv Detail & Related papers (2020-10-16T17:46:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
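For the first related paper above (detecting cognitive load from eye tracking), the listing gives no implementation detail. Purely as an illustration of what an eye-tracking-based cognitive load classifier could look like, the sketch below aggregates a window of gaze data into simple features and trains a standard classifier; the feature set, the placeholder data, and the use of scikit-learn are assumptions, not details from that paper.

```python
# Illustrative sketch only: classifying high vs. low cognitive load from simple gaze
# features. The features, placeholder data, and model choice are assumptions and are
# not taken from the cited paper.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def gaze_features(pupil_diameter_mm, fixation_durations_ms, saccade_count, window_s):
    """Aggregate one time window of eye tracking data into a feature vector."""
    return np.array([
        np.mean(pupil_diameter_mm),        # mean pupil diameter
        np.std(pupil_diameter_mm),         # pupil diameter variability
        np.mean(fixation_durations_ms),    # mean fixation duration
        saccade_count / window_s,          # saccade rate
    ])

rng = np.random.default_rng(0)
# Placeholder windows standing in for real recordings from a VR training task.
windows = [
    (rng.normal(3.5, 0.3, size=120),       # pupil diameter samples (mm)
     rng.normal(250, 60, size=20),         # fixation durations (ms)
     int(rng.integers(5, 40)),             # saccade count in the window
     10.0)                                 # window length (s)
    for _ in range(200)
]
X = np.array([gaze_features(*w) for w in windows])
y = rng.integers(0, 2, size=200)           # load labels (0 = low, 1 = high), e.g. from task conditions

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("Cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```

With random placeholder labels the accuracy hovers around chance; the point of the sketch is only the shape of the pipeline (windowed gaze features in, load label out), with real recordings and experiment-defined labels substituted in practice.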