Action-Specific Perception & Performance on a Fitts's Law Task in
Virtual Reality: The Role of Haptic Feedback
- URL: http://arxiv.org/abs/2207.07400v2
- Date: Mon, 18 Jul 2022 06:47:45 GMT
- Title: Action-Specific Perception & Performance on a Fitts's Law Task in
Virtual Reality: The Role of Haptic Feedback
- Authors: Panagiotis Kourtesis, Sebastian Vizcay, Maud Marchal, Claudio
Pacchierotti, Ferran Argelaguet
- Abstract summary: Action-Specific Perception (ASP) theory postulates that an individual's performance on a task modulates that individual's spatial and time perception pertinent to the task's components and procedures.
This paper examines the association between performance and perception, and the potential effects that tactile feedback modalities could generate.
- Score: 8.993666948179644
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: While users' perception and performance are predominantly examined
independently in virtual reality, the Action-Specific Perception (ASP) theory
postulates that an individual's performance on a task modulates that
individual's spatial and time perception pertinent to the task's components
and procedures. This paper examines the association between performance and
perception, and the potential effects that tactile feedback modalities could
generate. This paper reports a user study (N=24) in which participants
performed a Fitts's law target acquisition task using three feedback
modalities: visual, visuo-electrotactile, and visuo-vibrotactile. The users
completed 18 trials (3 target sizes × 2 distances × 3 feedback modalities).
Size perception, distance perception, and (movement) time perception were
assessed at the end of each trial. Performance-wise, the results showed that
electrotactile feedback facilitates significantly better accuracy than
vibrotactile and visual feedback, while vibrotactile feedback provided the
worst accuracy. Electrotactile and visual feedback enabled comparable reaction
times, while vibrotactile feedback yielded substantially slower reaction times
than visual feedback. Although the pattern of differences in the perceptual
measures across feedback types mirrored the performance differences, none of
these perceptual differences was statistically significant. However,
performance did modulate perception: significant action-specific effects on
spatial and time perception were detected. Changes in accuracy modulated both
size perception and time perception, while changes in movement speed modulated
distance perception. The index of difficulty was also found to modulate
perception. These outcomes highlight the importance of haptic feedback for
performance and, importantly, the significance of action-specific effects on
spatial and time perception in VR, which should be considered in future VR
studies.
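Since both the trial design and the index of difficulty follow directly from Fitts's law, a worked example may help. The sketch below computes the Shannon-formulation index of difficulty, ID = log2(D/W + 1), over a hypothetical 3 target sizes × 2 distances grid; the widths and distances are illustrative assumptions, as the abstract does not report the actual dimensions used in the study.

```python
import math

# Illustrative values only: the study uses 3 target sizes x 2 distances,
# but the abstract does not report the actual dimensions.
target_widths_m = [0.015, 0.025, 0.035]  # hypothetical target widths W (metres)
distances_m = [0.15, 0.30]               # hypothetical movement amplitudes D (metres)

def index_of_difficulty(d: float, w: float) -> float:
    """Shannon formulation of Fitts's index of difficulty, in bits."""
    return math.log2(d / w + 1)

for d in distances_m:
    for w in target_widths_m:
        print(f"D={d:.2f} m, W={w:.3f} m -> ID = {index_of_difficulty(d, w):.2f} bits")
```

With three widths and two distances this yields six ID levels; crossed with the three feedback modalities, that gives the 18 trials reported above.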
Related papers
- V-HOP: Visuo-Haptic 6D Object Pose Tracking [18.984396185797667]
Humans naturally integrate vision and haptics for robust object perception during manipulation.
Prior object pose estimation research has attempted to combine visual and haptic/tactile feedback.
We introduce a new visuo-haptic transformer-based object pose tracker that seamlessly integrates visual and haptic input.
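To make the idea of transformer-based visuo-haptic fusion concrete, here is a minimal PyTorch sketch. It is not the V-HOP architecture: the feature widths, token counts, and the pooled pose head are illustrative assumptions; only the general pattern (project both modalities to a shared width, concatenate tokens, let self-attention mix them) reflects what the summary describes.

```python
import torch
import torch.nn as nn

class VisuoHapticFusion(nn.Module):
    """Toy multimodal fusion sketch; all dimensions are assumptions."""
    def __init__(self, dim: int = 128, n_heads: int = 4, n_layers: int = 2):
        super().__init__()
        self.vis_proj = nn.Linear(512, dim)   # assumed visual feature size
        self.hap_proj = nn.Linear(32, dim)    # assumed haptic feature size
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.pose_head = nn.Linear(dim, 7)    # 3D translation + quaternion

    def forward(self, vis_tokens: torch.Tensor, hap_tokens: torch.Tensor) -> torch.Tensor:
        # Project both modalities to a shared width, concatenate along the
        # token axis, and let self-attention fuse them.
        tokens = torch.cat([self.vis_proj(vis_tokens),
                            self.hap_proj(hap_tokens)], dim=1)
        fused = self.encoder(tokens)
        return self.pose_head(fused.mean(dim=1))  # pooled pose estimate

model = VisuoHapticFusion()
pose = model(torch.randn(1, 16, 512), torch.randn(1, 8, 32))
print(pose.shape)  # torch.Size([1, 7])
```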
arXiv Detail & Related papers (2025-02-24T18:59:50Z)
- Influence of field of view in visual prostheses design: Analysis with a VR system [3.9998518782208783]
We evaluate the influence of field of view with respect to spatial resolution in visual prostheses.
Twenty-four normally sighted participants were asked to find and recognize usual objects.
Results show that the accuracy and response time decrease when the field of view is increased.
arXiv Detail & Related papers (2025-01-28T22:25:22Z)
- The Trail Making Test in Virtual Reality (TMT-VR): The Effects of Interaction Modes and Gaming Skills on Cognitive Performance of Young Adults [0.7916635054977068]
This study developed and evaluated the Trail Making Test in VR (TMT-VR).
It investigated the effects of different interaction modes and gaming skills on cognitive performance.
arXiv Detail & Related papers (2024-10-30T22:06:14Z)
- Instantaneous Perception of Moving Objects in 3D [86.38144604783207]
The perception of 3D motion of surrounding traffic participants is crucial for driving safety.
We propose to leverage local occupancy completion of object point clouds to densify the shape cue, and mitigate the impact of swimming artifacts.
Extensive experiments demonstrate superior performance compared to standard 3D motion estimation approaches.
arXiv Detail & Related papers (2024-05-05T01:07:24Z)
- Self-Avatar Animation in Virtual Reality: Impact of Motion Signals Artifacts on the Full-Body Pose Reconstruction [13.422686350235615]
We aim to measure the impact of motion signal artifacts on the reconstruction of the articulated self-avatar's full-body pose.
We analyze the motion reconstruction errors using ground truth and 3D Cartesian coordinates estimated from YOLOv8 pose estimation.
arXiv Detail & Related papers (2024-04-29T12:02:06Z)
- RLPeri: Accelerating Visual Perimetry Test with Reinforcement Learning and Convolutional Feature Extraction [8.88154717905851]
We present RLPeri, a reinforcement learning-based approach to optimize visual perimetry testing.
We aim to make visual perimetry testing more efficient and patient-friendly, while still providing accurate results.
arXiv Detail & Related papers (2024-03-08T07:19:43Z)
- What Makes Pre-Trained Visual Representations Successful for Robust Manipulation? [57.92924256181857]
We find that visual representations designed for manipulation and control tasks do not necessarily generalize under subtle changes in lighting and scene texture.
We find that emergent segmentation ability is a strong predictor of out-of-distribution generalization among ViT models.
arXiv Detail & Related papers (2023-11-03T18:09:08Z)
- User Training with Error Augmentation for Electromyogram-based Gesture Classification [4.203816772270161]
We designed and tested a system for real-time control of a user interface by extracting surface electromyographic (sEMG) activity from eight electrodes in a wrist-band configuration.
sEMG data were streamed into a machine-learning algorithm that classified hand gestures in real-time.
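As a rough illustration of such a streaming sEMG pipeline (not the authors' system), the sketch below extracts two common per-channel features from each sliding window and classifies the window. The channel count matches the 8-electrode wrist band, but the sampling rate, window length, features, and classifier are assumptions, and the training data here are synthetic.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

FS = 1000   # assumed sampling rate (Hz)
WIN = 200   # assumed 200 ms sliding window (samples at 1 kHz)

def emg_features(window: np.ndarray) -> np.ndarray:
    """Mean absolute value + waveform length per channel (common sEMG features)."""
    mav = np.mean(np.abs(window), axis=0)
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)
    return np.concatenate([mav, wl])

rng = np.random.default_rng(0)
# Synthetic training data: 100 windows of 8-channel sEMG, 4 gesture classes.
X = np.stack([emg_features(rng.standard_normal((WIN, 8))) for _ in range(100)])
y = rng.integers(0, 4, size=100)

clf = LinearDiscriminantAnalysis().fit(X, y)

# "Streaming": classify each incoming window as it arrives.
new_window = rng.standard_normal((WIN, 8))
print("predicted gesture:", clf.predict(emg_features(new_window)[None, :])[0])
```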
arXiv Detail & Related papers (2023-09-13T20:15:25Z)
- BigSmall: Efficient Multi-Task Learning for Disparate Spatial and Temporal Physiological Measurements [28.573472322978507]
We present BigSmall, an efficient architecture for physiological and behavioral measurement.
We propose a multi-branch network with wrapping temporal shift modules that yields both accuracy and efficiency gains.
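A temporal shift module in the general TSM spirit can be sketched in a few lines; here the "wrapping" behaviour comes from torch.roll, but whether this matches BigSmall's exact variant is an assumption.

```python
import torch

def wrapping_temporal_shift(x: torch.Tensor, fold_div: int = 8) -> torch.Tensor:
    """x: (batch, time, channels, H, W). Rolls a fraction of the channels
    one step backward/forward in time, wrapping at the clip boundary, so
    each frame sees features from its neighbours without extra compute."""
    fold = x.size(2) // fold_div
    out = x.clone()
    out[:, :, :fold] = torch.roll(x[:, :, :fold], shifts=1, dims=1)          # from past
    out[:, :, fold:2 * fold] = torch.roll(x[:, :, fold:2 * fold], -1, 1)     # from future
    return out

clip = torch.randn(2, 10, 16, 8, 8)   # toy clip: 2 videos, 10 frames, 16 channels
print(wrapping_temporal_shift(clip).shape)  # torch.Size([2, 10, 16, 8, 8])
```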
arXiv Detail & Related papers (2023-03-21T03:41:57Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- Visualizing and Understanding Patch Interactions in Vision Transformer [96.70401478061076]
Vision Transformer (ViT) has become a leading tool in various computer vision tasks.
We propose a novel explainable visualization approach to analyze and interpret the crucial attention interactions among patches for vision transformer.
arXiv Detail & Related papers (2022-03-11T13:48:11Z)
- Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
- Assisted Perception: Optimizing Observations to Communicate State [112.40598205054994]
We aim to help users estimate the state of the world in tasks like robotic teleoperation and navigation with visual impairments.
We synthesize new observations that lead to more accurate internal state estimates when processed by the user.
arXiv Detail & Related papers (2020-08-06T19:08:05Z)