DeFINE: Delayed Feedback based Immersive Navigation Environment for
Studying Goal-Directed Human Navigation
- URL: http://arxiv.org/abs/2003.03133v2
- Date: Mon, 15 Feb 2021 09:03:41 GMT
- Title: DeFINE: Delayed Feedback based Immersive Navigation Environment for
Studying Goal-Directed Human Navigation
- Authors: Kshitij Tiwari, Ville Kyrki, Allen Cheung, Naohide Yamamoto
- Abstract summary: Delayed Feedback based Immersive Navigation Environment (DeFINE) is a framework that allows for easy creation and administration of navigation tasks.
DeFINE has a built-in capability to provide performance feedback to participants during an experiment.
- Score: 10.7197371210731
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the advent of consumer-grade products for presenting an immersive
virtual environment (VE), there is a growing interest in utilizing VEs for
testing human navigation behavior. However, preparing a VE still requires a
high level of technical expertise in computer graphics and virtual reality,
posing a significant hurdle to embracing the emerging technology. To address
this issue, this paper presents Delayed Feedback based Immersive Navigation
Environment (DeFINE), a framework that allows for easy creation and
administration of navigation tasks within customizable VEs via intuitive
graphical user interfaces and simple settings files. Importantly, DeFINE has a
built-in capability to provide performance feedback to participants during an
experiment, a feature that is critically missing in other similar frameworks.
To show the usability of DeFINE from both experimentalists' and participants'
perspectives, a demonstration was made in which participants navigated to a
hidden goal location with feedback that differentially weighted speed and
accuracy of their responses. In addition, the participants evaluated DeFINE in
terms of its ease of use, required workload, and proneness to induce
cybersickness. The demonstration exemplified typical experimental manipulations
DeFINE accommodates and what types of data it can collect for characterizing
participants' task performance. With its out-of-the-box functionality and
potential customizability due to open-source licensing, DeFINE makes VEs more
accessible to many researchers.
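The demonstration's trial-by-trial feedback traded speed off against accuracy. A minimal sketch of how such a differentially weighted score could be computed follows; the function name, weights, and normalization are illustrative assumptions, not DeFINE's actual scoring rule:

```python
# Hypothetical sketch of performance feedback that differentially weights
# speed and accuracy, in the spirit of the DeFINE demonstration. All names,
# weights, and normalization constants are illustrative assumptions.
import math

def feedback_score(response_xy, goal_xy, elapsed_s,
                   w_accuracy=0.7, w_speed=0.3,
                   max_error_m=10.0, max_time_s=60.0):
    """Return a 0-100 score that rewards both placement accuracy and speed."""
    # Accuracy term: 1.0 at the hidden goal, 0.0 at max_error_m or beyond.
    error = math.dist(response_xy, goal_xy)
    accuracy = max(0.0, 1.0 - error / max_error_m)
    # Speed term: 1.0 for an instant response, 0.0 at max_time_s or beyond.
    speed = max(0.0, 1.0 - elapsed_s / max_time_s)
    return 100.0 * (w_accuracy * accuracy + w_speed * speed)

# Example: stopping 2 m from the goal after 30 s.
print(feedback_score((3.0, 4.0), (5.0, 4.0), 30.0))  # -> 71.0
```

Shifting weight between the speed and accuracy terms reproduces the kind of differential weighting manipulated in the demonstration.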
Related papers
- DRISHTI: Visual Navigation Assistant for Visually Impaired [0.0]
Blind and visually impaired (BVI) people face challenges because they need manual support to obtain information about their environment.
In this work, we took our first step towards developing an affordable and high-performing eye-wearable assistive device, DRISHTI.
arXiv Detail & Related papers (2023-03-13T20:10:44Z)
- ESC: Exploration with Soft Commonsense Constraints for Zero-shot Object Navigation [75.13546386761153]
We present a novel zero-shot object navigation method, Exploration with Soft Commonsense Constraints (ESC).
ESC transfers commonsense knowledge in pre-trained models to open-world object navigation without any navigation experience.
Experiments on MP3D, HM3D, and RoboTHOR benchmarks show that our ESC method improves significantly over baselines.
arXiv Detail & Related papers (2023-01-30T18:37:32Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings pushing research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- The Gesture Authoring Space: Authoring Customised Hand Gestures for Grasping Virtual Objects in Immersive Virtual Environments [81.5101473684021]
This work proposes a hand gesture authoring tool for object-specific grab gestures, allowing virtual objects to be grabbed as in the real world.
The presented solution uses template matching for gesture recognition and requires no technical knowledge to design and create custom-tailored hand gestures (a generic template-matching sketch appears after this list).
The study showed that gestures created with the proposed approach are perceived by users as a more natural input modality than the alternatives.
arXiv Detail & Related papers (2022-07-03T18:33:33Z)
- Let's Go to the Alien Zoo: Introducing an Experimental Framework to Study Usability of Counterfactual Explanations for Machine Learning [6.883906273999368]
Counterfactual explanations (CFEs) have gained traction as a psychologically grounded approach to generate post-hoc explanations.
We introduce the Alien Zoo, an engaging, web-based and game-inspired experimental framework.
As a proof of concept, we demonstrate the practical efficacy and feasibility of this approach in a user study.
arXiv Detail & Related papers (2022-05-06T17:57:05Z)
- Image-based Navigation in Real-World Environments via Multiple Mid-level Representations: Fusion Models, Benchmark and Efficient Evaluation [13.207579081178716]
In recent learning-based navigation approaches, the scene understanding and navigation abilities of the agent are achieved simultaneously.
Unfortunately, even though simulators are an efficient tool for training navigation policies, the resulting models often fail when transferred to the real world.
One possible solution is to provide the navigation model with mid-level visual representations containing important domain-invariant properties of the scene.
arXiv Detail & Related papers (2022-02-02T15:00:44Z)
- Game and Simulation Design for Studying Pedestrian-Automated Vehicle Interactions [1.3764085113103217]
We first present contemporary tools in the field and then propose the design and development of a new application that facilitates pedestrian point-of-view research.
We conduct a three-step user experience experiment where participants answer questions before and after using the application in various scenarios.
arXiv Detail & Related papers (2021-09-30T15:26:18Z)
- Augmented reality navigation system for visual prosthesis [67.09251544230744]
We propose an augmented reality navigation system for visual prosthesis that incorporates reactive navigation and path-planning software.
It consists of four steps: locating the subject on a map, planning the subject's trajectory, showing it to the subject, and re-planning to avoid obstacles.
Results show how our augmented reality navigation system aids navigation performance by reducing the time and distance needed to reach goals, and significantly reducing the number of obstacle collisions.
arXiv Detail & Related papers (2021-09-30T09:41:40Z)
- Diagnosing Vision-and-Language Navigation: What Really Matters [61.72935815656582]
Vision-and-language navigation (VLN) is a multimodal task where an agent follows natural language instructions and navigates in visual environments.
Recent studies report a slowdown in performance improvements on both indoor and outdoor VLN tasks.
In this work, we conduct a series of diagnostic experiments to unveil agents' focus during navigation.
arXiv Detail & Related papers (2021-03-30T17:59:07Z)
- Building Trust in Autonomous Vehicles: Role of Virtual Reality Driving Simulators in HMI Design [8.39368916644651]
We propose a methodology to validate the user experience in AVs based on continuous, objective information gathered from physiological signals.
We applied this methodology to the design of a head-up display interface delivering visual cues about the vehicle's sensory and planning systems.
arXiv Detail & Related papers (2020-07-27T08:42:07Z)
- Visual Navigation Among Humans with Optimal Control as a Supervisor [72.5188978268463]
We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans.
Our approach is enabled by our novel data-generation tool, HumANav.
We demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion.
arXiv Detail & Related papers (2020-03-20T16:13:47Z)
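As referenced in the Gesture Authoring Space entry above, template matching is a common way to recognize authored gestures: a live hand pose is compared against stored template poses. A minimal sketch of the generic idea, where the joint layout, distance metric, and threshold are all illustrative assumptions rather than that paper's actual method:

```python
# Minimal nearest-template gesture classifier, illustrating the generic
# template-matching idea mentioned above. Joint layout, metric, and
# threshold are assumptions for illustration, not that paper's method.
import numpy as np

def match_gesture(pose, templates, threshold=0.15):
    """pose: (n_joints, 3) array of hand-joint positions.
    templates: dict mapping gesture name -> (n_joints, 3) template array.
    Returns the best-matching gesture name, or None if none is close enough."""
    best_name, best_dist = None, float("inf")
    for name, template in templates.items():
        # Mean per-joint Euclidean distance between the live pose and template.
        dist = np.linalg.norm(pose - template, axis=1).mean()
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Example: two stored templates and a query pose near the "grab" template.
rng = np.random.default_rng(0)
grab, pinch = rng.random((21, 3)), rng.random((21, 3))
print(match_gesture(grab + 0.01, {"grab": grab, "pinch": pinch}))  # -> grab
```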
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.