Analyzing Behavior and User Experience in Online Museum Virtual Tours
- URL: http://arxiv.org/abs/2310.11176v1
- Date: Tue, 17 Oct 2023 11:44:57 GMT
- Title: Analyzing Behavior and User Experience in Online Museum Virtual Tours
- Authors: Roman Shikhri, Lev Poretski, Joel Lanir
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The disruption to tourism and travel caused by the COVID-related health
crisis highlighted the potential of virtual tourism to provide a universally
accessible way to engage in cultural experiences. 360-degree virtual tours,
showing a realistic representation of the physical location, enable virtual
tourists to experience cultural heritage sites and engage with their
collections from the comfort and safety of their home. However, there is no
clear standard for the design of such tours and the experience of visitors may
vary widely from platform to platform. We first conducted a comprehensive
analysis of 40 existing virtual tours, constructing a descriptive framework for
understanding the key components and characteristics of virtual tours. Next, we
conducted a remote usability study to gain deeper insights into the actual
experiences and challenges faced by virtual tour (VT) users. Our investigation revealed a
significant disparity between users' mental models of virtual tours and the
actual system behavior. We discuss these issues and provide concrete
recommendations for the creation of better, user-friendly 360-degree virtual
tours.
Related papers
- Sight, Sound and Smell in Immersive Experiences of Urban History: Virtual Vauxhall Gardens Case Study [2.0897860130200443]
This research investigates how multisensory experiences involving olfaction can be effectively integrated into VR reconstructions of historical spaces.
In the context of a VR reconstruction of London's eighteenth-century Vauxhall Pleasure Gardens, we developed a networked portable olfactory display.
Our results show that integrating synchronized olfactory stimuli into the VR experience can enhance user engagement and be perceived positively.
arXiv Detail & Related papers (2025-05-19T18:00:42Z)
- Exploring Context-aware and LLM-driven Locomotion for Immersive Virtual Reality [8.469329222500726]
We propose a novel locomotion technique powered by large language models (LLMs).
We evaluate three locomotion methods: controller-based teleportation, voice-based steering, and our language model-driven approach.
Our findings indicate that LLM-driven locomotion achieves usability, presence, and cybersickness scores comparable to those of established methods.
arXiv Detail & Related papers (2025-04-24T07:48:09Z)
- TraveLLaMA: Facilitating Multi-modal Large Language Models to Understand Urban Scenes and Provide Travel Assistance [48.12326709517022]
We present TraveLLaMA, a specialized multimodal language model designed for urban scene understanding and travel assistance.
Our work addresses the fundamental challenge of developing practical AI travel assistants through a novel large-scale dataset of 220k question-answer pairs.
arXiv Detail & Related papers (2025-04-23T08:32:25Z)
- MetaDecorator: Generating Immersive Virtual Tours through Multimodality [4.952351885592028]
MetaDecorator is a framework that empowers users to personalize virtual spaces.
By leveraging text-driven prompts and image synthesis techniques, MetaDecorator adorns static panoramas captured by 360° imaging devices, transforming them into uniquely styled and visually appealing environments.
arXiv Detail & Related papers (2025-01-27T15:59:58Z)
- RoomTour3D: Geometry-Aware Video-Instruction Tuning for Embodied Navigation [87.8836203762073]
We introduce RoomTour3D, a video-instruction dataset derived from web-based room tour videos.
RoomTour3D generates open-ended human walking trajectories and open-world navigable instructions.
We demonstrate experimentally that RoomTour3D enables significant improvements across multiple Vision-and-Language Navigation tasks.
arXiv Detail & Related papers (2024-12-11T18:10:21Z)
- INDIANA: Personalized Travel Recommendations Using Wearables and AI [0.0]
This work presents a personalized travel recommendation system developed as part of the INDIANA platform.
The system uses data from wearable devices, user preferences, current location, weather forecasts, and activity history to provide real-time, context-aware recommendations.
arXiv Detail & Related papers (2024-11-08T10:11:01Z)
- V-IRL: Grounding Virtual Intelligence in Real Life [65.87750250364411]
V-IRL is a platform that enables agents to interact with the real world in a virtual yet realistic environment.
Our platform serves as a playground for developing agents that can accomplish various practical tasks.
arXiv Detail & Related papers (2024-02-05T18:59:36Z)
- Perspectives from Naive Participants and Experienced Social Science Researchers on Addressing Embodiment in a Virtual Cyberball Task [7.715638414214042]
We describe the design of an immersive virtual Cyberball task that included avatar customization, and user feedback on this design.
We conducted in-depth user testing and feedback sessions with 15 Cyberball stakeholders.
arXiv Detail & Related papers (2023-12-05T17:09:59Z)
- What do we learn from a large-scale study of pre-trained visual representations in sim and real environments? [48.75469525877328]
We present a large empirical investigation on the use of pre-trained visual representations (PVRs) for training downstream policies that execute real-world tasks.
We arrive at three insights, including: 1) the performance trends of PVRs in simulation are generally indicative of their trends in the real world, and 2) the use of PVRs enables a first-of-its-kind result on indoor ImageNav.
arXiv Detail & Related papers (2023-10-03T17:27:10Z)
- SAPIEN: Affective Virtual Agents Powered by Large Language Models [2.423280064224919]
We introduce SAPIEN, a platform for high-fidelity virtual agents driven by large language models.
The platform allows users to customize their virtual agent's personality, background, and conversation premise.
After the virtual meeting, the user can choose to get the conversation analyzed and receive actionable feedback on their communication skills.
arXiv Detail & Related papers (2023-08-06T05:13:16Z)
- Towards Ubiquitous Semantic Metaverse: Challenges, Approaches, and Opportunities [68.03971716740823]
In recent years, ubiquitous semantic Metaverse has been studied to revolutionize immersive cyber-virtual experiences for augmented reality (AR) and virtual reality (VR) users.
This survey focuses on the representation and intelligence for the four fundamental system components in ubiquitous Metaverse.
arXiv Detail & Related papers (2023-07-13T11:14:46Z)
- Virtual Guidance as a Mid-level Representation for Navigation [8.712750753534532]
"Virtual Guidance" is designed to visually represent non-visual instructional signals.
We evaluate our proposed method through experiments in both simulated and real-world settings.
arXiv Detail & Related papers (2023-03-05T17:55:15Z)
- Unique Identification of 50,000+ Virtual Reality Users from Head & Hand Motion Data [58.27542320038834]
We show that a large number of real VR users can be uniquely and reliably identified across multiple sessions using just their head and hand motion.
After training a classification model on 5 minutes of data per person, a user can be uniquely identified amongst the entire pool of 50,000+ with 94.33% accuracy from 100 seconds of motion.
This work is the first to truly demonstrate the extent to which biomechanics may serve as a unique identifier in VR, on par with widely used biometrics such as facial or fingerprint recognition.
arXiv Detail & Related papers (2023-02-17T15:05:18Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- Automatic Recommendation of Strategies for Minimizing Discomfort in Virtual Environments [58.720142291102135]
In this work, we first present a detailed review of the possible causes of Cybersickness (CS).
Our system is able to indicate whether the user may be entering a sickness state in the next moments of the application.
The CSPQ (Cybersickness Profile Questionnaire) is also proposed, which is used to identify the player's susceptibility to CS.
arXiv Detail & Related papers (2020-06-27T19:28:48Z)
- DeFINE: Delayed Feedback based Immersive Navigation Environment for Studying Goal-Directed Human Navigation [10.7197371210731]
Delayed Feedback based Immersive Navigation Environment (DeFINE) is a framework that allows for easy creation and administration of navigation tasks.
DeFINE has a built-in capability to provide performance feedback to participants during an experiment.
arXiv Detail & Related papers (2020-03-06T11:00:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.