Sight, Sound and Smell in Immersive Experiences of Urban History: Virtual Vauxhall Gardens Case Study
- URL: http://arxiv.org/abs/2505.13612v1
- Date: Mon, 19 May 2025 18:00:42 GMT
- Title: Sight, Sound and Smell in Immersive Experiences of Urban History: Virtual Vauxhall Gardens Case Study
- Authors: Tim Pearce, David Souto, Douglas Barrett, Benjamin Lok, Mateusz Bocian, Artur Soczawa-Stronczyk, Giasemi Vavoula, Paul Long, Avinash Bhangaonkar, Stephanie Bowry, Michaela Butter, David Coke, Kate Loveman, Rosemary Sweet, Lars Tharp, Jeremy Webster, Hongji Yang, Robin Green, Andrew Hugill
- Abstract summary: This research investigates how multisensory experiences involving olfaction can be effectively integrated into VR reconstructions of historical spaces. In the context of a VR reconstruction of London's eighteenth-century Vauxhall Pleasure Gardens, we developed a networked portable olfactory display. Our results show that integrating synchronized olfactory stimuli into the VR experience can enhance user engagement and be perceived positively.
- Score: 2.0897860130200443
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We explore the integration of multisensory elements in virtual reality reconstructions of historical spaces through a case study of the Virtual Vauxhall Gardens project. While visual and auditory components have become standard in digital heritage experiences, the addition of olfactory stimuli remains underexplored, despite its powerful connection to memory and emotional engagement. This research investigates how multisensory experiences involving olfaction can be effectively integrated into VR reconstructions of historical spaces to enhance presence and engagement with cultural heritage. In the context of a VR reconstruction of London's eighteenth-century Vauxhall Pleasure Gardens, we developed a networked portable olfactory display capable of synchronizing specific scents with visual and auditory elements at pivotal moments in the virtual experience. Our evaluation methodology assesses both technical implementation and user experience, measuring presence and usability metrics across diverse participant groups. Our results show that integrating synchronized olfactory stimuli into the VR experience can enhance user engagement and be perceived positively, contributing to a unique and immersive encounter with historical settings. While presence questionnaires indicated a strong sense of auditory presence and control, with other sensory factors rated moderately, user-experience ratings of attractiveness were exceptionally high; qualitative feedback suggested heightened sensory awareness and engagement influenced by the inclusion and anticipation of smell. Our results suggest that evaluating multisensory VR heritage experiences requires a nuanced approach, as standard usability metrics may be ill-suited and 'realism' might be less critical than creating an evocative, historically informed, and emotionally resonant experience...
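The abstract does not detail how scent release is synchronized with the visual and auditory timeline, but the general pattern for a networked olfactory display is simple: when the VR experience reaches a pivotal moment, the application sends a small network message telling the portable display which scent to release and for how long. The sketch below illustrates that pattern only; the scent names, message fields, address, and port are hypothetical and not taken from the paper.

```python
import json
import socket
import time

# Hypothetical scent cues for illustration; the paper does not publish its cue list.
SCENT_CUES = {
    "supper_boxes": "punch_and_roast_fowl",
    "grove_walk": "night_blooming_flowers",
}

# Assumed address of the portable olfactory display on the local network.
OLFACTORY_DISPLAY_ADDR = ("192.168.0.42", 9000)


def send_scent_cue(scene_event: str, duration_s: float = 5.0) -> None:
    """Send a scent-release command over UDP when the VR timeline hits a cued event."""
    scent = SCENT_CUES.get(scene_event)
    if scent is None:
        return  # no scent is associated with this moment
    message = json.dumps({
        "scent": scent,
        "duration_s": duration_s,
        "timestamp": time.time(),  # lets the display log latency against the cue
    }).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(message, OLFACTORY_DISPLAY_ADDR)


# Example: the VR engine would call this as the visitor enters the supper-box scene.
send_scent_cue("supper_boxes")
```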
Related papers
- Imagine, Verify, Execute: Memory-Guided Agentic Exploration with Vision-Language Models [60.675955082094944]
We present IVE, an agentic exploration framework inspired by human curiosity. We evaluate IVE in both simulated and real-world tabletop environments.
arXiv Detail & Related papers (2025-05-12T17:59:11Z) - SkillMimic-V2: Learning Robust and Generalizable Interaction Skills from Sparse and Noisy Demonstrations [68.9300049150948]
We address a fundamental challenge in Reinforcement Learning from Interaction Demonstration (RLID). Existing data collection approaches yield sparse, disconnected, and noisy trajectories that fail to capture the full spectrum of possible skill variations and transitions. We present two data augmentation techniques: a Stitched Trajectory Graph (STG) that discovers potential transitions between demonstration skills, and a State Transition Field (STF) that establishes unique connections for arbitrary states within the demonstration neighborhood.
arXiv Detail & Related papers (2025-05-04T13:00:29Z) - Exploring Context-aware and LLM-driven Locomotion for Immersive Virtual Reality [8.469329222500726]
We propose a novel locomotion technique powered by large language models (LLMs). We evaluate three locomotion methods: controller-based teleportation, voice-based steering, and our language model-driven approach. Our findings indicate that LLM-driven locomotion achieves usability, presence, and cybersickness scores comparable to established methods.
arXiv Detail & Related papers (2025-04-24T07:48:09Z) - ESVQA: Perceptual Quality Assessment of Egocentric Spatial Videos [71.62145804686062]
We introduce the first Egocentric Spatial Video Quality Assessment Database (ESVQAD), which comprises 600 egocentric spatial videos and their mean opinion scores (MOSs). We propose a novel multi-dimensional binocular feature fusion model, termed ESVQAnet, which integrates binocular spatial, motion, and semantic features to predict the perceptual quality. Experimental results demonstrate that ESVQAnet outperforms 16 state-of-the-art VQA models on the embodied perceptual quality assessment task.
arXiv Detail & Related papers (2024-12-29T10:13:30Z) - Exploring Eye Tracking to Detect Cognitive Load in Complex Virtual Reality Training [11.83314968015781]
We present an ongoing study to detect users' cognitive load using an eye-tracking-based machine learning approach.
We developed a VR training system for cold spray and tested it with 22 participants.
Preliminary analysis demonstrates the feasibility of using eye-tracking to detect cognitive load in complex VR experiences.
arXiv Detail & Related papers (2024-11-18T16:44:19Z) - Tremor Reduction for Accessible Ray Based Interaction in VR Applications [0.0]
Many traditional 2D interface interaction methods have been directly converted to work in a VR space with little alteration to the input mechanism.
In this paper we propose the use of a low-pass filter to normalize user input noise, alleviating fine motor requirements during ray-based interaction (a generic smoothing sketch is given after this list).
arXiv Detail & Related papers (2024-05-12T17:07:16Z) - Thelxinoë: Recognizing Human Emotions Using Pupillometry and Machine Learning [0.0]
This research contributes significantly to the Thelxinoë framework, aiming to enhance VR experiences by integrating data from multiple sensors for realistic and emotionally resonant touch interactions.
Our findings open new avenues for developing more immersive and interactive VR environments, paving the way for future advancements in virtual touch technology.
arXiv Detail & Related papers (2024-03-27T21:14:17Z) - Learning beyond sensations: how dreams organize neuronal representations [1.749248408967819]
We discuss two complementary learning principles that organize representations through the generation of virtual experiences.
These principles are compatible with known cortical structure and dynamics and the phenomenology of sleep.
arXiv Detail & Related papers (2023-08-03T15:45:12Z) - Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z) - Learning Effect of Lay People in Gesture-Based Locomotion in Virtual Reality [81.5101473684021]
Some of the most promising methods are gesture-based and do not require additional handheld hardware.
Recent work focused mostly on user preference and performance of the different locomotion techniques.
This work investigates whether and how quickly users can adapt to a hand gesture-based locomotion system in VR.
arXiv Detail & Related papers (2022-06-16T10:44:16Z) - Learning Perceptual Locomotion on Uneven Terrains using Sparse Visual Observations [75.60524561611008]
This work aims to exploit the use of sparse visual observations to achieve perceptual locomotion over a range of commonly seen bumps, ramps, and stairs in human-centred environments.
We first formulate the selection of minimal visual input that can represent the uneven surfaces of interest, and propose a learning framework that integrates such exteroceptive and proprioceptive data.
We validate the learned policy in tasks that require omnidirectional walking over flat ground and forward locomotion over terrains with obstacles, showing a high success rate.
arXiv Detail & Related papers (2021-09-28T20:25:10Z)
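The tremor-reduction entry above proposes low-pass filtering of user input for ray-based interaction; as referenced there, a minimal sketch of the standard approach is an exponential moving average applied per frame to the controller's ray direction. The `Vec3` type, the smoothing factor, and the class name below are illustrative assumptions, not that paper's implementation.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Vec3:
    x: float
    y: float
    z: float


class LowPassRayFilter:
    """Exponential moving average over the ray direction; smaller alpha smooths more."""

    def __init__(self, alpha: float = 0.2):
        self.alpha = alpha
        self._state: Optional[Vec3] = None

    def update(self, raw: Vec3) -> Vec3:
        # The first sample initializes the filter state.
        if self._state is None:
            self._state = raw
            return raw
        a = self.alpha
        self._state = Vec3(
            a * raw.x + (1 - a) * self._state.x,
            a * raw.y + (1 - a) * self._state.y,
            a * raw.z + (1 - a) * self._state.z,
        )
        # In practice the result would be renormalized to unit length before casting the ray.
        return self._state


# Per-frame usage: feed the raw controller direction, aim the ray with the filtered one.
ray_filter = LowPassRayFilter(alpha=0.15)
smoothed_direction = ray_filter.update(Vec3(0.01, 0.98, 0.20))
```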