Towards Immersive Mixed Reality Street Play: Understanding Co-located Bodily Play with See-through Head-mounted Displays in Public Spaces
- URL: http://arxiv.org/abs/2505.12516v3
- Date: Sat, 01 Nov 2025 12:22:31 GMT
- Title: Towards Immersive Mixed Reality Street Play: Understanding Co-located Bodily Play with See-through Head-mounted Displays in Public Spaces
- Authors: Botao Amber Hu, Rem Rungu Lin, Yilan Elan Tao, Samuli Laato, Yue Li,
- Abstract summary: We study the social implications, challenges, opportunities, and design recommendations of Immersive Mixed Reality Street Play (IMRSP). Our research-through-design game probe, Multiplayer Omnipresent Fighting Arena (MOFA), is deployed across diverse public venues.
- Score: 8.86575491701016
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As see-through Mixed Reality Head-Mounted Displays (MRHMDs) proliferate, their usage is gradually shifting from controlled, private settings to spontaneous, public contexts. While location-based augmented reality mobile games such as Pokémon GO have been successful, the embodied interaction afforded by MRHMDs moves play beyond phone-based screen-tapping toward co-located, bodily, movement-based play. In anticipation of widespread MRHMD adoption, major technology companies have teased concept videos envisioning urban streets as vast mixed reality playgrounds (imagine Harry Potter-style wizard duels in city streets), a scenario we term Immersive Mixed Reality Street Play (IMRSP). However, few real-world studies examine such scenarios. Through empirical, in-the-wild studies of our research-through-design game probe, Multiplayer Omnipresent Fighting Arena (MOFA), deployed across diverse public venues, we offer initial insights into the social implications, challenges, opportunities, and design recommendations of IMRSP. The MOFA framework, which includes three gameplay modes ("The Training," "The Duel," and "The Dragon"), is open-sourced at https://github.com/realitydeslab/mofa.
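To make the co-located aspect concrete: games of this kind typically express every player's pose relative to a shared spatial anchor, so that all headsets agree on where virtual content sits in the physical space. The sketch below illustrates that general pattern with hypothetical names; the actual MOFA implementation lives in the linked repository.

```python
# Hypothetical sketch: expressing each player's head pose in a shared
# spatial-anchor frame so co-located players see virtual content in the
# same physical spot. Not the actual MOFA implementation.
import numpy as np

def to_anchor_frame(T_world_anchor: np.ndarray, T_world_device: np.ndarray) -> np.ndarray:
    """Device pose re-expressed relative to the shared anchor (4x4 matrices)."""
    return np.linalg.inv(T_world_anchor) @ T_world_device

# Each headset tracks in its own world frame; a shared anchor (e.g. a
# marker both devices can see) gives a common reference.
T_world_anchor = np.eye(4)             # anchor pose in this device's world frame
T_world_head = np.eye(4)
T_world_head[:3, 3] = [1.0, 1.6, 0.0]  # head 1 m to the side, 1.6 m up

# This anchor-relative pose is what would be broadcast to peers each frame.
T_anchor_head = to_anchor_frame(T_world_anchor, T_world_head)
print(T_anchor_head[:3, 3])            # -> [1.  1.6 0. ]
```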
Related papers
- EmbodMocap: In-the-Wild 4D Human-Scene Reconstruction for Embodied Agents [85.77432303199176]
We propose EmbodMocap, a portable and affordable data collection pipeline using two moving iPhones. Our key idea is to jointly calibrate dual RGB-D sequences to reconstruct both humans and scenes. Based on the collected data, we empower three embodied AI tasks: monocular human-scene reconstruction, where we fine-tune feedforward models that output metric-scale, world-space-aligned humans and scenes; physics-based character animation, where we show our data can be used to scale human-object interaction skills and scene-aware motion tracking; and robot motion control, where we train a humanoid robot.
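As a toy illustration of the calibration idea (not the paper's actual pipeline), the rigid transform between two RGB-D cameras can be recovered from corresponding 3D points with the Kabsch algorithm:

```python
# Toy version of the joint-calibration idea: recover the rigid transform
# aligning 3D points observed by both RGB-D cameras (Kabsch algorithm).
# A sketch only; the paper's calibration is more involved.
import numpy as np

def kabsch(P: np.ndarray, Q: np.ndarray):
    """Find R, t minimizing ||R @ P_i + t - Q_i||^2 over corresponding points."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cQ - R @ cP

rng = np.random.default_rng(0)
P = rng.normal(size=(50, 3))                  # points in camera A's frame
a = np.pi / 6
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0,          0,         1]])
Q = P @ R_true.T + np.array([0.5, 0.0, 1.0])  # same points in camera B's frame
R, t = kabsch(P, Q)
assert np.allclose(R, R_true, atol=1e-8)
```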
arXiv Detail & Related papers (2026-02-26T16:53:41Z)
- Coarse-to-Real: Generative Rendering for Populated Dynamic Scenes [22.450051108066216]
We present C2R (Coarse-to-Real), a generative framework that synthesizes real-style urban crowd videos. Our approach uses coarse 3D renderings to explicitly control scene layout, camera motion, and human trajectories. It produces temporally consistent, controllable, and realistic urban scene videos from minimal 3D input.
arXiv Detail & Related papers (2026-01-29T20:29:04Z)
- UrbanVerse: Scaling Urban Simulation by Watching City-Tour Videos [64.22243628420799]
We introduce UrbanVerse, a data-driven real-to-sim system that converts crowd-sourced city-tour videos into physics-aware, interactive simulation scenes. Running in IsaacSim, UrbanVerse offers 160 high-quality constructed scenes from 24 countries, along with a curated benchmark of 10 artist-designed test scenes. Experiments show that UrbanVerse scenes preserve real-world semantics and layouts, achieving human-evaluated realism comparable to manually crafted scenes.
arXiv Detail & Related papers (2025-10-16T17:42:34Z)
- A Comprehensive Review of Multi-Agent Reinforcement Learning in Video Games [9.115787425836576]
Recent advancements in multi-agent reinforcement learning (MARL) have demonstrated its application potential in modern games. MARL has proven capable of achieving superhuman performance across diverse game environments through techniques like self-play, supervised learning, and deep reinforcement learning. This paper offers insights into MARL in video game AI systems, proposes a novel method to estimate game complexity, and suggests future research directions to advance MARL and its applications in game development.
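The self-play technique mentioned above reduces to a simple pattern: the learner repeatedly plays a frozen snapshot of itself and periodically refreshes that snapshot. A generic sketch, not tied to any system in the review:

```python
# Generic self-play loop: the learner trains against a periodically
# updated frozen copy of itself. A sketch of the technique only.
import copy
import random

class Agent:
    def __init__(self):
        self.skill = 0.0
    def act(self, obs):
        return random.random() + self.skill   # stand-in for a real policy

def play_match(a: Agent, b: Agent) -> int:
    """Returns +1 if a wins, -1 otherwise (toy zero-sum game)."""
    return 1 if a.act(None) > b.act(None) else -1

learner = Agent()
opponent = copy.deepcopy(learner)             # frozen snapshot
for step in range(1, 1001):
    result = play_match(learner, opponent)
    learner.skill += 0.01 * result            # stand-in for a policy update
    if step % 100 == 0:
        opponent = copy.deepcopy(learner)     # refresh the opponent
```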
arXiv Detail & Related papers (2025-09-03T20:05:58Z)
- The Matrix: Infinite-Horizon World Generation with Real-Time Moving Control [16.075784652681172]
We present The Matrix, the first foundational realistic world simulator capable of generating continuous 720p real-scene video streams. The Matrix allows users to traverse diverse terrains in continuous, uncut hour-long sequences. The Matrix can simulate a BMW X3 driving through an office setting, an environment present in neither gaming data nor real-world sources.
arXiv Detail & Related papers (2024-12-04T18:59:05Z)
- GRUtopia: Dream General Robots in a City at Scale [65.08318324604116]
This paper introduces project GRUtopia, the first simulated interactive 3D society designed for various robots.
GRScenes includes 100k interactive, finely annotated scenes, which can be freely combined into city-scale environments.
GRResidents is a Large Language Model (LLM)-driven Non-Player Character (NPC) system responsible for social interaction.
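The pattern behind such an NPC system can be sketched as a prompt-assembly loop around a chat model; `call_llm` below is a hypothetical stand-in, not GRUtopia's actual interface:

```python
# Hypothetical sketch of an LLM-driven NPC loop: assemble the NPC's
# persona, memory, and the player's utterance into one prompt per turn.
# `call_llm` is a stand-in, not GRUtopia's actual API.
from typing import List

def call_llm(prompt: str) -> str:
    return "(model reply)"   # replace with a real chat-model call

class NPC:
    def __init__(self, persona: str):
        self.persona = persona
        self.memory: List[str] = []   # running conversation history

    def respond(self, player_utterance: str) -> str:
        prompt = (f"You are {self.persona}.\n"
                  "Conversation so far:\n" + "\n".join(self.memory) +
                  f"\nPlayer: {player_utterance}\nNPC:")
        reply = call_llm(prompt)
        self.memory.append(f"Player: {player_utterance}")
        self.memory.append(f"NPC: {reply}")
        return reply

npc = NPC("a friendly shopkeeper in a simulated city")
print(npc.respond("Where is the train station?"))
```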
arXiv Detail & Related papers (2024-07-15T17:40:46Z)
- MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility [52.0930915607703]
Recent advances in Robotics and Embodied AI mean that public urban spaces are no longer exclusive to humans.
Micromobility enabled by AI for short-distance travel in public urban spaces is a crucial component of the future transportation system.
We present MetaUrban, a compositional simulation platform for AI-driven urban micromobility research.
arXiv Detail & Related papers (2024-07-11T17:56:49Z)
- HMD-NeMo: Online 3D Avatar Motion Generation From Sparse Observations [7.096701481970196]
Head-Mounted Devices (HMDs) typically provide only a few input signals, such as the 6-DoF poses of the head and hands.
We propose HMD-NeMo, the first unified approach that addresses plausible and accurate full-body motion generation even when the hands are only partially visible.
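The underlying task can be framed as sequence regression from sparse head-and-hand signals to full-body joint rotations. A minimal sketch of that setup, assuming nothing about HMD-NeMo's actual architecture:

```python
# Minimal sketch of the problem setup: regress full-body pose from the
# sparse head/hand signals an HMD provides. Not HMD-NeMo's architecture.
import torch
import torch.nn as nn

N_JOINTS = 22                    # e.g. an SMPL-style body
SPARSE_DIM = 3 * 9               # head + 2 hands, each 9D (position + 6D rotation)

class SparseToBody(nn.Module):
    def __init__(self, hidden: int = 256):
        super().__init__()
        self.rnn = nn.GRU(SPARSE_DIM, hidden, batch_first=True)
        self.head = nn.Linear(hidden, N_JOINTS * 6)  # 6D rotation per joint

    def forward(self, sparse_seq):                   # (B, T, SPARSE_DIM)
        h, _ = self.rnn(sparse_seq)
        return self.head(h)                          # (B, T, N_JOINTS * 6)

model = SparseToBody()
out = model(torch.randn(2, 30, SPARSE_DIM))          # 30-frame window
print(out.shape)                                     # torch.Size([2, 30, 132])
```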
arXiv Detail & Related papers (2023-08-22T08:07:12Z)
- Sonicverse: A Multisensory Simulation Platform for Embodied Household Agents that See and Hear [65.33183123368804]
Sonicverse is a multisensory simulation platform with integrated audio-visual simulation.
It enables embodied AI tasks that need audio-visual perception.
An agent trained in Sonicverse can successfully perform audio-visual navigation in real-world environments.
arXiv Detail & Related papers (2023-06-01T17:24:01Z)
- Virtual Reality in Metaverse over Wireless Networks with User-centered Deep Reinforcement Learning [8.513938423514636]
We introduce a multi-user VR computation-offloading scenario over wireless communication. In addition, we devise a novel user-centered deep reinforcement learning approach to find a near-optimal solution.
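The core offloading decision can be illustrated with a toy tabular Q-learner that chooses between local rendering and edge offloading based on channel quality; this shows the general pattern only, not the paper's method:

```python
# Toy sketch of the offloading decision: a tabular Q-learner picks
# between local rendering and edge offloading per channel-quality bin.
import random

ACTIONS = ["local", "offload"]
Q = {(s, a): 0.0 for s in range(4) for a in ACTIONS}  # 4 channel bins
alpha, gamma, eps = 0.1, 0.9, 0.1

def latency_ms(state: int, action: str) -> float:
    # Offloading is fast on a good channel, slow on a bad one;
    # local compute has constant (moderate) latency.
    return 30.0 if action == "local" else 10.0 + 15.0 * (3 - state)

state = random.randrange(4)
for _ in range(5000):
    a = random.choice(ACTIONS) if random.random() < eps else \
        max(ACTIONS, key=lambda x: Q[(state, x)])
    reward = -latency_ms(state, a)                    # minimize latency
    nxt = random.randrange(4)                         # channel evolves
    Q[(state, a)] += alpha * (reward + gamma * max(Q[(nxt, b)] for b in ACTIONS)
                              - Q[(state, a)])
    state = nxt
```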
arXiv Detail & Related papers (2023-03-08T03:10:41Z)
- Towards a Pipeline for Real-Time Visualization of Faces for VR-based Telepresence and Live Broadcasting Utilizing Neural Rendering [58.720142291102135]
Head-mounted displays (HMDs) for Virtual Reality pose a considerable obstacle to realistic face-to-face conversation in VR.
We present an approach that focuses on low-cost hardware and can be used on a commodity gaming computer with a single GPU.
arXiv Detail & Related papers (2023-01-04T08:49:51Z)
- Sim-to-Real via Sim-to-Seg: End-to-end Off-road Autonomous Driving Without Real Data [56.49494318285391]
We present Sim2Seg, a re-imagining of RCAN that crosses the visual reality gap for off-road autonomous driving.
This is done by learning to translate randomized simulation images into simulated segmentation and depth maps.
This allows us to train an end-to-end RL policy in simulation and directly deploy it in the real world.
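The transfer trick is that the policy never sees RGB directly; it consumes segmentation maps, a representation shared by simulation and reality. A toy sketch of that interface (not the paper's RCAN-style architecture):

```python
# Sketch of the Sim2Seg idea: the policy consumes segmentation maps, so a
# sim-trained policy can transfer. Toy networks only.
import torch
import torch.nn as nn

rgb_to_seg = nn.Sequential(                 # trained on randomized sim images
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 8, 1),                    # 8 semantic classes
)
policy = nn.Sequential(                     # trained with RL in simulation
    nn.Conv2d(8, 16, 3, stride=2), nn.ReLU(),
    nn.Flatten(), nn.LazyLinear(2),         # e.g. steering + throttle
)

rgb = torch.randn(1, 3, 64, 64)             # sim image (or real, at test time)
seg = rgb_to_seg(rgb).softmax(dim=1)        # per-pixel class probabilities
action = policy(seg)
print(action.shape)                         # torch.Size([1, 2])
```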
arXiv Detail & Related papers (2022-10-25T17:50:36Z)
- Augment Yourself: Mixed Reality Self-Augmentation Using Optical See-through Head-mounted Displays and Physical Mirrors [49.49841698372575]
Optical see-through head-mounted displays (OST HMDs) are one of the key technologies for merging virtual objects and physical scenes to provide an immersive mixed reality (MR) environment to the user.
We propose a novel concept and prototype system that combines OST HMDs and physical mirrors to enable self-augmentation and provide an immersive MR environment centered around the user.
Our system, to the best of our knowledge the first of its kind, estimates the user's pose in the virtual image generated by the mirror using an RGBD camera attached to the HMD and anchors virtual objects to the reflection.
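The geometric core of anchoring content to a mirror reflection is a reflection across the mirror plane (a Householder reflection); a standalone sketch of that math, separate from the paper's pose-estimation pipeline:

```python
# Reflecting a 3D point across the mirror plane: the geometric core of
# mirror-based self-augmentation. A math sketch, not the full pipeline.
import numpy as np

def reflect(point: np.ndarray, plane_normal: np.ndarray, plane_point: np.ndarray) -> np.ndarray:
    """Mirror `point` across the plane through `plane_point` with the given normal."""
    n = plane_normal / np.linalg.norm(plane_normal)
    d = np.dot(point - plane_point, n)   # signed distance to the plane
    return point - 2.0 * d * n

# A mirror in the x=0 plane: a point 0.5 m in front maps 0.5 m "behind" it.
p = np.array([0.5, 1.6, 0.0])
print(reflect(p, np.array([1.0, 0.0, 0.0]), np.zeros(3)))  # -> [-0.5  1.6  0. ]
```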
arXiv Detail & Related papers (2020-07-06T16:53:47Z)
- Visual Navigation Among Humans with Optimal Control as a Supervisor [72.5188978268463]
We propose an approach that combines learning-based perception with model-based optimal control to navigate among humans.
Our approach is enabled by our novel data-generation tool, HumANav.
We demonstrate that the learned navigation policies can anticipate and react to humans without explicitly predicting future human motion.
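"Optimal control as a supervisor" is essentially behavior cloning from a model-based expert. A generic sketch of that supervision loop, with a toy stand-in for the controller:

```python
# Sketch of optimal control as a supervisor: behavior-clone a policy
# from actions produced by a model-based expert. Generic pattern only.
import torch
import torch.nn as nn

def expert_action(obs: torch.Tensor) -> torch.Tensor:
    """Stand-in for a model-based optimal controller (e.g. an MPC solve)."""
    return -obs[..., :2]                      # toy rule: steer toward the goal

policy = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for _ in range(200):                          # supervised imitation loop
    obs = torch.randn(32, 8)                  # batch of simulated observations
    loss = nn.functional.mse_loss(policy(obs), expert_action(obs))
    opt.zero_grad()
    loss.backward()
    opt.step()
```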
arXiv Detail & Related papers (2020-03-20T16:13:47Z)
- Neural MMO v1.3: A Massively Multiagent Game Environment for Training and Evaluating Neural Networks [48.5733173329785]
We present Neural MMO, a massively multiagent game environment inspired by MMOs.
We discuss our progress on two more general challenges in multiagent systems engineering for AI research: distributed infrastructure and game IO.
arXiv Detail & Related papers (2020-01-31T18:50:02Z)