Human Mobility in the Metaverse
- URL: http://arxiv.org/abs/2404.03071v1
- Date: Wed, 3 Apr 2024 21:26:40 GMT
- Title: Human Mobility in the Metaverse
- Authors: Kishore Vasan, Marton Karsai, Albert-Laszlo Barabasi
- Abstract summary: We find that despite the absence of commuting costs, an individual's inclination to explore new locations diminishes over time.
We also find a lack of correlation between land prices and visitation, a deviation from the patterns characterizing the physical world.
Our ability to predict the characteristics of the emerging meta mobility network implies that the laws governing human mobility are rooted in fundamental patterns of human dynamics.
- Score: 0.03072340427031969
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The metaverse promises a shift in the way humans interact with each other, and with their digital and physical environments. The lack of geographical boundaries and travel costs in the metaverse prompts us to ask if the fundamental laws that govern human mobility in the physical world apply. We collected data on avatar movements, along with their network mobility extracted from NFT purchases. We find that despite the absence of commuting costs, an individual's inclination to explore new locations diminishes over time, limiting movement to a small fraction of the metaverse. We also find a lack of correlation between land prices and visitation, a deviation from the patterns characterizing the physical world. Finally, we identify the scaling laws that characterize meta mobility and show that we need to add preferential selection to the existing models to explain quantitative patterns of metaverse mobility. Our ability to predict the characteristics of the emerging meta mobility network implies that the laws governing human mobility are rooted in fundamental patterns of human dynamics, rather than the nature of space and cost of movement.
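The abstract reports that metaverse mobility is only reproduced once preferential selection is added to existing exploration models. As a rough illustration only (not the authors' actual model or parameters), the sketch below simulates a generic exploration/preferential-return walker; the function name simulate_epr and the values of rho and gamma are hypothetical choices for the example.

```python
import random
from collections import Counter

def simulate_epr(steps=10_000, rho=0.6, gamma=0.21, seed=0):
    """Minimal exploration / preferential-return walker (illustrative sketch).

    At each step the walker explores a brand-new location with probability
    rho * S**(-gamma), where S is the number of distinct locations visited
    so far; otherwise it returns to a known location chosen proportionally
    to its past visit count (preferential selection).
    """
    rng = random.Random(seed)
    visits = Counter({0: 1})   # location id -> visit count; start at location 0
    next_location = 1          # id assigned to the next newly explored location
    distinct_over_time = [1]

    for _ in range(steps):
        s = len(visits)
        if rng.random() < rho * s ** (-gamma):
            # exploration: jump to a never-before-seen location
            visits[next_location] += 1
            next_location += 1
        else:
            # preferential return: pick a known location weighted by past visits
            locations, counts = zip(*visits.items())
            choice = rng.choices(locations, weights=counts, k=1)[0]
            visits[choice] += 1
        distinct_over_time.append(len(visits))
    return distinct_over_time

if __name__ == "__main__":
    s_t = simulate_epr()
    # S(t) grows sublinearly: exploration slows as more locations are known.
    for t in (100, 1_000, 10_000):
        print(t, s_t[t])
```

In this toy setup a larger gamma slows the discovery of new locations, mimicking the diminishing inclination to explore described in the abstract, even though no movement cost is modeled.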
Related papers
- HUMOS: Human Motion Model Conditioned on Body Shape [54.20419874234214]
We introduce a new approach to develop a generative motion model based on body shape.
We show that it's possible to train this model using unpaired data.
The resulting model generates diverse, physically plausible, and dynamically stable human motions.
arXiv Detail & Related papers (2024-09-05T23:50:57Z) - Be More Real: Travel Diary Generation Using LLM Agents and Individual Profiles [21.72229002939936]
This study presents an agent-based framework (MobAgent) to generate realistic trajectories that conform to real-world contexts.
We validate the framework on 0.2 million travel survey records, demonstrating its effectiveness in producing personalized and accurate travel diaries.
This study highlights the capacity of LLMs to provide a detailed and sophisticated understanding of human mobility through real-world mobility data.
arXiv Detail & Related papers (2024-07-10T09:11:57Z) - Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z) - Priority-Centric Human Motion Generation in Discrete Latent Space [59.401128190423535]
We introduce a Priority-Centric Motion Discrete Diffusion Model (M2DM) for text-to-motion generation.
M2DM incorporates a global self-attention mechanism and a regularization term to counteract code collapse.
We also present a motion discrete diffusion model that employs an innovative noise schedule, determined by the significance of each motion token.
arXiv Detail & Related papers (2023-08-28T10:40:16Z) - Spatiotemporal-Augmented Graph Neural Networks for Human Mobility Simulation [35.89805766554052]
We propose a novel framework to model the dynamic spatiotemporal effects of locations, namely SpatioTemporal-Augmented gRaph neural networks (STAR).
The STAR framework designs various spatiotemporal graphs to capture behavior correspondences and builds a novel branch to simulate the varying dwell durations at locations, which are finally optimized in an adversarial manner.
arXiv Detail & Related papers (2023-06-15T11:47:45Z) - CrowdWeb: A Visualization Tool for Mobility Patterns in Smart Cities [0.39373541926236766]
The accuracy of current mobility prediction models is less than 25%.
We propose a web platform to visualize human mobility patterns.
We extend the platform to visualize the mobility of multiple users from a city-scale perspective.
arXiv Detail & Related papers (2023-05-22T11:30:00Z) - Human MotionFormer: Transferring Human Motions with Vision Transformers [73.48118882676276]
Human motion transfer aims to transfer motions from a target dynamic person to a source static one for motion synthesis.
We propose Human MotionFormer, a hierarchical ViT framework that leverages global and local perceptions to capture large and subtle motion matching.
Experiments show that our Human MotionFormer sets the new state-of-the-art performance both qualitatively and quantitatively.
arXiv Detail & Related papers (2023-02-22T11:42:44Z) - IMAP: Individual huMAn mobility Patterns visualizing platform [0.39373541926236766]
Existing models' accuracy in predicting users' mobility patterns is less than 25%.
We propose a novel perspective to study and analyze human mobility patterns and capture their flexibility.
Our platform enables users to visualize a graph of the places they visited based on their history records.
arXiv Detail & Related papers (2022-09-08T07:43:54Z) - GIMO: Gaze-Informed Human Motion Prediction in Context [75.52839760700833]
We propose a large-scale human motion dataset that delivers high-quality body pose sequences, scene scans, and ego-centric views with eye gaze.
Our data collection is not tied to specific scenes, which further boosts the motion dynamics observed from our subjects.
To realize the full potential of gaze, we propose a novel network architecture that enables bidirectional communication between the gaze and motion branches.
arXiv Detail & Related papers (2022-04-20T13:17:39Z) - Scene-aware Generative Network for Human Motion Synthesis [125.21079898942347]
We propose a new framework, with the interaction between the scene and the human motion taken into account.
Considering the uncertainty of human motion, we formulate this task as a generative task.
We derive a GAN-based learning approach, with discriminators enforcing compatibility between the human motion and the contextual scene.
arXiv Detail & Related papers (2021-05-31T09:05:50Z) - Flow descriptors of human mobility networks [0.0]
We propose a systematic analysis to characterize mobility network flows and topology, and to assess their impact on individual traces.
This framework is suitable to assess urban planning, optimize transportation, measure the impact of external events and conditions, monitor internal dynamics and profile users according to their movement patterns.
arXiv Detail & Related papers (2020-03-16T15:27:00Z)