JiuTian Chuanliu: A Large Spatiotemporal Model for General-purpose Dynamic Urban Sensing
- URL: http://arxiv.org/abs/2510.23662v1
- Date: Sun, 26 Oct 2025 10:04:28 GMT
- Title: JiuTian Chuanliu: A Large Spatiotemporal Model for General-purpose Dynamic Urban Sensing
- Authors: Liangzhe Han, Leilei Sun, Tongyu Zhu, Tao Tao, Jibin Wang, Weifeng Lv
- Abstract summary: We introduce a framework named General-purpose and Dynamic Human Mobility Embedding (GDHME) for urban sensing. In stage 1, GDHME treats people and regions as nodes within a dynamic graph, unifying human mobility data as people-region-time interactions. An autoregressive self-supervised task is specially designed to guide the learning of the general-purpose node embeddings.
- Score: 31.475610263075904
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: As a window for urban sensing, human mobility contains rich spatiotemporal information that reflects both residents' behavior preferences and the functions of urban areas. The analysis of human mobility has attracted the attention of many researchers. However, existing methods often address specific tasks from a particular perspective, leading to insufficient modeling of human mobility and limited applicability of the learned knowledge in various downstream applications. To address these challenges, this paper proposes to push massive amounts of human mobility data into a spatiotemporal model, discover the latent semantics behind mobility behavior, and support various urban sensing tasks. Specifically, a large-scale, wide-coverage human mobility dataset is collected through the ubiquitous base station system, and a framework named General-purpose and Dynamic Human Mobility Embedding (GDHME) for urban sensing is introduced. The framework follows the self-supervised learning idea and contains two major stages. In stage 1, GDHME treats people and regions as nodes within a dynamic graph, unifying human mobility data as people-region-time interactions. An encoder operating in continuous time dynamically computes evolving node representations, capturing dynamic states for both people and regions. Moreover, an autoregressive self-supervised task is specially designed to guide the learning of the general-purpose node embeddings. In stage 2, these representations are utilized to support various tasks. To evaluate the effectiveness of our GDHME framework, we further construct a multi-task urban sensing benchmark. Offline experiments demonstrate GDHME's ability to automatically learn valuable node features from vast amounts of data. Furthermore, our framework is used to deploy the JiuTian ChuanLiu Big Model, a system that was presented at the 2023 China Mobile Worldwide Partner Conference.
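To make the two-stage pipeline described in the abstract concrete, below is a minimal PyTorch sketch of the idea, not the authors' implementation: an event-driven encoder keeps an evolving state for every person and region node, an autoregressive head scores the next person-region interaction (the stage-1 self-supervised task), and the resulting node states are reused as frozen features for stage-2 downstream tasks. All class names, the GRU-based update rule, and the dimensions are illustrative assumptions.

```python
# Illustrative sketch only: GDHME-style continuous-time dynamic-graph encoding.
# The GRU-based update, dimensions, and names are assumptions, not the paper's code.
import torch
import torch.nn as nn


class DynamicNodeEncoder(nn.Module):
    """Stage 1: maintain an evolving state per node (person or region) and
    update both endpoints whenever a person-region-time interaction arrives."""

    def __init__(self, num_nodes: int, dim: int = 64):
        super().__init__()
        self.register_buffer("state", torch.zeros(num_nodes, dim))      # evolving node states
        self.update = nn.GRUCell(input_size=dim + 1, hidden_size=dim)   # +1 for elapsed time
        self.score = nn.Linear(2 * dim, 1)                              # autoregressive next-interaction head

    def step(self, person: int, region: int, dt: float) -> None:
        """Consume one (person, region, time-gap) event and refresh both node states."""
        p, r = self.state[person].clone(), self.state[region].clone()
        gap = torch.tensor([dt])
        new_p = self.update(torch.cat([r, gap]).unsqueeze(0), p.unsqueeze(0))
        new_r = self.update(torch.cat([p, gap]).unsqueeze(0), r.unsqueeze(0))
        self.state[person] = new_p[0].detach()
        self.state[region] = new_r[0].detach()

    def next_event_logit(self, person: int, region: int) -> torch.Tensor:
        """Self-supervised objective: how likely is `person` to visit `region` next?"""
        pair = torch.cat([self.state[person], self.state[region]])
        return self.score(pair)


# Stage 2: the learned node states act as general-purpose features,
# e.g. as inputs to a simple probe for region-function or user-profiling tasks.
encoder = DynamicNodeEncoder(num_nodes=10)
encoder.step(person=0, region=7, dt=1.0)            # replay one mobility event
logit = encoder.next_event_logit(person=0, region=7)
region_features = encoder.state.detach()            # frozen embeddings for downstream tasks
```

In a full training loop, the stage-1 logits would presumably be contrasted against negative regions over the whole event stream before the node states are exported to the downstream benchmark tasks.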
Related papers
- Learning Multi-Modal Mobility Dynamics for Generalized Next Location Recommendation [51.00494428978262]
We leverage multi-modal spatial-temporal knowledge to characterize mobility dynamics for the location recommendation task. First, we construct a unified spatial-temporal relational graph (STRG) for multi-modal representation. Second, we design a gating mechanism to fuse spatial-temporal graph representations of different modalities.
arXiv Detail & Related papers (2025-12-27T14:23:04Z) - HHI-Assist: A Dataset and Benchmark of Human-Human Interaction in Physical Assistance Scenario [63.77482302352545]
HHI-Assist is a dataset comprising motion capture clips of human-human interactions in assistive tasks. Our work has the potential to significantly enhance robotic assistance policies.
arXiv Detail & Related papers (2025-09-12T09:38:17Z) - UniMove: A Unified Model for Multi-city Human Mobility Prediction [18.826615430413373]
Human mobility prediction is vital for urban planning, transportation optimization, and personalized services. Existing solutions often require training separate models for each city due to distinct spatial representations and geographic coverage. We propose UniMove, a unified model for multi-city human mobility prediction.
arXiv Detail & Related papers (2025-08-09T13:47:22Z) - Identifying and Characterising Higher Order Interactions in Mobility Networks Using Hypergraphs [1.1060425537315088]
We propose co-visitation hypergraphs, a model that leverages temporal observation windows to extract group interactions between locations. Using frequent pattern mining, our approach constructs hypergraphs that capture dynamic mobility behaviors across different spatial and temporal scales. Our results demonstrate that our hypergraph-based mobility analysis framework is a valuable tool with potential applications in diverse fields.
arXiv Detail & Related papers (2025-03-24T11:29:06Z) - OmniRe: Omni Urban Scene Reconstruction [78.99262488964423]
We introduce OmniRe, a comprehensive system for creating high-fidelity digital twins of dynamic real-world scenes from on-device logs. Our approach builds scene graphs on 3DGS and constructs multiple Gaussian representations in canonical spaces that model various dynamic actors.
arXiv Detail & Related papers (2024-08-29T17:56:33Z) - Deep Activity Model: A Generative Approach for Human Mobility Pattern Synthesis [11.90100976089832]
We develop a novel generative deep learning approach for human mobility modeling and synthesis.
It incorporates both activity patterns and location trajectories using open-source data.
The model can be fine-tuned with local data, allowing it to adapt to accurately represent mobility patterns across diverse regions.
arXiv Detail & Related papers (2024-05-24T02:04:10Z) - Regions are Who Walk Them: a Large Pre-trained Spatiotemporal Model Based on Human Mobility for Ubiquitous Urban Sensing [24.48869607589127]
We propose a large spatiotemporal model based on trajectories (RAW) to tap into the rich information within human mobility data.
Our proposed method, relying solely on human mobility data without additional features, exhibits a certain level of relevance in user profiling and region analysis.
arXiv Detail & Related papers (2023-11-17T11:55:11Z) - Universal Humanoid Motion Representations for Physics-Based Control [71.46142106079292]
We present a universal motion representation that encompasses a comprehensive range of motor skills for physics-based humanoid control.
We first learn a motion imitator that can imitate all of human motion from a large, unstructured motion dataset.
We then create our motion representation by distilling skills directly from the imitator.
arXiv Detail & Related papers (2023-10-06T20:48:43Z) - Activity-aware Human Mobility Prediction with Hierarchical Graph Attention Recurrent Network [6.09493819953104]
We present Hierarchical Graph Attention Recurrent Network (HGARN) for human mobility prediction. Specifically, we construct a hierarchical graph based on past mobility records and employ a Hierarchical Graph Attention Module to capture complex time-activity-location dependencies. For model evaluation, we test the performance of HGARN against existing state-of-the-art methods in both the recurring (i.e., returning to a previously visited location) and explorative (i.e., visiting a new location) settings.
arXiv Detail & Related papers (2022-10-14T12:56:01Z) - GIMO: Gaze-Informed Human Motion Prediction in Context [75.52839760700833]
We propose a large-scale human motion dataset that delivers high-quality body pose sequences, scene scans, and ego-centric views with eye gaze.
Our data collection is not tied to specific scenes, which further boosts the motion dynamics observed from our subjects.
To realize the full potential of gaze, we propose a novel network architecture that enables bidirectional communication between the gaze and motion branches.
arXiv Detail & Related papers (2022-04-20T13:17:39Z) - TRiPOD: Human Trajectory and Pose Dynamics Forecasting in the Wild [77.59069361196404]
TRiPOD is a novel method for predicting body dynamics based on graph attentional networks.
To incorporate a real-world challenge, we learn an indicator representing whether an estimated body joint is visible/invisible at each frame.
Our evaluation shows that TRiPOD outperforms all prior work and state-of-the-art specifically designed for each of the trajectory and pose forecasting tasks.
arXiv Detail & Related papers (2021-04-08T20:01:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.