GSTM-HMU: Generative Spatio-Temporal Modeling for Human Mobility Understanding
- URL: http://arxiv.org/abs/2509.19135v1
- Date: Tue, 23 Sep 2025 15:20:38 GMT
- Title: GSTM-HMU: Generative Spatio-Temporal Modeling for Human Mobility Understanding
- Authors: Wenying Luo, Zhiyuan Lin, Wenhao Xu, Minghao Liu, Zhi Li
- Abstract summary: We introduce GSTM-HMU, a generative spatio-temporal framework to advance mobility analysis. We conduct experiments on four widely used real-world datasets, including Gowalla, WeePlace, Brightkite, and FourSquare.
- Score: 12.79351579779076
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Human mobility traces, often recorded as sequences of check-ins, provide a unique window into both short-term visiting patterns and persistent lifestyle regularities. In this work we introduce GSTM-HMU, a generative spatio-temporal framework designed to advance mobility analysis by explicitly modeling the semantic and temporal complexity of human movement. The framework consists of four key innovations. First, a Spatio-Temporal Concept Encoder (STCE) integrates geographic location, POI category semantics, and periodic temporal rhythms into unified vector representations. Second, a Cognitive Trajectory Memory (CTM) adaptively filters historical visits, emphasizing recent and behaviorally salient events in order to capture user intent more effectively. Third, a Lifestyle Concept Bank (LCB) contributes structured human preference cues, such as activity types and lifestyle patterns, to enhance interpretability and personalization. Finally, task-oriented generative heads transform the learned representations into predictions for multiple downstream tasks. We conduct extensive experiments on four widely used real-world datasets, including Gowalla, WeePlace, Brightkite, and FourSquare, and evaluate performance on three benchmark tasks: next-location prediction, trajectory-user identification, and time estimation. The results demonstrate consistent and substantial improvements over strong baselines, confirming the effectiveness of GSTM-HMU in extracting semantic regularities from complex mobility data. Beyond raw performance gains, our findings also suggest that generative modeling provides a promising foundation for building more robust, interpretable, and generalizable systems for human mobility intelligence.
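The abstract describes the Spatio-Temporal Concept Encoder (STCE) as fusing geographic location, POI category semantics, and periodic temporal rhythms into one vector. The paper does not publish its encoder, but the idea of encoding daily and weekly periodicity with sine/cosine pairs and concatenating the modalities can be sketched as follows; the function names and the toy two-dimensional category embedding are illustrative assumptions, not the authors' implementation.

```python
import math

def periodic_time_features(hour: float, weekday: int) -> list[float]:
    """Encode periodic temporal rhythms (daily and weekly cycles) as
    sine/cosine pairs, so 23:59 and 00:00 map to nearby points."""
    day = 2 * math.pi * hour / 24.0
    week = 2 * math.pi * weekday / 7.0
    return [math.sin(day), math.cos(day), math.sin(week), math.cos(week)]

def encode_checkin(lat: float, lon: float,
                   category_vec: list[float],
                   hour: float, weekday: int) -> list[float]:
    """Concatenate geographic coordinates, a POI-category embedding,
    and the periodic time features into one unified vector."""
    return [lat, lon] + category_vec + periodic_time_features(hour, weekday)

# One check-in: a venue in San Francisco, late Friday evening.
vec = encode_checkin(37.77, -122.42, [0.1, 0.9], hour=23.5, weekday=4)
```

The cyclic encoding is the key detail: a raw hour-of-day feature would place 23:00 and 01:00 far apart even though the visiting behavior is similar, whereas the sine/cosine pair keeps them close.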
Related papers
- Learning Multi-Modal Mobility Dynamics for Generalized Next Location Recommendation [51.00494428978262]
We leverage multi-modal spatial-temporal knowledge to characterize mobility dynamics for the location recommendation task. First, we construct a unified spatial-temporal relational graph (STRG) for multi-modal representation. Second, we design a gating mechanism to fuse spatial-temporal graph representations of different modalities.
arXiv Detail & Related papers (2025-12-27T14:23:04Z) - ST-ReP: Learning Predictive Representations Efficiently for Spatial-Temporal Forecasting [7.637123047745445]
Self-supervised methods are increasingly adapted to learn spatial-temporal representations. Current value reconstruction and future value prediction are integrated into the pre-training framework. Multi-time scale analysis is incorporated into the self-supervised loss to enhance predictive capability.
arXiv Detail & Related papers (2024-12-19T05:33:55Z) - DyG-Mamba: Continuous State Space Modeling on Dynamic Graphs [59.434893231950205]
Dynamic graph learning aims to uncover evolutionary laws in real-world systems.
We propose DyG-Mamba, a new continuous state space model for dynamic graph learning.
We show that DyG-Mamba achieves state-of-the-art performance on most datasets.
arXiv Detail & Related papers (2024-08-13T15:21:46Z) - Spatial-Temporal Cross-View Contrastive Pre-training for Check-in Sequence Representation Learning [21.580705078081078]
We propose a novel Spatial-Temporal Cross-view Contrastive Representation (ST CCR) framework for check-in sequence representation learning.
ST CCR employs self-supervision from "spatial topic" and "temporal intention" views, facilitating effective fusion of spatial and temporal information at the semantic level.
We extensively evaluate ST CCR on three real-world datasets and demonstrate its superior performance across three downstream tasks.
arXiv Detail & Related papers (2024-07-22T10:20:34Z) - Rethinking Urban Mobility Prediction: A Super-Multivariate Time Series
Forecasting Approach [71.67506068703314]
Long-term urban mobility predictions play a crucial role in the effective management of urban facilities and services.
Traditionally, urban mobility data has been structured as videos, treating longitude and latitude as fundamental pixels.
In our research, we introduce a fresh perspective on urban mobility prediction.
Instead of oversimplifying urban mobility data as traditional video data, we regard it as a complex time series.
arXiv Detail & Related papers (2023-12-04T07:39:05Z) - Spatio-Temporal Branching for Motion Prediction using Motion Increments [55.68088298632865]
Human motion prediction (HMP) has emerged as a popular research topic due to its diverse applications.
Traditional methods rely on hand-crafted features and machine learning techniques.
We propose a novel spatio-temporal branching network using incremental information for HMP.
arXiv Detail & Related papers (2023-08-02T12:04:28Z) - OpenSTL: A Comprehensive Benchmark of Spatio-Temporal Predictive
Learning [67.07363529640784]
We propose OpenSTL to categorize prevalent approaches into recurrent-based and recurrent-free models.
We conduct standard evaluations on datasets across various domains, including synthetic moving object trajectory, human motion, driving scenes, traffic flow and forecasting weather.
We find that recurrent-free models achieve a better balance between efficiency and performance than recurrent models.
arXiv Detail & Related papers (2023-06-20T03:02:14Z) - Activity-aware Human Mobility Prediction with Hierarchical Graph Attention Recurrent Network [6.09493819953104]
We present Hierarchical Graph Attention Recurrent Network (HGARN) for human mobility prediction. Specifically, we construct a hierarchical graph based on past mobility records and employ a Hierarchical Graph Attention Module to capture complex time-activity-location dependencies. For model evaluation, we test the performance of HGARN against existing state-of-the-art methods in both the recurring (i.e., returning to a previously visited location) and explorative (i.e., visiting a new location) settings.
arXiv Detail & Related papers (2022-10-14T12:56:01Z) - Learning Dual Dynamic Representations on Time-Sliced User-Item
Interaction Graphs for Sequential Recommendation [62.30552176649873]
We devise a novel Dynamic Representation Learning model for Sequential Recommendation (DRL-SRe)
To better model the user-item interactions for characterizing the dynamics from both sides, the proposed model builds a global user-item interaction graph for each time slice.
To enable the model to capture fine-grained temporal information, we propose an auxiliary temporal prediction task over consecutive time slices.
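The DRL-SRe entry builds one user-item interaction graph per time slice and adds an auxiliary prediction task over consecutive slices. The slicing step itself can be illustrated with a minimal sketch; the record format `(user, item, timestamp)` and the fixed slice length are assumptions for illustration, not details from the paper.

```python
from collections import defaultdict

def slice_interactions(interactions, slice_len):
    """Partition (user, item, timestamp) records into consecutive
    time slices; each slice holds the edge list of one bipartite
    user-item interaction graph."""
    slices = defaultdict(list)
    t0 = min(t for _, _, t in interactions)  # align slices to the earliest event
    for user, item, t in interactions:
        slices[(t - t0) // slice_len].append((user, item))
    return [slices[k] for k in sorted(slices)]

events = [("u1", "i1", 0), ("u2", "i1", 5), ("u1", "i2", 12)]
graphs = slice_interactions(events, slice_len=10)
```

A model in this style would then learn one graph representation per slice and train the auxiliary head to predict edges in slice *t+1* from the representation of slice *t*.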
arXiv Detail & Related papers (2021-09-24T07:44:27Z) - Individual Mobility Prediction: An Interpretable Activity-based Hidden
Markov Approach [6.1938383008964495]
This study develops an activity-based modeling framework for individual mobility prediction.
We show that the proposed model can achieve similar prediction performance to the state-of-the-art long short-term memory (LSTM) model.
arXiv Detail & Related papers (2021-01-11T16:11:27Z) - Episodic Memory for Learning Subjective-Timescale Models [1.933681537640272]
In model-based learning, an agent's model is commonly defined over transitions between consecutive states of an environment.
In contrast, intelligent behaviour in biological organisms is characterised by the ability to plan over varying temporal scales depending on the context.
We devise a novel approach to learning a transition dynamics model, based on the sequences of episodic memories that define the agent's subjective timescale.
arXiv Detail & Related papers (2020-10-03T21:55:40Z)