Human Mobility Modeling with Limited Information via Large Language Models
- URL: http://arxiv.org/abs/2409.17495v1
- Date: Thu, 26 Sep 2024 03:07:32 GMT
- Title: Human Mobility Modeling with Limited Information via Large Language Models
- Authors: Yifan Liu, Xishun Liao, Haoxuan Ma, Brian Yueshuai He, Chris Stanford, Jiaqi Ma,
- Abstract summary: We propose an innovative Large Language Model (LLM) empowered human mobility modeling framework.
Our proposed approach significantly reduces the reliance on detailed human mobility statistical data.
We have validated our results using the NHTS and SCAG-ABM datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Understanding human mobility patterns has traditionally been a complex challenge in transportation modeling. Due to the difficulties in obtaining high-quality training datasets across diverse locations, conventional activity-based models and learning-based human mobility modeling algorithms are particularly limited by the availability and quality of datasets. Furthermore, current research mainly focuses on the spatial-temporal travel pattern but lacks an understanding of the semantic information between activities, which is crucial for modeling the interdependence between activities. In this paper, we propose an innovative Large Language Model (LLM) empowered human mobility modeling framework. Our proposed approach significantly reduces the reliance on detailed human mobility statistical data, utilizing basic socio-demographic information of individuals to generate their daily mobility patterns. We have validated our results using the NHTS and SCAG-ABM datasets, demonstrating the effective modeling of mobility patterns and the strong adaptability of our framework across various geographic locations.
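The core idea of the abstract — feeding basic socio-demographic attributes to an LLM and reading back a daily activity chain — can be sketched as follows. This is an illustrative sketch only, not the authors' implementation: the attribute set, the `build_prompt` and `parse_activity_chain` helpers, and the line-based reply format are all assumptions, and the real LLM call is replaced by a stubbed reply.

```python
# Hypothetical sketch: prompt an LLM with socio-demographic info,
# then parse the reply into a daily activity chain.
from dataclasses import dataclass

@dataclass
class Person:
    age: int
    employment: str    # e.g. "full-time", "student", "retired"
    household_size: int

def build_prompt(p: Person) -> str:
    """Compose a natural-language prompt describing the person."""
    return (
        f"Generate a plausible daily activity chain for a {p.age}-year-old "
        f"{p.employment} person living in a household of {p.household_size}. "
        "Answer as 'activity @ HH:MM' lines, e.g. 'work @ 09:00'."
    )

def parse_activity_chain(reply: str) -> list[tuple[str, str]]:
    """Parse 'activity @ HH:MM' lines into (activity, start time) pairs."""
    chain = []
    for line in reply.splitlines():
        if "@" in line:
            activity, _, time = line.partition("@")
            chain.append((activity.strip(), time.strip()))
    return chain

# A stubbed reply stands in for a real LLM API call.
reply = "home @ 07:00\nwork @ 09:00\nshopping @ 18:00\nhome @ 19:30"
chain = parse_activity_chain(reply)
```

In this framing, the LLM supplies the activity semantics and interdependence (e.g. work follows home, shopping precedes the return trip) that the abstract argues conventional spatial-temporal models lack.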
Related papers
- Quo Vadis, Motion Generation? From Large Language Models to Large Motion Models [70.78051873517285]
We present MotionBase, the first million-level motion generation benchmark.
By leveraging this vast dataset, our large motion model demonstrates strong performance across a broad range of motions.
We introduce a novel 2D lookup-free approach for motion tokenization, which preserves motion information and expands codebook capacity.
arXiv Detail & Related papers (2024-10-04T10:48:54Z)
- Reconstructing Human Mobility Pattern: A Semi-Supervised Approach for Cross-Dataset Transfer Learning [10.864774173935535]
We have developed a model that reconstructs and learns human mobility patterns by focusing on semantic activity chains.
We introduce a semi-supervised iterative transfer learning algorithm to adapt models to diverse geographical contexts.
arXiv Detail & Related papers (2024-10-03T20:29:56Z)
- Deep Activity Model: A Generative Approach for Human Mobility Pattern Synthesis [11.90100976089832]
We develop a novel generative deep learning approach for human mobility modeling and synthesis.
It incorporates both activity patterns and location trajectories using open-source data.
The model can be fine-tuned with local data, allowing it to adapt to accurately represent mobility patterns across diverse regions.
arXiv Detail & Related papers (2024-05-24T02:04:10Z)
- Exploring Model Transferability through the Lens of Potential Energy [78.60851825944212]
Transfer learning has become crucial in computer vision tasks due to the vast availability of pre-trained deep learning models.
Existing methods for measuring the transferability of pre-trained models rely on statistical correlations between encoded static features and task labels.
We present an insightful physics-inspired approach named PED to address these challenges.
arXiv Detail & Related papers (2023-08-29T07:15:57Z)
- Scaling Vision-Language Models with Sparse Mixture of Experts [128.0882767889029]
We show that mixture-of-experts (MoE) techniques can achieve state-of-the-art performance on a range of benchmarks over dense models of equivalent computational cost.
Our research offers valuable insights into stabilizing the training of MoE models, understanding the impact of MoE on model interpretability, and balancing the trade-off between compute and performance when scaling vision-language models.
arXiv Detail & Related papers (2023-03-13T16:00:31Z)
- Learning to Simulate Daily Activities via Modeling Dynamic Human Needs [24.792813473159505]
We propose a knowledge-driven simulation framework based on generative adversarial imitation learning.
Our core idea is to model the evolution of human needs as the underlying mechanism that drives activity generation in the simulation model.
Our framework outperforms the state-of-the-art baselines in terms of data fidelity and utility.
arXiv Detail & Related papers (2023-02-09T12:30:55Z)
- Transformer Inertial Poser: Attention-based Real-time Human Motion Reconstruction from Sparse IMUs [79.72586714047199]
We propose an attention-based deep learning method to reconstruct full-body motion from six IMU sensors in real-time.
Our method achieves new state-of-the-art results both quantitatively and qualitatively, while being simple to implement and smaller in size.
arXiv Detail & Related papers (2022-03-29T16:24:52Z)
- DIME: Fine-grained Interpretations of Multimodal Models via Disentangled Local Explanations [119.1953397679783]
We focus on advancing the state-of-the-art in interpreting multimodal models.
Our proposed approach, DIME, enables accurate and fine-grained analysis of multimodal models.
arXiv Detail & Related papers (2022-03-03T20:52:47Z)
- Multi-agent Trajectory Prediction with Fuzzy Query Attention [15.12743751614964]
Trajectory prediction for scenes with multiple agents is a challenging problem in numerous domains such as traffic prediction, pedestrian tracking and path planning.
We present a general architecture to address this challenge which models the crucial inductive biases of motion, namely, inertia, relative motion, intents and interactions.
arXiv Detail & Related papers (2020-10-29T19:12:12Z)
- Goal-Aware Prediction: Learning to Model What Matters [105.43098326577434]
One of the fundamental challenges in using a learned forward dynamics model is the mismatch between the objective of the learned model and that of the downstream planner or policy.
We propose to direct prediction towards task relevant information, enabling the model to be aware of the current task and encouraging it to only model relevant quantities of the state space.
We find that our method more effectively models the relevant parts of the scene conditioned on the goal, and as a result outperforms standard task-agnostic dynamics models and model-free reinforcement learning.
arXiv Detail & Related papers (2020-07-14T16:42:59Z)
- TraLFM: Latent Factor Modeling of Traffic Trajectory Data [16.010576606023417]
We propose a novel generative model called TraLFM to mine human mobility patterns underlying traffic trajectories.
TraLFM is based on three key observations: (1) human mobility patterns are reflected by the sequences of locations in the trajectories; (2) human mobility patterns vary with people; and (3) human mobility patterns tend to be cyclical and change over time.
arXiv Detail & Related papers (2020-03-16T04:41:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.