Enhancing Large Language Models for Mobility Analytics with Semantic Location Tokenization
- URL: http://arxiv.org/abs/2506.11109v1
- Date: Sun, 08 Jun 2025 02:17:50 GMT
- Title: Enhancing Large Language Models for Mobility Analytics with Semantic Location Tokenization
- Authors: Yile Chen, Yicheng Tao, Yue Jiang, Shuai Liu, Han Yu, Gao Cong
- Abstract summary: We propose QT-Mob, a novel framework that significantly enhances Large Language Models (LLMs) for mobility analytics. QT-Mob introduces a location tokenization module that learns compact, semantically rich tokens to represent locations. Experiments on three real-world datasets demonstrate superior performance in both next-location prediction and mobility recovery tasks.
- Score: 29.17336622418242
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread adoption of location-based services has led to the generation of vast amounts of mobility data, providing significant opportunities to model user movement dynamics within urban environments. Recent advancements have focused on adapting Large Language Models (LLMs) for mobility analytics. However, existing methods face two primary limitations: inadequate semantic representation of locations (i.e., discrete IDs) and insufficient modeling of mobility signals within LLMs (i.e., single templated instruction fine-tuning). To address these issues, we propose QT-Mob, a novel framework that significantly enhances LLMs for mobility analytics. QT-Mob introduces a location tokenization module that learns compact, semantically rich tokens to represent locations, preserving contextual information while ensuring compatibility with LLMs. Furthermore, QT-Mob incorporates a series of complementary fine-tuning objectives that align the learned tokens with the internal representations in LLMs, improving the model's comprehension of sequential movement patterns and location semantics. The proposed QT-Mob framework not only enhances LLMs' ability to interpret mobility data but also provides a more generalizable approach for various mobility analytics tasks. Experiments on three real-world datasets demonstrate superior performance in both next-location prediction and mobility recovery tasks, outperforming existing deep learning and LLM-based methods.
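The abstract leaves the internals of the location tokenization module unspecified. As one plausible reading (the "QT" in QT-Mob suggests quantized tokens), the sketch below shows residual vector quantization turning continuous location embeddings into a few discrete codes that could be added to an LLM's vocabulary; every name, dimension, and the token format here are illustrative assumptions, not QT-Mob's actual design.

```python
# Hypothetical sketch: quantizing location embeddings into discrete tokens.
# Residual vector quantization is one standard way to learn "compact,
# semantically rich tokens" an LLM vocabulary can absorb; nothing here is
# taken from the QT-Mob paper itself.
import torch
import torch.nn as nn

class ResidualQuantizer(nn.Module):
    """Maps a continuous location embedding to `levels` discrete code IDs."""
    def __init__(self, dim: int = 128, codebook_size: int = 256, levels: int = 3):
        super().__init__()
        self.codebooks = nn.ModuleList(
            nn.Embedding(codebook_size, dim) for _ in range(levels)
        )

    def forward(self, x: torch.Tensor):
        # x: (batch, dim) embeddings of locations, e.g. encoded from POI
        # category, address, and neighborhood descriptions.
        residual, codes, quantized = x, [], 0.0
        for book in self.codebooks:
            # Pick the nearest codebook entry for the current residual.
            dists = torch.cdist(residual, book.weight)   # (batch, codebook_size)
            idx = dists.argmin(dim=-1)                   # (batch,)
            chosen = book(idx)
            codes.append(idx)
            quantized = quantized + chosen
            residual = residual - chosen
        # Each location becomes a short tuple of IDs, e.g. a new LLM token
        # such as <loc_12_201_7> (token format is our invention).
        return torch.stack(codes, dim=-1), quantized

emb = torch.randn(4, 128)                    # four locations
tokens, recon = ResidualQuantizer()(emb)
print(tokens.shape)                          # torch.Size([4, 3])
```

A real implementation would also need the usual VQ training machinery (straight-through gradients and a commitment loss), omitted here for brevity.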
Related papers
- Exploring the Roles of Large Language Models in Reshaping Transportation Systems: A Survey, Framework, and Roadmap [51.198001060683296]
Large Language Models (LLMs) offer transformative potential to address transportation challenges. This survey first presents LLM4TR, a novel conceptual framework that systematically categorizes the roles of LLMs in transportation. For each role, our review spans diverse applications, from traffic prediction and autonomous driving to safety analytics and urban mobility optimization.
arXiv Detail & Related papers (2025-03-27T11:56:27Z) - A Foundational individual Mobility Prediction Model based on Open-Source Large Language Models [0.0]
Large Language Models (LLMs) are widely applied to domain-specific tasks. This paper proposes a unified fine-tuning framework to train a foundational open-source LLM-based mobility prediction model.
arXiv Detail & Related papers (2025-03-19T15:08:37Z) - Navigating Motion Agents in Dynamic and Cluttered Environments through LLM Reasoning [69.5875073447454]
This paper advances motion agents empowered by large language models (LLMs) toward autonomous navigation in dynamic and cluttered environments. Our training-free framework supports multi-agent coordination, closed-loop replanning, and dynamic obstacle avoidance without retraining or fine-tuning.
arXiv Detail & Related papers (2025-03-10T13:39:09Z) - Scaling Autonomous Agents via Automatic Reward Modeling And Planning [52.39395405893965]
Large language models (LLMs) have demonstrated remarkable capabilities across a range of tasks. However, they still struggle with problems requiring multi-step decision-making and environmental feedback. We propose a framework that can automatically learn a reward model from the environment without human annotations; one standard pairwise objective for such a reward model is sketched below.
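The summary does not state the training objective for the automatically learned reward model. A common choice when good/bad trajectory pairs are synthesized without human labels is a Bradley-Terry-style pairwise loss; the sketch below assumes that objective, and the feature dimensions are invented.

```python
# Hypothetical sketch: Bradley-Terry pairwise loss for a trajectory reward
# model trained on automatically generated (preferred, rejected) pairs.
# The summary above does not specify this objective; it is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TrajectoryRewardModel(nn.Module):
    def __init__(self, feat_dim: int = 64):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, traj_feats: torch.Tensor) -> torch.Tensor:
        # One scalar reward per trajectory feature vector.
        return self.scorer(traj_feats).squeeze(-1)

def pairwise_loss(model, preferred, rejected):
    # Encourage r(preferred) > r(rejected); in this setting the pairs
    # themselves were synthesized by an LLM, not labeled by humans.
    return -F.logsigmoid(model(preferred) - model(rejected)).mean()

model = TrajectoryRewardModel()
loss = pairwise_loss(model, torch.randn(8, 64), torch.randn(8, 64))
loss.backward()
```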
arXiv Detail & Related papers (2025-02-17T18:49:25Z) - Mobility-LLM: Learning Visiting Intentions and Travel Preferences from Human Mobility Data with Large Language Models [22.680033463634732]
Location-based services (LBS) have accumulated extensive human mobility data on diverse behaviors through check-in sequences.
Yet, existing models analyzing check-in sequences fail to consider the semantics contained in these sequences.
We present Mobility-LLM, a novel framework that leverages large language models to analyze check-in sequences for multiple tasks.
arXiv Detail & Related papers (2024-10-29T01:58:06Z) - LIMP: Large Language Model Enhanced Intent-aware Mobility Prediction [5.7042182940772275]
We propose LIMP (LLMs for Intent-aware Mobility Prediction), a novel framework.
Specifically, LIMP introduces an "Analyze-Abstract-Infer" (A2I) agentic workflow to unleash LLMs' commonsense reasoning power for mobility intention inference; a schematic sketch of such a workflow is shown below.
We evaluate LIMP on two real-world datasets, demonstrating improved accuracy in next-location prediction and effective intention inference.
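Only the names of the A2I stages appear in this summary. The sketch below shows how such an "Analyze-Abstract-Infer" workflow might chain three LLM calls; the prompts and the `llm` stub are placeholders, not LIMP's actual prompts.

```python
# Schematic only: these stage prompts are invented placeholders, not LIMP's
# actual prompts; `llm` stands in for any chat-completion call.
def llm(prompt: str) -> str:
    """Stub for a real LLM call (e.g., a chat-completion API)."""
    return f"[model output for: {prompt[:40]}...]"

def a2i_infer_intention(checkin_history: str) -> str:
    # Analyze: surface routines and temporal patterns in the raw history.
    analysis = llm(f"Analyze this check-in history for routines:\n{checkin_history}")
    # Abstract: compress the analysis into candidate visiting intentions.
    abstraction = llm(f"Abstract the routines into candidate intentions:\n{analysis}")
    # Infer: commit to the most likely intention for the next visit.
    return llm(f"Infer the most likely next-visit intention:\n{abstraction}")

print(a2i_infer_intention("Mon 08:10 cafe; Mon 09:00 office; ..."))
```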
arXiv Detail & Related papers (2024-08-23T04:28:56Z) - The Synergy between Data and Multi-Modal Large Language Models: A Survey from Co-Development Perspective [53.48484062444108]
We find that the development of models and the development of data are not two separate paths but are interconnected. On the one hand, larger and higher-quality data contribute to better performance of multi-modal large language models (MLLMs); on the other hand, MLLMs can facilitate the development of data. To promote data-model co-development in the MLLM community, we systematically review existing works related to MLLMs from the data-model co-development perspective.
arXiv Detail & Related papers (2024-07-11T15:08:11Z) - MARS: Benchmarking the Metaphysical Reasoning Abilities of Language Models with a Multi-task Evaluation Dataset [50.36095192314595]
Large Language Models (LLMs) function as conscious agents with generalizable reasoning capabilities. This ability remains underexplored due to the complexity of modeling infinite possible changes in an event. We introduce the first-ever benchmark, MARS, comprising three tasks corresponding to each step of this reasoning process.
arXiv Detail & Related papers (2024-06-04T08:35:04Z) - Traj-LLM: A New Exploration for Empowering Trajectory Prediction with Pre-trained Large Language Models [12.687494201105066]
This paper proposes Traj-LLM, the first to investigate the potential of using Large Language Models (LLMs) to generate future motion from agents' past/observed trajectories and scene semantics.
LLMs' powerful comprehension abilities capture a spectrum of high-level scene knowledge and interactive information.
Emulating humans' lane-focus cognition, we introduce lane-aware probabilistic learning powered by the Mamba module.
arXiv Detail & Related papers (2024-05-08T09:28:04Z) - Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension [63.330262740414646]
We study how to characterize and predict the truthfulness of texts generated by large language models (LLMs).
We suggest investigating internal activations and quantifying an LLM's truthfulness using the local intrinsic dimension (LID) of model activations; one standard LID estimator is sketched below.
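The summary does not say which LID estimator the paper uses. A standard choice over nearest-neighbor distances is the Levina-Bickel maximum-likelihood estimator, sketched here on hypothetical activation vectors.

```python
# Sketch of a local intrinsic dimension (LID) estimate via the classical
# Levina-Bickel MLE; treat this as an illustrative stand-in, since the
# summary above does not specify the paper's exact estimator.
import numpy as np

def lid_mle(query: np.ndarray, reference: np.ndarray, k: int = 20) -> float:
    # Distances from one activation vector to a reference set of activations.
    dists = np.sort(np.linalg.norm(reference - query, axis=1))
    dists = dists[dists > 0][:k]              # drop the zero self-distance
    # Levina-Bickel MLE: LID = -k / sum(log(T_j / T_k)), j = 1..k.
    return -len(dists) / np.sum(np.log(dists / dists[-1]))

acts = np.random.randn(1000, 768)             # e.g., hidden states of one layer
print(lid_mle(acts[0], acts, k=20))
```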
arXiv Detail & Related papers (2024-02-28T04:56:21Z) - Large Language Models as Urban Residents: An LLM Agent Framework for Personal Mobility Generation [19.566466895173924]
This paper introduces a novel approach using Large Language Models (LLMs) integrated into an agent framework for personal mobility generation.
Our approach addresses three research questions: aligning LLMs with real-world urban mobility data, developing reliable activity generation strategies, and exploring LLM applications in urban mobility.
arXiv Detail & Related papers (2024-02-22T18:03:14Z) - Chain-of-Planned-Behaviour Workflow Elicits Few-Shot Mobility Generation in LLMs [20.70758465552438]
Chain-of-Planned-Behaviour (CoPB) significantly reduces the error rate of mobility intention generation from 57.8% to 19.4%.
We find that mechanistic mobility models, such as the gravity model, can effectively map mobility intentions to physical movements; a textbook form of the gravity model is sketched below.
The proposed CoPB workflow can facilitate GPT-4-turbo in automatically generating high-quality labels for mobility behaviour reasoning.
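The gravity model mentioned above has a textbook form, T_ij ∝ m_i · m_j / d_ij^β; the sketch below normalizes the flows into destination-choice probabilities. The masses, distances, and exponent are illustrative, not CoPB's fitted parameters.

```python
# Textbook gravity model for trip flows; parameters below are illustrative.
import numpy as np

def gravity_flows(pop: np.ndarray, dist: np.ndarray, beta: float = 2.0) -> np.ndarray:
    # T_ij ∝ m_i * m_j / d_ij^beta, rows normalized to choice probabilities.
    flows = np.outer(pop, pop) / np.power(dist, beta)
    np.fill_diagonal(flows, 0.0)                  # no self-trips
    return flows / flows.sum(axis=1, keepdims=True)

pop = np.array([120., 45., 300.])                 # location "masses", e.g. visit counts
dist = np.array([[1., 2., 5.],
                 [2., 1., 3.],
                 [5., 3., 1.]])                   # pairwise distances (km)
print(gravity_flows(pop, dist))
```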
arXiv Detail & Related papers (2024-02-15T09:58:23Z) - Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning [104.58874584354787]
In recent years, pre-trained large language models (LLMs) have demonstrated remarkable efficiency in achieving an inference-time few-shot learning capability known as in-context learning.
This study aims to examine the in-context learning phenomenon through a Bayesian lens, viewing real-world LLMs as latent variable models.
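The Bayesian lens can be stated compactly: demonstrations update a belief over a latent task variable, and prediction marginalizes over it. The notation below is generic, not the paper's own.

```latex
% Latent-variable view of in-context learning: D are the demonstrations,
% \theta is the latent task variable; the symbols are generic illustrations.
p(y \mid x, D) = \int p(y \mid x, \theta)\, p(\theta \mid D)\, d\theta
```

On this reading, one way to understand demonstration selection is as sharpening the posterior p(θ | D) around the task actually being queried.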
arXiv Detail & Related papers (2023-01-27T18:59:01Z)