Where Would I Go Next? Large Language Models as Human Mobility
Predictors
- URL: http://arxiv.org/abs/2308.15197v2
- Date: Tue, 9 Jan 2024 14:08:03 GMT
- Title: Where Would I Go Next? Large Language Models as Human Mobility
Predictors
- Authors: Xinglei Wang, Meng Fang, Zichao Zeng, Tao Cheng
- Abstract summary: We introduce a novel method, LLM-Mob, which leverages the language understanding and reasoning capabilities of LLMs for analysing human mobility data.
Comprehensive evaluations of our method reveal that LLM-Mob excels in providing accurate and interpretable predictions.
- Score: 21.100313868232995
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurate human mobility prediction underpins many important applications
across a variety of domains, including epidemic modelling, transport planning,
and emergency responses. Due to the sparsity of mobility data and the
stochastic nature of people's daily activities, achieving precise predictions
of people's locations remains a challenge. While recently developed large
language models (LLMs) have demonstrated superior performance across numerous
language-related tasks, their applicability to human mobility studies remains
unexplored. Addressing this gap, this article delves into the potential of LLMs
for human mobility prediction tasks. We introduce a novel method, LLM-Mob,
which leverages the language understanding and reasoning capabilities of LLMs
for analysing human mobility data. We present concepts of historical stays and
context stays to capture both long-term and short-term dependencies in human
movement and enable time-aware prediction by using time information of the
prediction target. Additionally, we design context-inclusive prompts that
enable LLMs to generate more accurate predictions. Comprehensive evaluations of
our method reveal that LLM-Mob excels in providing accurate and interpretable
predictions, highlighting the untapped potential of LLMs in advancing human
mobility prediction techniques. We posit that our research marks a significant
paradigm shift in human mobility modelling, transitioning from building complex
domain-specific models to harnessing general-purpose LLMs that yield accurate
predictions through language instructions. The code for this work is available
at https://github.com/xlwang233/LLM-Mob.
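The prompting idea described above can be sketched in a few lines. Everything here is an illustrative assumption — the field names, prompt wording, and example data are not taken from the authors' implementation (which lives at the repository linked above); the sketch only shows how historical stays, context stays, and the target time could be combined into a single context-inclusive prompt.

```python
# Hedged sketch of a context-inclusive prompt in the spirit of LLM-Mob.
# Field names and wording are illustrative, not the authors' exact code.

def build_prompt(historical_stays, context_stays, target_time, top_k=10):
    """Assemble a next-location prediction prompt.

    historical_stays: long-term (start_time, duration_hours, place_id) tuples
    context_stays:    recent stays capturing short-term dependencies
    target_time:      start time of the stay to predict (time-aware prediction)
    """
    def fmt(stays):
        return "\n".join(f"  ({t}, {d}h, place {p})" for t, d, p in stays)

    return (
        "Your task is to predict a user's next location.\n"
        f"Historical stays (long-term patterns):\n{fmt(historical_stays)}\n"
        f"Context stays (recent movement):\n{fmt(context_stays)}\n"
        f"The next stay begins at {target_time}.\n"
        f"Return the {top_k} most likely place IDs with a brief reason."
    )

# Hypothetical example data: place IDs are anonymised integers.
prompt = build_prompt(
    historical_stays=[("Mon 08:55", 8.0, 3), ("Mon 18:10", 1.5, 7)],
    context_stays=[("Tue 08:50", 8.2, 3)],
    target_time="Tue 18:05",
)
```

The resulting string would be sent to an LLM as-is; the time-awareness comes simply from stating the target time in the prompt rather than from any model modification.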
Related papers
- Mobility-LLM: Learning Visiting Intentions and Travel Preferences from Human Mobility Data with Large Language Models [22.680033463634732]
Location-based services (LBS) have accumulated extensive human mobility data on diverse behaviors through check-in sequences.
Yet, existing models analyzing check-in sequences fail to consider the semantics contained in these sequences.
We present Mobility-LLM, a novel framework that leverages large language models to analyze check-in sequences for multiple tasks.
arXiv Detail & Related papers (2024-10-29T01:58:06Z)
- AgentMove: Predicting Human Mobility Anywhere Using Large Language Model based Agentic Framework [7.007450097312181]
We introduce AgentMove, a systematic agentic prediction framework to achieve generalized mobility prediction for any cities worldwide.
In AgentMove, we first decompose the mobility prediction task into three sub-tasks and then design corresponding modules to complete these subtasks.
Experiments on mobility data from two sources across 12 cities demonstrate that AgentMove outperforms the best baseline by more than 8% on various metrics.
arXiv Detail & Related papers (2024-08-26T02:36:55Z)
- LIMP: Large Language Model Enhanced Intent-aware Mobility Prediction [5.7042182940772275]
We propose LIMP (LLMs for Intent-aware Mobility Prediction), a novel framework.
Specifically, LIMP introduces an "Analyze-Abstract-Infer" (A2I) agentic workflow to unleash LLMs' commonsense reasoning power for mobility intention inference.
We evaluate LIMP on two real-world datasets, demonstrating improved accuracy in next-location prediction and effective intention inference.
arXiv Detail & Related papers (2024-08-23T04:28:56Z)
- Modulating Language Model Experiences through Frictions [56.17593192325438]
Over-consumption of language model outputs risks propagating unchecked errors in the short-term and damaging human capabilities for critical thinking in the long-term.
We propose selective frictions for language model experiences, inspired by behavioral science interventions, to dampen misuse.
arXiv Detail & Related papers (2024-06-24T16:31:11Z)
- A Survey on Human Preference Learning for Large Language Models [81.41868485811625]
The recent surge of versatile large language models (LLMs) largely depends on aligning increasingly capable foundation models with human intentions by preference learning.
This survey covers the sources and formats of preference feedback, the modeling and usage of preference signals, as well as the evaluation of the aligned LLMs.
arXiv Detail & Related papers (2024-06-17T03:52:51Z)
- Traj-LLM: A New Exploration for Empowering Trajectory Prediction with Pre-trained Large Language Models [12.687494201105066]
This paper proposes Traj-LLM, the first to investigate the potential of using Large Language Models (LLMs) to generate future motion from agents' past/observed trajectories and scene semantics.
LLMs' powerful comprehension abilities capture a spectrum of high-level scene knowledge and interactive information.
Emulating the human-like lane focus cognitive function, we introduce lane-aware probabilistic learning powered by the pioneering Mamba module.
arXiv Detail & Related papers (2024-05-08T09:28:04Z)
- Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension [63.330262740414646]
We study how to characterize and predict the truthfulness of texts generated from large language models (LLMs).
We suggest investigating internal activations and quantifying LLM's truthfulness using the local intrinsic dimension (LID) of model activations.
arXiv Detail & Related papers (2024-02-28T04:56:21Z)
- Rethinking Interpretability in the Era of Large Language Models [76.1947554386879]
Large language models (LLMs) have demonstrated remarkable capabilities across a wide array of tasks.
The capability to explain in natural language allows LLMs to expand the scale and complexity of patterns that can be explained to a human.
These new capabilities raise new challenges, such as hallucinated explanations and immense computational costs.
arXiv Detail & Related papers (2024-01-30T17:38:54Z)
- Large Language Models for Spatial Trajectory Patterns Mining [9.70298494476926]
Large language models (LLMs) have demonstrated their ability to reason in a manner akin to humans.
This presents significant potential for analyzing temporal patterns in human mobility.
Our work provides insights on the strengths and limitations of LLMs for human spatial trajectory analysis.
arXiv Detail & Related papers (2023-10-07T23:21:29Z)
- A Survey of Large Language Models [81.06947636926638]
Language modeling has been widely studied for language understanding and generation in the past two decades.
Recently, pre-trained language models (PLMs) have been proposed by pre-training Transformer models over large-scale corpora.
To distinguish models by parameter scale, the research community has coined the term large language models (LLMs) for PLMs of significant size.
arXiv Detail & Related papers (2023-03-31T17:28:46Z)
- Large Language Models Are Latent Variable Models: Explaining and Finding Good Demonstrations for In-Context Learning [104.58874584354787]
In recent years, pre-trained large language models (LLMs) have demonstrated remarkable efficiency in achieving an inference-time few-shot learning capability known as in-context learning.
This study aims to examine the in-context learning phenomenon through a Bayesian lens, viewing real-world LLMs as latent variable models.
arXiv Detail & Related papers (2023-01-27T18:59:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences.