Large Language Models for Spatial Trajectory Patterns Mining
- URL: http://arxiv.org/abs/2310.04942v1
- Date: Sat, 7 Oct 2023 23:21:29 GMT
- Title: Large Language Models for Spatial Trajectory Patterns Mining
- Authors: Zheng Zhang, Hossein Amiri, Zhenke Liu, Andreas Züfle, Liang Zhao
- Abstract summary: Large language models (LLMs) have demonstrated their ability to reason in a manner akin to humans.
This presents significant potential for analyzing temporal patterns in human mobility.
Our work provides insights on the strengths and limitations of LLMs for human spatial trajectory analysis.
- Score: 9.70298494476926
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Identifying anomalous human spatial trajectory patterns can indicate dynamic
changes in mobility behavior with applications in domains like infectious
disease monitoring and elderly care. Recent advancements in large language
models (LLMs) have demonstrated their ability to reason in a manner akin to
humans. This presents significant potential for analyzing temporal patterns in
human mobility. In this paper, we conduct empirical studies to assess the
capabilities of leading LLMs like GPT-4 and Claude-2 in detecting anomalous
behaviors from mobility data, by comparing to specialized methods. Our key
findings demonstrate that LLMs can attain reasonable anomaly detection
performance even without any specific cues. In addition, providing contextual
clues about potential irregularities can further enhance their prediction
efficacy. Moreover, LLMs can provide reasonable explanations for their
judgments, thereby improving transparency. Our work provides insights on the
strengths and limitations of LLMs for human spatial trajectory analysis.
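To give a rough sense of the zero-shot setup the abstract describes, here is a minimal sketch that prompts an LLM to flag anomalous visits in a trajectory. The OpenAI client usage, the prompt wording, the model name, and the toy data are all illustrative assumptions, not the authors' exact protocol.

```python
# A minimal sketch of zero-shot trajectory anomaly detection with an
# LLM. Prompt wording, model name, and data format are illustrative
# assumptions, not the authors' exact protocol.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def detect_anomalies(trajectory):
    """trajectory: list of (timestamp, place) visits for one agent."""
    visits = "\n".join(f"{t}: {place}" for t, place in trajectory)
    prompt = (
        "Below is one agent's sequence of visited places.\n"
        "Identify any visits that deviate from the agent's routine\n"
        "and briefly explain your reasoning.\n\n" + visits
    )
    resp = client.chat.completions.create(
        model="gpt-4",  # the paper also evaluates Claude-2
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content

print(detect_anomalies([
    ("Mon 09:00", "office"), ("Mon 18:30", "home"),
    ("Tue 09:00", "office"), ("Wed 03:10", "warehouse"),  # unusual visit
]))
```

Per the abstract's second finding, prepending a contextual hint about the kind of irregularity to look for (e.g., late-night visits to unfamiliar places) would be expected to improve detection further.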
Related papers
- Causality for Large Language Models [37.10970529459278]
Large language models (LLMs) with billions or trillions of parameters are trained on vast datasets, achieving unprecedented success across a series of language tasks.
Recent research highlights that LLMs function as causal parrots, capable of reciting causal knowledge without truly understanding or applying it.
This survey aims to explore how causality can enhance LLMs at every stage of their lifecycle.
arXiv Detail & Related papers (2024-10-20T07:22:23Z)
- The LLM Effect: Are Humans Truly Using LLMs, or Are They Being Influenced By Them Instead? [60.01746782465275]
Large Language Models (LLMs) have shown capabilities close to human performance in various analytical tasks.
This paper investigates the efficiency and accuracy of LLMs in specialized tasks through a structured user study focusing on Human-LLM partnership.
arXiv Detail & Related papers (2024-10-07T02:30:18Z)
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
Our method, which combines concepts from Optimal Transport and Shapley Values, is termed Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- Large Language Models can Deliver Accurate and Interpretable Time Series Anomaly Detection [34.40206965758026]
Time series anomaly detection (TSAD) plays a crucial role in various industries by identifying atypical patterns that deviate from standard trends.
Traditional TSAD models, which often rely on deep learning, require extensive training data and operate as black boxes.
We propose LLMAD, a novel TSAD method that employs Large Language Models (LLMs) to deliver accurate and interpretable TSAD results.
arXiv Detail & Related papers (2024-05-24T09:07:02Z)
- Evaluating Interventional Reasoning Capabilities of Large Language Models [58.52919374786108]
Large language models (LLMs) can estimate causal effects under interventions on different parts of a system.
We conduct empirical analyses to evaluate whether LLMs can accurately update their knowledge of a data-generating process in response to an intervention.
We create benchmarks that span diverse causal graphs (e.g., confounding, mediation) and variable types, and enable a study of intervention-based reasoning.
arXiv Detail & Related papers (2024-04-08T14:15:56Z)
- PoLLMgraph: Unraveling Hallucinations in Large Language Models via State Transition Dynamics [51.17512229589]
PoLLMgraph is a model-based white-box detection and forecasting approach for large language models.
We show that hallucination can be effectively detected by analyzing the LLM's internal state transition dynamics.
Our work paves a new way for model-based white-box analysis of LLMs, motivating the research community to further explore, understand, and refine the intricate dynamics of LLM behaviors.
arXiv Detail & Related papers (2024-04-06T20:02:20Z)
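To illustrate the state-transition idea in the PoLLMgraph entry above, here is a simplified stand-in, not the paper's actual pipeline: discretize hidden-state vectors with k-means, fit a Markov transition matrix on reference generations, and flag generations whose transitions are unlikely. The synthetic vectors stand in for real LLM activations.

```python
# Simplified stand-in for state-transition analysis of LLM internals:
# cluster hidden states, fit a smoothed Markov transition matrix on
# reference data, and score new generations by transition likelihood.
# Not PoLLMgraph's actual pipeline; data here is synthetic.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
ref_states = rng.normal(size=(500, 16))          # stand-in hidden vectors
new_states = rng.normal(loc=2.0, size=(40, 16))  # a "drifted" generation

k = 8
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(ref_states)

def transition_matrix(labels, k):
    m = np.ones((k, k))  # Laplace smoothing avoids log(0)
    for a, b in zip(labels[:-1], labels[1:]):
        m[a, b] += 1
    return m / m.sum(axis=1, keepdims=True)

T = transition_matrix(km.labels_, k)

def avg_log_likelihood(states):
    labels = km.predict(states)
    steps = [np.log(T[a, b]) for a, b in zip(labels[:-1], labels[1:])]
    return float(np.mean(steps))

print("reference:", avg_log_likelihood(ref_states))
print("candidate:", avg_log_likelihood(new_states))  # lower => suspicious
```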
- Explaining Large Language Models Decisions with Shapley Values [1.223779595809275]
Large language models (LLMs) have opened up exciting possibilities for simulating human behavior and cognitive processes.
However, the validity of utilizing LLMs as stand-ins for human subjects remains uncertain.
This paper presents a novel approach based on Shapley values to interpret LLM behavior and quantify the relative contribution of each prompt component to the model's output.
arXiv Detail & Related papers (2024-03-29T22:49:43Z)
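To make the Shapley attribution in the entry above concrete, here is a minimal sketch computing exact Shapley values over three prompt components. The component names and the toy value function are hypothetical stand-ins; in the paper's setting the value would be a score derived from the LLM's output for a prompt built from the given subset.

```python
# A minimal sketch of exact Shapley attribution over prompt components.
# value() is a toy stand-in for scoring the LLM's output on a prompt
# built from the given subset of components.
from itertools import combinations
from math import factorial

components = ["system role", "task instruction", "few-shot example"]

def value(subset):
    # Hypothetical quality score for a prompt built from `subset`.
    v = 0.0
    if "task instruction" in subset:
        v += 0.5
    if "few-shot example" in subset:
        v += 0.2
        if "task instruction" in subset:
            v += 0.1  # interaction: examples help more with instructions
    if "system role" in subset:
        v += 0.05
    return v

def shapley(i):
    """Average marginal contribution of component i over all orderings."""
    others = [c for c in components if c != i]
    n = len(components)
    total = 0.0
    for r in range(len(others) + 1):
        for s in combinations(others, r):
            w = factorial(r) * factorial(n - r - 1) / factorial(n)
            total += w * (value(set(s) | {i}) - value(set(s)))
    return total

for c in components:
    print(f"{c}: {shapley(c):.3f}")
```

Exact enumeration is feasible only for a handful of components; with more, sampling-based Shapley approximations are the usual choice.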
- Comprehensive Reassessment of Large-Scale Evaluation Outcomes in LLMs: A Multifaceted Statistical Approach [64.42462708687921]
Evaluations have revealed that factors such as scaling, training types, and architectures profoundly impact the performance of LLMs.
Our study embarks on a thorough re-examination of these LLMs, targeting the inadequacies in current evaluation methods.
This includes the application of ANOVA, Tukey HSD tests, GAMM, and clustering techniques.
arXiv Detail & Related papers (2024-03-22T14:47:35Z)
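A minimal sketch of the statistical toolkit named in the entry above, on synthetic benchmark scores: one-way ANOVA across model groups followed by Tukey's HSD. The group names, sample sizes, and numbers are fabricated for illustration, and GAMM is omitted for brevity.

```python
# One-way ANOVA plus Tukey HSD on synthetic benchmark scores; the
# groups and numbers are fabricated placeholders for illustration.
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
scores = {
    "7B":  rng.normal(0.55, 0.05, 30),
    "13B": rng.normal(0.60, 0.05, 30),
    "70B": rng.normal(0.70, 0.05, 30),
}

f, p = f_oneway(*scores.values())
print(f"ANOVA: F={f:.2f}, p={p:.4g}")

values = np.concatenate(list(scores.values()))
groups = np.repeat(list(scores.keys()), [len(v) for v in scores.values()])
print(pairwise_tukeyhsd(values, groups))  # pairwise group comparisons
```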
- Characterizing Truthfulness in Large Language Model Generations with Local Intrinsic Dimension [63.330262740414646]
We study how to characterize and predict the truthfulness of texts generated from large language models (LLMs).
We suggest investigating internal activations and quantifying LLM's truthfulness using the local intrinsic dimension (LID) of model activations.
arXiv Detail & Related papers (2024-02-28T04:56:21Z)
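As a sketch of the LID idea in the entry above, the snippet below applies the Levina-Bickel maximum-likelihood estimator to synthetic vectors; the estimator choice and the data are assumptions for illustration, whereas the paper computes LID on actual LLM activations.

```python
# A minimal sketch of local intrinsic dimension (LID) estimation via
# the Levina-Bickel MLE over k nearest neighbors. The synthetic matrix
# stands in for LLM hidden activations.
import numpy as np
from sklearn.neighbors import NearestNeighbors

def lid_mle(points, k=20):
    """Per-point LID estimates for an (n, d) array of activations."""
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(points)
    dists, _ = nbrs.kneighbors(points)  # column 0 is the point itself
    d = dists[:, 1:]                    # (n, k) neighbor distances
    # MLE: -1 / mean_j log(T_j / T_k), for j = 1..k-1
    return -1.0 / np.mean(np.log(d[:, :-1] / d[:, -1:]), axis=1)

rng = np.random.default_rng(0)
# A 3-dim manifold embedded linearly in 64 dims: estimates should be ~3.
acts = rng.normal(size=(1000, 3)) @ rng.normal(size=(3, 64))
print(f"median LID: {np.median(lid_mle(acts)):.2f}")
```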
- Where Would I Go Next? Large Language Models as Human Mobility Predictors [21.100313868232995]
We introduce a novel method, LLM-Mob, which leverages the language understanding and reasoning capabilities of LLMs for analysing human mobility data.
Comprehensive evaluations of our method reveal that LLM-Mob excels in providing accurate and interpretable predictions.
arXiv Detail & Related papers (2023-08-29T10:24:23Z)
- Predicting Human Mobility via Self-supervised Disentanglement Learning [21.61423193132924]
We propose a novel disentangled solution called SSDL for tackling the next POI prediction problem.
We present two realistic trajectory augmentation approaches to enhance the understanding of both the human intrinsic periodicity and constantly-changing intents.
Extensive experiments conducted on four real-world datasets demonstrate that our proposed SSDL significantly outperforms the state-of-the-art approaches.
arXiv Detail & Related papers (2022-11-17T16:17:22Z)
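The SSDL entry above mentions two trajectory augmentation approaches; as a generic illustration only, here are two common augmentations for visit sequences. These are assumed stand-ins, not the paper's specific methods.

```python
# Two generic augmentations for visit sequences, in the spirit of the
# entry above: random visit dropout and small temporal jitter. These
# are common stand-ins, not SSDL's two specific approaches.
import random

random.seed(0)  # reproducible sketch

def drop_visits(traj, p=0.2):
    """Randomly drop visits from a list of (minute_of_day, poi) pairs."""
    kept = [v for v in traj if random.random() > p]
    return kept or traj[:1]  # never return an empty trajectory

def jitter_times(traj, max_shift=15):
    """Shift each visit time by up to +/- max_shift minutes."""
    return [(t + random.randint(-max_shift, max_shift), poi)
            for t, poi in traj]

traj = [(540, "cafe"), (600, "office"), (720, "gym"), (1080, "home")]
print(drop_visits(traj))
print(jitter_times(traj))
```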
This list is automatically generated from the titles and abstracts of the papers indexed on this site.