Trajectory Prediction Meets Large Language Models: A Survey
- URL: http://arxiv.org/abs/2506.03408v1
- Date: Tue, 03 Jun 2025 21:36:56 GMT
- Title: Trajectory Prediction Meets Large Language Models: A Survey
- Authors: Yi Xu, Ruining Yang, Yitian Zhang, Yizhou Wang, Jianglin Lu, Mingyuan Zhang, Lili Su, Yun Fu
- Abstract summary: Recent advances in large language models (LLMs) have sparked growing interest in integrating language-driven techniques into trajectory prediction. This survey provides a comprehensive overview of this emerging field, categorizing recent work into five directions.
- Score: 55.70506060739684
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent advances in large language models (LLMs) have sparked growing interest in integrating language-driven techniques into trajectory prediction. By leveraging their semantic and reasoning capabilities, LLMs are reshaping how autonomous systems perceive, model, and predict trajectories. This survey provides a comprehensive overview of this emerging field, categorizing recent work into five directions: (1) Trajectory prediction via language modeling paradigms, (2) Direct trajectory prediction with pretrained language models, (3) Language-guided scene understanding for trajectory prediction, (4) Language-driven data generation for trajectory prediction, (5) Language-based reasoning and interpretability for trajectory prediction. For each, we analyze representative methods, highlight core design choices, and identify open challenges. This survey bridges natural language processing and trajectory prediction, offering a unified perspective on how language can enrich trajectory prediction.
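The survey's first direction, trajectory prediction via language modeling paradigms, rests on treating a trajectory as a sequence of discrete tokens that a language model can consume and predict. A minimal sketch of one such tokenization is below; the bin size, grid extent, and function names are illustrative assumptions, not taken from any of the surveyed methods.

```python
BIN_SIZE = 0.5        # metres per spatial bin (assumed value)
GRID = 9              # tokens cover a GRID x GRID grid of displacements

def displacement_to_token(dx: float, dy: float) -> int:
    """Quantize a 2-D displacement into one of GRID*GRID motion tokens
    (hypothetical scheme for illustration)."""
    half = GRID // 2
    ix = max(-half, min(half, round(dx / BIN_SIZE))) + half
    iy = max(-half, min(half, round(dy / BIN_SIZE))) + half
    return ix * GRID + iy

def trajectory_to_tokens(points):
    """Turn an (x, y) trajectory into a token sequence, so next-token
    prediction by a language model becomes next-step motion prediction."""
    return [
        displacement_to_token(x2 - x1, y2 - y1)
        for (x1, y1), (x2, y2) in zip(points, points[1:])
    ]

traj = [(0.0, 0.0), (0.5, 0.0), (1.0, 0.5), (1.5, 1.0)]
tokens = trajectory_to_tokens(traj)
```

Once trajectories are expressed in this vocabulary of 81 motion tokens, any autoregressive sequence model can be trained on them exactly as on text; decoding tokens back to displacements reverses the quantization.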
Related papers
- Enhanced Urdu Intent Detection with Large Language Models and Prototype-Informed Predictive Pipelines [5.191443390565865]
This paper introduces a unique contrastive learning approach that leverages unlabeled Urdu data to re-train pre-trained language models. It reaps the combined potential of pre-trained LLMs and the prototype-informed attention mechanism to create an end-to-end intent detection pipeline. Under the paradigm of the proposed predictive pipeline, it explores the potential of 6 distinct language models and 13 distinct similarity computation methods.
arXiv Detail & Related papers (2025-05-08T08:38:40Z) - TrajEvo: Designing Trajectory Prediction Heuristics via LLM-driven Evolution [19.607695535560566]
Trajectory prediction is a crucial task in modeling human behavior. In this paper, we introduce TrajEvo, a framework that leverages Large Language Models (LLMs) to automatically design trajectory prediction heuristics.
arXiv Detail & Related papers (2025-05-07T14:51:43Z) - Probing via Prompting [71.7904179689271]
This paper introduces a novel model-free approach to probing, by formulating probing as a prompting task.
We conduct experiments on five probing tasks and show that our approach is comparable or better at extracting information than diagnostic probes.
We then examine the usefulness of a specific linguistic property for pre-training by removing the heads that are essential to that property and evaluating the resulting model's performance on language modeling.
arXiv Detail & Related papers (2022-07-04T22:14:40Z) - Few-shot Subgoal Planning with Language Models [58.11102061150875]
We show that language priors encoded in pre-trained language models allow us to infer fine-grained subgoal sequences.
In contrast to recent methods which make strong assumptions about subgoal supervision, our experiments show that language models can infer detailed subgoal sequences without any fine-tuning.
arXiv Detail & Related papers (2022-05-28T01:03:30Z) - From Good to Best: Two-Stage Training for Cross-lingual Machine Reading Comprehension [51.953428342923885]
We develop a two-stage approach to enhance the model performance.
The first stage targets recall: we design a hard-learning (HL) algorithm to maximize the likelihood that the top-k predictions contain the accurate answer.
The second stage focuses on precision: an answer-aware contrastive learning mechanism is developed to learn the fine difference between the accurate answer and other candidates.
arXiv Detail & Related papers (2021-12-09T07:31:15Z) - Trajectory Prediction with Linguistic Representations [27.71805777845141]
We present a novel trajectory prediction model that uses linguistic intermediate representations to forecast trajectories.
The model learns the meaning of each of the words without direct per-word supervision.
It generates a linguistic description of trajectories which captures maneuvers and interactions over an extended time interval.
arXiv Detail & Related papers (2021-10-19T05:22:38Z) - Language Models are Few-shot Multilingual Learners [66.11011385895195]
We evaluate the multilingual skills of the GPT and T5 models in conducting multi-class classification on non-English languages.
We show that, given a few English examples as context, pre-trained language models can predict not only English test samples but also non-English ones.
arXiv Detail & Related papers (2021-09-16T03:08:22Z) - Learning to Predict Vehicle Trajectories with Model-based Planning [43.27767693429292]
We introduce a novel framework called PRIME, which stands for Prediction with Model-based Planning.
Unlike recent prediction works that utilize neural networks to model scene context, PRIME is designed to generate accurate and feasibility-guaranteed future trajectory predictions.
Our PRIME outperforms state-of-the-art methods in prediction accuracy, feasibility, and robustness under imperfect tracking.
arXiv Detail & Related papers (2021-03-06T04:49:24Z) - Adversarial Generative Grammars for Human Activity Prediction [141.43526239537502]
We propose an adversarial generative grammar model for future prediction.
Our grammar is designed so that it can learn production rules from the data distribution.
Being able to select multiple production rules during inference leads to different predicted outcomes.
arXiv Detail & Related papers (2020-08-11T17:47:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.