Cognitive-Aligned Spatio-Temporal Large Language Models For Next Point-of-Interest Prediction
- URL: http://arxiv.org/abs/2510.14702v1
- Date: Thu, 16 Oct 2025 14:05:28 GMT
- Title: Cognitive-Aligned Spatio-Temporal Large Language Models For Next Point-of-Interest Prediction
- Authors: Penglong Zhai, Jie Li, Fanyi Di, Yue Liu, Yifang Yuan, Jie Huang, Peng Wu, Sicong Wang, Mingyang Yin, Tingting Hu, Yao Xu, Xin Li,
- Abstract summary: Large language models (LLMs) have shown great potential in recommender systems, treating next POI prediction in a generative manner. In industrial-scale POI prediction applications, incorporating world knowledge and alignment with human cognition, such as seasons, weather conditions, holidays, and users' profiles, can enhance the user experience. We propose CoAST, a framework employing natural language as an interface, allowing for the incorporation of world knowledge, spatio-temporal trajectory patterns, profiles, and situational information.
- Score: 22.412601522965144
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The next point-of-interest (POI) recommendation task aims to predict users' immediate next destinations based on their preferences and historical check-ins, holding significant value in location-based services. Recently, large language models (LLMs) have shown great potential in recommender systems, treating next POI prediction in a generative manner. However, these LLMs, pretrained primarily on vast corpora of unstructured text, lack the native understanding of structured geographical entities and sequential mobility patterns required for next POI prediction tasks. Moreover, in industrial-scale POI prediction applications, incorporating world knowledge and alignment with human cognition, such as seasons, weather conditions, holidays, and users' profiles (such as habits, occupation, and preferences), can enhance the user experience while improving recommendation performance. To address these issues, we propose CoAST (Cognitive-Aligned Spatial-Temporal LLMs), a framework employing natural language as an interface, allowing for the incorporation of world knowledge, spatio-temporal trajectory patterns, profiles, and situational information. Specifically, CoAST comprises two stages: (1) Recommendation Knowledge Acquisition through continued pretraining on enriched spatio-temporal trajectory data of desensitized users; (2) Cognitive Alignment to align cognitive judgments with human preferences using enriched training data through Supervised Fine-Tuning (SFT) and a subsequent Reinforcement Learning (RL) phase. Extensive offline experiments on various real-world datasets and online experiments deployed in "Guess Where You Go" on the AMAP App homepage demonstrate the effectiveness of CoAST.
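The abstract's core idea, using natural language as the interface so that trajectories, profiles, and situational signals can all feed one LLM, can be illustrated with a minimal sketch. The field names and prompt template below are illustrative assumptions, not the paper's actual serialization format.

```python
# Hedged sketch: serialize check-in history plus profile and situational
# context (season, weather, holiday) into a single natural-language prompt,
# as a next-POI-prediction LLM interface might. All names are hypothetical.

def build_poi_prompt(checkins, profile, context):
    """Turn structured check-ins and context into one prompt string."""
    history = "; ".join(
        f"{c['time']} at {c['poi']} ({c['category']})" for c in checkins
    )
    return (
        f"User profile: {profile}. "
        f"Context: season={context['season']}, weather={context['weather']}, "
        f"holiday={context['holiday']}. "
        f"Recent check-ins: {history}. "
        "Predict the next point of interest."
    )

prompt = build_poi_prompt(
    checkins=[
        {"time": "Sat 09:10", "poi": "Riverside Gym", "category": "fitness"},
        {"time": "Sat 11:30", "poi": "Green Cafe", "category": "coffee shop"},
    ],
    profile="enjoys outdoor sports, works weekdays downtown",
    context={"season": "autumn", "weather": "sunny", "holiday": "none"},
)
print(prompt)
```

Under CoAST's two-stage recipe, prompts like this would feed continued pretraining first, then SFT and RL for cognitive alignment.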
Related papers
- LPO: Towards Accurate GUI Agent Interaction via Location Preference Optimization [58.65395773049273]
Location Preference Optimization (LPO) is a novel approach that leverages locational data to optimize interaction preferences.
LPO uses information entropy to predict interaction positions by focusing on zones rich in information.
Our code will be made publicly available soon, at https://github.com/AIDC-AI/LPO.
arXiv Detail & Related papers (2025-06-11T03:43:30Z) - Large Language Model Empowered Recommendation Meets All-domain Continual Pre-Training [60.38082979765664]
CPRec is an All-domain Continual Pre-Training framework for Recommendation.
It holistically aligns LLMs with universal user behaviors through the continual pre-training paradigm.
We conduct experiments on five real-world datasets from two distinct platforms.
arXiv Detail & Related papers (2025-04-11T20:01:25Z) - A Survey of Direct Preference Optimization [103.59317151002693]
Large Language Models (LLMs) have demonstrated unprecedented generative capabilities.
Their alignment with human values remains critical for ensuring helpful and harmless deployments.
Direct Preference Optimization (DPO) has recently gained prominence as a streamlined alternative.
arXiv Detail & Related papers (2025-03-12T08:45:15Z) - LLM Post-Training: A Deep Dive into Reasoning Large Language Models [131.10969986056]
Large Language Models (LLMs) have transformed the natural language processing landscape and brought to life diverse applications.
Post-training methods enable LLMs to refine their knowledge, improve reasoning, enhance factual accuracy, and align more effectively with user intents and ethical considerations.
arXiv Detail & Related papers (2025-02-28T18:59:54Z) - Where to Move Next: Zero-shot Generalization of LLMs for Next POI Recommendation [28.610190512686767]
Next Point-of-interest (POI) recommendation provides valuable suggestions for users to explore their surrounding environment.
Existing studies rely on building recommendation models from large-scale users' check-in data.
Recently, the pretrained large language models (LLMs) have achieved significant advancements in various NLP tasks.
arXiv Detail & Related papers (2024-04-02T11:33:04Z) - Self-supervised Graph-based Point-of-interest Recommendation [66.58064122520747]
Next Point-of-Interest (POI) recommendation has become a prominent component in location-based e-commerce.
We propose a Self-supervised Graph-enhanced POI Recommender (S2GRec) for next POI recommendation.
In particular, we devise a novel Graph-enhanced Self-attentive layer to incorporate the collaborative signals from both global transition graph and local trajectory graphs.
arXiv Detail & Related papers (2022-10-22T17:29:34Z) - Exploiting Bi-directional Global Transition Patterns and Personal Preferences for Missing POI Category Identification [37.025295828186955]
We propose a novel neural network approach to identify the missing POI categories.
Specifically, we design an attention matching cell to model how well the check-in category information matches their non-personal transition patterns and personal preferences.
Our model can be naturally extended to address next POI category recommendation and prediction tasks with competitive performance.
arXiv Detail & Related papers (2021-12-31T04:15:37Z) - Modelling of Bi-directional Spatio-Temporal Dependence and Users' Dynamic Preferences for Missing POI Check-in Identification [38.51964956686177]
We develop a model, named Bi-STDDP, which can integrate bi-directional spatio-temporal dependence and users' dynamic preferences.
Results demonstrate significant improvements of our model compared with state-of-the-art methods.
arXiv Detail & Related papers (2021-12-31T03:54:37Z) - Unified Instance and Knowledge Alignment Pretraining for Aspect-based Sentiment Analysis [96.53859361560505]
Aspect-based Sentiment Analysis (ABSA) aims to determine the sentiment polarity towards an aspect.
There always exists severe domain shift between the pretraining and downstream ABSA datasets.
We introduce a unified alignment pretraining framework into the vanilla pretrain-finetune pipeline.
arXiv Detail & Related papers (2021-10-26T04:03:45Z) - Joint Geographical and Temporal Modeling based on Matrix Factorization for Point-of-Interest Recommendation [6.346772579930929]
Point-of-Interest (POI) recommendation has become an important task, which learns the users' preferences and mobility patterns to recommend POIs.
Previous studies show that incorporating contextual information such as geographical and temporal influences is necessary to improve POI recommendation.
arXiv Detail & Related papers (2020-01-24T12:25:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.