Unleashing the Power of Large Language Models for Group POI Recommendations
- URL: http://arxiv.org/abs/2411.13415v1
- Date: Wed, 20 Nov 2024 16:02:14 GMT
- Title: Unleashing the Power of Large Language Models for Group POI Recommendations
- Authors: Jing Long, Liang Qu, Guanhua Ye, Tong Chen, Quoc Viet Hung Nguyen, Hongzhi Yin
- Abstract summary: Group Point-of-Interest (POI) recommendations aim to predict the next POI that satisfies the diverse preferences of a group of users.
Existing methods for group POI recommendations rely on single ID-based features from check-in data.
We propose a framework that unleashes the power of the Large Language Model (LLM) for context-aware group POI recommendations.
- Score: 39.49785677738477
- License:
- Abstract: Group Point-of-Interest (POI) recommendations aim to predict the next POI that satisfies the diverse preferences of a group of users. This task is more challenging than traditional individual POI recommendations due to complex group decision-making and extremely sparse group-level check-in data. Existing methods for group POI recommendations primarily rely on single ID-based features from check-in data, capturing only statistical correlations and failing to fully utilize the rich semantic information contained in the check-ins, resulting in suboptimal performance. To this end, we propose a framework that unleashes the power of the Large Language Model (LLM) for context-aware group POI recommendations (LLMGPR). Our approach first introduces POI tokens alongside the original word tokens of the LLM, which are initialized by applying the LLM to the rich information of each POI. We then propose a novel sequencing adapter guided by Quantized Low-Rank Adaptation (QLORA) to modify the LLM. The enhanced LLM can learn sequence representations by combining semantic-enhanced POI tokens and rich contextual information including positional encodings and spatio-temporal differences. This approach can be adapted for learning either group or user representations depending on the sequence type. Furthermore, we enhance group representations by aggregating individual member representations with another QLORA-based aggregation adapter and introducing a self-supervised learning task that predicts the purpose of check-in sequences, alleviating the data sparsity issue. Our experimental results demonstrate that LLMGPR outperforms existing methods, effectively addressing group-level data sparsity and providing superior recommendations.
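Below is a minimal sketch, not the authors' released code, of two mechanisms the abstract describes: registering semantic-enhanced POI tokens whose embeddings are initialized from the LLM's own encoding of each POI's textual side information, and attaching a QLoRA adapter (4-bit quantized backbone plus trainable low-rank weights) in the role of the sequencing adapter. The backbone model name, the POI descriptions, and the adapter hyperparameters are illustrative assumptions, not values from the paper.
```python
# Sketch of two ideas from the LLMGPR abstract (hypothetical names and settings):
# (1) add one token per POI to the LLM vocabulary and initialize its embedding
#     from the LLM's encoding of the POI's textual description;
# (2) attach a QLoRA adapter so only low-rank weights are trained on check-in sequences.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_name = "meta-llama/Llama-2-7b-hf"  # placeholder backbone, not specified by the paper
tokenizer = AutoTokenizer.from_pretrained(model_name)
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                    # QLoRA: 4-bit quantized base weights
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)

# (1) Register POI tokens and initialize each embedding from the mean-pooled
#     hidden state of the POI's textual side information (hypothetical metadata).
poi_descriptions = {
    "<poi_101>": "Riverside Coffee House, cafe, opens 8am, near the art museum",
    "<poi_102>": "Central Stadium, sports venue, hosts weekend football matches",
}
tokenizer.add_tokens(list(poi_descriptions.keys()))
model.resize_token_embeddings(len(tokenizer))

embedding = model.get_input_embeddings()
with torch.no_grad():
    for poi_token, text in poi_descriptions.items():
        ids = tokenizer(text, return_tensors="pt").input_ids.to(model.device)
        hidden = model(ids, output_hidden_states=True).hidden_states[-1]
        poi_id = tokenizer.convert_tokens_to_ids(poi_token)
        embedding.weight[poi_id] = hidden.mean(dim=1).squeeze(0).to(embedding.weight.dtype)

# (2) Attach a LoRA adapter on top of the quantized backbone (the "sequencing
#     adapter" role); in practice the new POI embedding rows would also be
#     marked trainable, e.g. via LoraConfig's modules_to_save.
model = prepare_model_for_kbit_training(model)
lora_config = LoraConfig(
    r=16, lora_alpha=32, lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # illustrative choice of layers
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```
In the full framework, an analogous QLoRA-based aggregation adapter would combine the resulting member representations into a group representation, trained jointly with the self-supervised task that predicts the purpose of check-in sequences.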
Related papers
- Self-Calibrated Listwise Reranking with Large Language Models [137.6557607279876]
Large language models (LLMs) have been employed in reranking tasks through a sequence-to-sequence approach.
This reranking paradigm requires a sliding window strategy to iteratively handle larger candidate sets.
We propose a novel self-calibrated listwise reranking method, which aims to leverage LLMs to produce global relevance scores for ranking.
arXiv Detail & Related papers (2024-11-07T10:31:31Z) - Token-level Proximal Policy Optimization for Query Generation [45.81132350185301]
State-of-the-art query generation methods leverage Large Language Models (LLMs) for their strong capabilities in context understanding and text generation.
We propose Token-level Proximal Policy Optimization (TPPO), a novel approach designed to empower LLMs to perform better in query generation through fine-tuning.
TPPO is based on the Reinforcement Learning from AI Feedback (RLAIF) paradigm, consisting of a token-level reward model and a token-level proximal policy optimization module.
arXiv Detail & Related papers (2024-11-01T16:36:14Z) - GenCRF: Generative Clustering and Reformulation Framework for Enhanced Intent-Driven Information Retrieval [20.807374287510623]
We propose GenCRF: a Generative Clustering and Reformulation Framework to capture diverse intentions adaptively.
We show that GenCRF achieves state-of-the-art performance, surpassing previous query reformulation SOTAs by up to 12% on nDCG@10.
arXiv Detail & Related papers (2024-09-17T05:59:32Z) - Beyond Inter-Item Relations: Dynamic Adaption for Enhancing LLM-Based Sequential Recommendation [83.87767101732351]
Sequential recommender systems (SRS) predict the next items that users may prefer based on user historical interaction sequences.
Inspired by the rise of large language models (LLMs) in various AI applications, there is a surge of work on LLM-based SRS.
We propose DARec, a sequential recommendation model built on top of coarse-grained adaptation for capturing inter-item relations.
arXiv Detail & Related papers (2024-08-14T10:03:40Z) - Reindex-Then-Adapt: Improving Large Language Models for Conversational Recommendation [50.19602159938368]
Large language models (LLMs) are revolutionizing conversational recommender systems.
We propose a Reindex-Then-Adapt (RTA) framework, which converts multi-token item titles into single tokens within LLMs.
Our framework demonstrates improved accuracy metrics across three different conversational recommendation datasets.
arXiv Detail & Related papers (2024-05-20T15:37:55Z) - P-ICL: Point In-Context Learning for Named Entity Recognition with Large Language Models [7.037794031385439]
In recent years, the rise of large language models (LLMs) has made it possible to directly achieve named entity recognition (NER) without any demonstration samples.
Standard in-context learning (ICL) only helps LLMs understand task instructions, format, and input-label mapping, but neglects the particularity of the NER task itself.
We propose a new prompting framework, P-ICL, to better achieve NER with LLMs, in which point entities are leveraged as auxiliary information to recognize each entity type.
arXiv Detail & Related papers (2024-05-08T11:01:21Z) - Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts [95.09994361995389]
Relative Preference Optimization (RPO) is designed to discern between more and less preferred responses derived from both identical and related prompts.
RPO has demonstrated a superior ability to align large language models with user preferences and to improve their adaptability during the training process.
arXiv Detail & Related papers (2024-02-12T22:47:57Z) - Group Preference Optimization: Few-Shot Alignment of Large Language Models [28.464834028110538]
Group Preference Optimization (GPO) steers language models toward the preferences of individual groups in a few-shot manner.
We empirically validate the efficacy of GPO through rigorous evaluations using large language models of varied sizes.
Our results demonstrate that GPO not only aligns models more accurately but also requires fewer group-specific preferences and less training and inference compute.
arXiv Detail & Related papers (2023-10-17T18:41:57Z) - Guiding Large Language Models via Directional Stimulus Prompting [114.84930073977672]
We introduce Directional Stimulus Prompting, a novel framework for guiding black-box large language models (LLMs) toward specific desired outputs.
Instead of directly adjusting LLMs, our method employs a small tunable policy model to generate an auxiliary directional stimulus prompt for each input instance.
arXiv Detail & Related papers (2023-02-22T17:44:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences arising from its use.