Large Language Models for Intent-Driven Session Recommendations
- URL: http://arxiv.org/abs/2312.07552v1
- Date: Thu, 7 Dec 2023 02:25:14 GMT
- Title: Large Language Models for Intent-Driven Session Recommendations
- Authors: Zhu Sun, Hongyang Liu, Xinghua Qu, Kaidong Feng, Yan Wang, Yew-Soon Ong
- Abstract summary: We introduce a novel ISR approach, utilizing the advanced reasoning capabilities of large language models (LLMs).
We introduce an innovative prompt optimization mechanism that iteratively self-reflects and adjusts prompts.
This new paradigm empowers LLMs to discern diverse user intents at a semantic level, leading to more accurate and interpretable session recommendations.
- Score: 34.64421003286209
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Intent-aware session recommendation (ISR) is pivotal in discerning user
intents within sessions for precise predictions. Traditional approaches,
however, face limitations due to their presumption of a uniform number of
intents across all sessions. This assumption overlooks the dynamic nature of
user sessions, where the number and type of intentions can significantly vary.
In addition, these methods typically operate in latent spaces, which hinders model
transparency. Addressing these challenges, we introduce a novel ISR approach that
utilizes the advanced reasoning capabilities of large language models (LLMs).
This approach begins by generating an initial prompt that
guides LLMs to predict the next item in a session, based on the varied intents
manifested in user sessions. Then, to refine this process, we introduce an
innovative prompt optimization mechanism that iteratively self-reflects and
adjusts prompts. Furthermore, our prompt selection module, built upon the LLMs'
broad adaptability, swiftly selects the most optimized prompts across diverse
domains. This new paradigm empowers LLMs to discern diverse user intents at a
semantic level, leading to more accurate and interpretable session
recommendations. Our extensive experiments on three real-world datasets
demonstrate the effectiveness of our method, marking a significant advancement
in ISR systems.
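The abstract describes three pieces: an initial prompt that asks the LLM to predict the next item from a session's varied intents, an iterative self-reflection loop that critiques and rewrites that prompt, and a selection step that keeps the best prompt per domain. The sketch below illustrates how such a self-reflective loop could be wired up; it is a minimal illustration, not the authors' implementation, and the `LLMFn` callable, the hit-rate scoring, and the fixed number of reflection rounds are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

# Hypothetical LLM call: takes a prompt string and returns the model's text reply.
# Any chat-completion client can be wrapped to match this signature.
LLMFn = Callable[[str], str]


@dataclass
class PromptCandidate:
    text: str           # instruction prompt handed to the LLM
    score: float = 0.0  # validation hit rate of next-item predictions


def evaluate_prompt(prompt: str,
                    sessions: List[Tuple[List[str], str]],
                    llm: LLMFn) -> float:
    """Score a prompt by how often the LLM's reply contains the ground-truth next item."""
    hits = 0
    for history, target in sessions:
        reply = llm(f"{prompt}\nSession (oldest to newest): {', '.join(history)}\nNext item:")
        hits += int(target.lower() in reply.lower())
    return hits / max(len(sessions), 1)


def self_reflective_optimization(initial_prompt: str,
                                 sessions: List[Tuple[List[str], str]],
                                 llm: LLMFn,
                                 rounds: int = 5) -> PromptCandidate:
    """Iteratively ask the LLM to critique and rewrite the prompt, keeping the best scorer."""
    best = PromptCandidate(initial_prompt, evaluate_prompt(initial_prompt, sessions, llm))
    current = best
    for _ in range(rounds):
        revised_text = llm(
            "You write prompts for predicting the next item in a user session whose "
            "intents may vary in number and type.\n"
            f"Current prompt:\n{current.text}\n"
            f"Validation accuracy: {current.score:.2f}\n"
            "Reflect on the prompt's weaknesses, then reply with an improved prompt only."
        )
        current = PromptCandidate(revised_text,
                                  evaluate_prompt(revised_text, sessions, llm))
        if current.score > best.score:
            best = current
    return best
```

Prompt selection across domains would then amount to running this loop once per domain and keeping the highest-scoring candidate, e.g. `max(candidates, key=lambda c: c.score)`.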
Related papers
- Large Language Model Empowered Recommendation Meets All-domain Continual Pre-Training [60.38082979765664]
CPRec is an All-domain Continual Pre-Training framework for Recommendation.
It holistically aligns LLMs with universal user behaviors through the continual pre-training paradigm.
We conduct experiments on five real-world datasets from two distinct platforms.
arXiv Detail & Related papers (2025-04-11T20:01:25Z) - User Feedback Alignment for LLM-powered Exploration in Large-scale Recommendation Systems [26.652050105571206]
Exploration, the act of broadening user experiences beyond their established preferences, is challenging in large-scale recommendation systems.
This paper introduces a novel approach combining hierarchical planning with LLM inference-time scaling to improve recommendation relevancy without compromising novelty.
arXiv Detail & Related papers (2025-04-07T21:44:12Z) - Reason4Rec: Large Language Models for Recommendation with Deliberative User Preference Alignment [69.11529841118671]
We propose a new Deliberative Recommendation task, which incorporates explicit reasoning about user preferences as an additional alignment goal.
We then introduce the Reasoning-powered Recommender framework for deliberative user preference alignment.
arXiv Detail & Related papers (2025-02-04T07:17:54Z) - Few-shot Steerable Alignment: Adapting Rewards and LLM Policies with Neural Processes [50.544186914115045]
Large language models (LLMs) are increasingly embedded in everyday applications.
Ensuring their alignment with the diverse preferences of individual users has become a critical challenge.
We present a novel framework for few-shot steerable alignment.
arXiv Detail & Related papers (2024-12-18T16:14:59Z) - LIBER: Lifelong User Behavior Modeling Based on Large Language Models [42.045535303737694]
We propose Lifelong User Behavior Modeling (LIBER) based on large language models.
LIBER has been deployed on Huawei's music recommendation service, improving users' play count and play time by 3.01% and 7.69%, respectively.
arXiv Detail & Related papers (2024-11-22T03:43:41Z) - MetaAlign: Align Large Language Models with Diverse Preferences during Inference Time [50.41806216615488]
Large Language Models (LLMs) acquire extensive knowledge and remarkable abilities from large text corpora.
To make LLMs more usable, aligning them with human preferences is essential.
We propose an effective method, MetaAlign, which aims to help LLMs dynamically align with various explicit or implicit preferences specified at inference time.
arXiv Detail & Related papers (2024-10-18T05:31:13Z) - Aligning LLMs with Individual Preferences via Interaction [51.72200436159636]
We train large language models (LLMs) that can "interact to align" with individual user preferences.
We develop a multi-turn preference dataset containing 3K+ multi-turn conversations in tree structures.
For evaluation, we establish the ALOE benchmark, consisting of 100 carefully selected examples and well-designed metrics to measure the customized alignment performance during conversations.
arXiv Detail & Related papers (2024-10-04T17:48:29Z) - Adaptive Self-Supervised Learning Strategies for Dynamic On-Device LLM Personalization [3.1944843830667766]
Large language models (LLMs) have revolutionized how we interact with technology, but their personalization to individual user preferences remains a significant challenge.
We present Adaptive Self-Supervised Learning Strategies (ASLS), which utilize self-supervised learning techniques to personalize LLMs dynamically.
arXiv Detail & Related papers (2024-09-25T14:35:06Z) - Harnessing Multimodal Large Language Models for Multimodal Sequential Recommendation [21.281471662696372]
We propose the Multimodal Large Language Model-enhanced Multimodal Sequential Recommendation (MLLM-MSR) model.
To capture dynamic user preferences, we design a two-stage user preference summarization method.
We then employ a recurrent user preference summarization generation paradigm to capture the dynamic changes in user preferences.
arXiv Detail & Related papers (2024-08-19T04:44:32Z) - GANPrompt: Enhancing Robustness in LLM-Based Recommendations with GAN-Enhanced Diversity Prompts [15.920623515602038]
This paper proposes GANPrompt, a multi-dimensional large language model prompt diversity framework based on Generative Adversarial Networks (GANs).
GANPrompt first trains a generator capable of producing diverse prompts by analysing multidimensional user behavioural data.
These diverse prompts are then used to train the LLM to improve its performance in the face of unseen prompts.
arXiv Detail & Related papers (2024-08-19T03:13:20Z) - MMREC: LLM Based Multi-Modal Recommender System [2.3113916776957635]
This paper presents a novel approach to enhancing recommender systems by leveraging Large Language Models (LLMs) and deep learning techniques.
The proposed framework aims to improve the accuracy and relevance of recommendations by incorporating multi-modal information processing and a unified latent space representation.
arXiv Detail & Related papers (2024-08-08T04:31:29Z) - LLM-ESR: Large Language Models Enhancement for Long-tailed Sequential Recommendation [58.04939553630209]
In real-world systems, most users interact with only a handful of items, while the majority of items are seldom consumed.
These two issues, known as the long-tail user and long-tail item challenges, often pose difficulties for existing Sequential Recommendation systems.
We propose the Large Language Models Enhancement framework for Sequential Recommendation (LLM-ESR) to address these challenges.
arXiv Detail & Related papers (2024-05-31T07:24:42Z) - One Token Can Help! Learning Scalable and Pluggable Virtual Tokens for Retrieval-Augmented Large Language Models [67.49462724595445]
Retrieval-augmented generation (RAG) is a promising way to improve large language models (LLMs).
We propose a novel method that involves learning scalable and pluggable virtual tokens for RAG.
arXiv Detail & Related papers (2024-05-30T03:44:54Z) - Multi-view Intent Learning and Alignment with Large Language Models for Session-based Recommendation [26.58882747016846]
Session-based recommendation (SBR) methods often rely on user behavior data and can struggle with the sparsity of session data, limiting performance.
We propose an LLM-enhanced SBR framework that integrates semantic and behavioral signals from multiple views.
In the first stage, we use multi-view prompts to infer latent user intentions at the session semantic level, supported by an intent localization module to alleviate hallucinations.
In the second stage, we align and unify these semantic inferences with behavioral representations, effectively merging insights from both large and small models.
arXiv Detail & Related papers (2024-02-21T14:38:02Z) - Relative Preference Optimization: Enhancing LLM Alignment through Contrasting Responses across Identical and Diverse Prompts [95.09994361995389]
Relative Preference Optimization (RPO) is designed to discern between more and less preferred responses derived from both identical and related prompts.
RPO has demonstrated a superior ability to align large language models with user preferences and to improve their adaptability during the training process.
arXiv Detail & Related papers (2024-02-12T22:47:57Z) - Adaptive In-Context Learning with Large Language Models for Bundle Generation [31.667010709144773]
This paper explores two interrelated tasks, i.e., personalized bundle generation and the underlying intent inference, based on different user sessions.
Inspired by the reasoning capabilities of large language models (LLMs), we propose an adaptive in-context learning paradigm.
Experiments on three real-world datasets demonstrate the effectiveness of our proposed method.
arXiv Detail & Related papers (2023-12-26T08:24:24Z) - Intent Contrastive Learning for Sequential Recommendation [86.54439927038968]
We introduce a latent variable to represent users' intents and learn the distribution function of the latent variable via clustering.
We propose to leverage the learned intents into SR models via contrastive SSL, which maximizes the agreement between a view of a sequence and its corresponding intent.
Experiments conducted on four real-world datasets demonstrate the superiority of the proposed learning paradigm.
arXiv Detail & Related papers (2022-02-05T09:24:13Z)
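The clustering-plus-contrastive recipe in the last entry above can be pictured with a short sketch: cluster session embeddings into a few latent intent prototypes, then apply an InfoNCE-style loss that pulls each sequence toward its nearest prototype and away from the rest. This is a rough, assumption-laden illustration of the idea (plain k-means, cosine similarity, nearest-prototype positives), not the paper's actual training objective.

```python
import numpy as np

rng = np.random.default_rng(0)


def kmeans(x: np.ndarray, k: int, iters: int = 20) -> np.ndarray:
    """Plain k-means returning k intent prototypes (cluster centroids)."""
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        assign = np.argmin(((x[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(assign == j):
                centroids[j] = x[assign == j].mean(axis=0)
    return centroids


def intent_contrastive_loss(seq_emb: np.ndarray,
                            prototypes: np.ndarray,
                            temperature: float = 0.1) -> float:
    """InfoNCE-style loss: pull each sequence toward its nearest intent prototype,
    push it away from the other prototypes."""
    s = seq_emb / np.linalg.norm(seq_emb, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sim = s @ p.T / temperature                       # (n_sequences, k)
    positive = sim[np.arange(len(s)), sim.argmax(1)]  # nearest prototype = assumed intent
    log_z = np.log(np.exp(sim).sum(axis=1))
    return float(np.mean(log_z - positive))


# Toy usage: 128 session embeddings of dimension 16, 4 latent intents.
embeddings = rng.normal(size=(128, 16))
prototypes = kmeans(embeddings, k=4)
print(intent_contrastive_loss(embeddings, prototypes))
```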
This list is automatically generated from the titles and abstracts of the papers on this site.