Large Language Models Enhanced Sequential Recommendation for Long-tail User and Item
- URL: http://arxiv.org/abs/2405.20646v1
- Date: Fri, 31 May 2024 07:24:42 GMT
- Title: Large Language Models Enhanced Sequential Recommendation for Long-tail User and Item
- Authors: Qidong Liu, Xian Wu, Xiangyu Zhao, Yejing Wang, Zijian Zhang, Feng Tian, Yefeng Zheng
- Abstract summary: The emergence of large language models (LLMs) presents a promising avenue to address these challenges from a semantic standpoint.
In this study, we introduce the Large Language Models Enhancement framework for Sequential Recommendation (LLM-ESR).
Our proposed enhancement framework demonstrates superior performance compared to existing methodologies.
- Score: 58.04939553630209
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sequential recommendation systems (SRS) predict users' subsequent preferences from their past interactions and have been applied across domains such as e-commerce and social networking platforms. In practice, however, SRS face the challenge that most users interact with only a limited number of items, while the majority of items are seldom consumed. These challenges, termed the long-tail user and long-tail item dilemmas, often hinder traditional SRS methods. Mitigating them is crucial, as they can significantly affect user satisfaction and business profitability. While some research has alleviated these issues, it still grapples with problems such as the seesaw effect or noise stemming from the scarcity of interactions. The emergence of large language models (LLMs) presents a promising avenue for addressing these challenges from a semantic standpoint. In this study, we introduce the Large Language Models Enhancement framework for Sequential Recommendation (LLM-ESR), which leverages semantic embeddings from LLMs to enhance SRS performance without increasing computational overhead. To combat the long-tail item challenge, we propose a dual-view modeling approach that fuses semantic information from LLMs with collaborative signals from traditional SRS. To address the long-tail user challenge, we introduce a retrieval-augmented self-distillation technique that refines user preference representations by incorporating richer interaction data from similar users. Comprehensive experiments on three real-world datasets with three widely used SRS models show that our enhancement framework outperforms existing methods.
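The two ideas in the abstract can be illustrated with a minimal sketch: item embeddings fused from a semantic view and a collaborative view, and retrieval of similar users to enrich a sparse user's representation. All names, dimensions, the random stand-in embeddings, and the concatenation-based fusion below are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative assumptions, not from the paper)
n_items, n_users, d_sem, d_collab = 5, 6, 8, 4
sem_emb = rng.normal(size=(n_items, d_sem))        # frozen semantic item embeddings from an LLM
collab_emb = rng.normal(size=(n_items, d_collab))  # trainable collaborative item embeddings
user_vecs = rng.normal(size=(n_users, d_collab))   # user preference representations

# Stand-in projection from the semantic space to the collaborative dimension
W = rng.normal(size=(d_sem, d_collab)) / np.sqrt(d_sem)

def dual_view_item_embedding(item_ids):
    """Fuse the semantic view and the collaborative view by concatenation."""
    sem = sem_emb[item_ids] @ W          # project LLM embeddings down
    collab = collab_emb[item_ids]
    return np.concatenate([sem, collab], axis=-1)

def retrieve_similar_users(vecs, query_idx, k=2):
    """Indices of the k users most similar to the query by cosine similarity."""
    q = vecs[query_idx]
    sims = vecs @ q / (np.linalg.norm(vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    sims[query_idx] = -np.inf            # exclude the query user itself
    return np.argsort(-sims)[:k]

fused = dual_view_item_embedding(np.array([0, 2, 4]))
print(fused.shape)                       # (3, 8): 4 semantic + 4 collaborative dims
print(retrieve_similar_users(user_vecs, 0))
```

In the actual framework, the retrieved similar users would supply teacher signals for self-distillation of long-tail user representations; here the sketch stops at the retrieval step.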
Related papers
- CoRAL: Collaborative Retrieval-Augmented Large Language Models Improve
Long-tail Recommendation [34.29410946387975]
We introduce collaborative retrieval-augmented LLMs, CoRAL, which directly incorporate collaborative evidence into prompts.
LLMs can analyze shared and distinct preferences among users, and summarize the patterns indicating which types of users would be attracted by certain items.
Our experimental results show that CoRAL can significantly improve LLMs' reasoning abilities on specific recommendation tasks.
arXiv Detail & Related papers (2024-03-11T05:49:34Z) - Large Language Models for Intent-Driven Session Recommendations [34.64421003286209]
We introduce a novel ISR approach, utilizing the advanced reasoning capabilities of large language models (LLMs).
We introduce an innovative prompt optimization mechanism that iteratively self-reflects and adjusts prompts.
This new paradigm empowers LLMs to discern diverse user intents at a semantic level, leading to more accurate and interpretable session recommendations.
arXiv Detail & Related papers (2023-12-07T02:25:14Z) - Alleviating the Long-Tail Problem in Conversational Recommender Systems [72.8984755843184]
Conversational recommender systems (CRS) aim to provide the recommendation service via natural language conversations.
Existing CRS datasets suffer from the long-tail issue, i.e., a large proportion of items are rarely (or even never) mentioned in the conversations.
This paper presents LOT-CRS, a novel framework that focuses on simulating and utilizing a balanced CRS dataset.
arXiv Detail & Related papers (2023-07-21T15:28:47Z) - Unlocking the Potential of User Feedback: Leveraging Large Language
Model as User Simulator to Enhance Dialogue System [65.93577256431125]
We propose an alternative approach called User-Guided Response Optimization (UGRO), which pairs an LLM with a smaller task-oriented dialogue model.
This approach uses the LLM as an annotation-free user simulator to assess dialogue responses, combining it with smaller fine-tuned end-to-end TOD models.
Our approach outperforms previous state-of-the-art (SOTA) results.
arXiv Detail & Related papers (2023-06-16T13:04:56Z) - Rethinking the Evaluation for Conversational Recommendation in the Era
of Large Language Models [115.7508325840751]
The recent success of large language models (LLMs) has shown great potential to develop more powerful conversational recommender systems (CRSs).
In this paper, we embark on an investigation into the utilization of ChatGPT for conversational recommendation, revealing the inadequacy of the existing evaluation protocol.
We propose an interactive Evaluation approach based on LLMs named iEvaLM that harnesses LLM-based user simulators.
arXiv Detail & Related papers (2023-05-22T15:12:43Z) - MELT: Mutual Enhancement of Long-Tailed User and Item for Sequential
Recommendation [8.751117923894435]
The long-tailed problem is a long-standing challenge in Sequential Recommender Systems (SRS).
We propose a novel framework for SRS, called Mutual Enhancement of Long-Tailed user and item (MELT).
MELT jointly alleviates the long-tailed problem in the perspectives of both users and items.
arXiv Detail & Related papers (2023-04-17T15:49:34Z) - Sequential Search with Off-Policy Reinforcement Learning [48.88165680363482]
We propose a highly scalable hybrid learning model that consists of an RNN learning framework and an attention model.
As a novel optimization step, we fit multiple short user sequences in a single RNN pass within a training batch, by solving a greedy knapsack problem on the fly.
We also explore the use of off-policy reinforcement learning in multi-session personalized search ranking.
arXiv Detail & Related papers (2022-02-01T06:52:40Z) - Recommender Systems Based on Generative Adversarial Networks: A
Problem-Driven Perspective [27.11589218811911]
Generative adversarial networks (GANs) have garnered increased interest in many fields, owing to their strong capacity to learn complex real data distributions.
In this paper, we propose a taxonomy of these models, along with their detailed descriptions and advantages.
arXiv Detail & Related papers (2020-03-05T08:05:38Z) - Sequential Recommender Systems: Challenges, Progress and Prospects [50.12218578518894]
Sequential recommender systems (SRSs) try to understand and model the sequential user behaviors, the interactions between users and items, and the evolution of users' preferences and item popularity over time.
We first present the characteristics of SRSs, then summarize and categorize the key challenges in this research area, followed by the corresponding research progress consisting of the most recent and representative developments on this topic.
arXiv Detail & Related papers (2019-12-28T05:12:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.