Beyond Retrieval: Generating Narratives in Conversational Recommender Systems
- URL: http://arxiv.org/abs/2410.16780v1
- Date: Tue, 22 Oct 2024 07:53:41 GMT
- Title: Beyond Retrieval: Generating Narratives in Conversational Recommender Systems
- Authors: Krishna Sayana, Raghavendra Vasudeva, Yuri Vasilevski, Kun Su, Liam Hebert, Hubert Pham, Ambarish Jash, Sukhdeep Sodhi
- Abstract summary: We introduce REGEN, a new dataset for natural language generation tasks in conversational recommendation.
We establish benchmarks using well-known generative metrics and perform an automated evaluation of the new dataset using a rater LLM.
To the best of our knowledge, this represents the first attempt to analyze the capabilities of LLMs in understanding recommender signals and generating rich narratives.
- Score: 4.912663905306209
- Abstract: The recent advances in Large Language Models' generation and reasoning capabilities present an opportunity to develop truly conversational recommendation systems. However, effectively integrating recommender system knowledge into LLMs for natural language generation tailored to recommendation tasks remains a challenge. This paper addresses this challenge by making two key contributions. First, we introduce a new dataset (REGEN) for natural language generation tasks in conversational recommendations. REGEN (Reviews Enhanced with GEnerative Narratives) extends the Amazon Product Reviews dataset with rich user narratives, including personalized explanations of product preferences, product endorsements for recommended items, and summaries of user purchase history. REGEN is made publicly available to facilitate further research. Furthermore, we establish benchmarks using well-known generative metrics and perform an automated evaluation of the new dataset using a rater LLM. Second, the paper introduces a fusion architecture (a CF model with an LLM) that serves as a baseline for REGEN; to the best of our knowledge, this represents the first attempt to analyze the capabilities of LLMs in understanding recommender signals and generating rich narratives. We demonstrate that LLMs can effectively learn from simple fusion architectures utilizing interaction-based CF embeddings, and that this can be further enhanced using the metadata and personalization data associated with items. Our experiments show that combining CF and content embeddings leads to improvements of 4-12% in key language metrics compared to using either type of embedding alone. We also provide an analysis to interpret how CF and content embeddings contribute to this new generative task.
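The abstract describes the fusion baseline only at a high level: CF and content embeddings are projected into the LLM's input token space and consumed alongside the text prompt. The sketch below illustrates that general pattern in NumPy; it is not the paper's implementation, and all names and dimensions (`W_cf`, `W_content`, `D_MODEL`, the two-soft-token layout) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

D_CF, D_CONTENT, D_MODEL = 32, 64, 128  # illustrative sizes

# Learned projection matrices (randomly initialized here; in practice
# they would be trained jointly with or alongside the LLM).
W_cf = rng.standard_normal((D_CF, D_MODEL)) * 0.02
W_content = rng.standard_normal((D_CONTENT, D_MODEL)) * 0.02

def fuse_embeddings(cf_emb, content_emb, text_token_embs):
    """Project a CF embedding and a content embedding into the LLM's
    input embedding space and prepend them as two 'soft' tokens in
    front of the ordinary text token embeddings."""
    cf_token = cf_emb @ W_cf                      # (D_MODEL,)
    content_token = content_emb @ W_content       # (D_MODEL,)
    prefix = np.stack([cf_token, content_token])  # (2, D_MODEL)
    return np.concatenate([prefix, text_token_embs], axis=0)

# Hypothetical inputs: a user's interaction-based CF embedding, an
# item content embedding, and a 5-token text prompt.
cf = rng.standard_normal(D_CF)
content = rng.standard_normal(D_CONTENT)
prompt = rng.standard_normal((5, D_MODEL))

seq = fuse_embeddings(cf, content, prompt)
print(seq.shape)  # (7, 128): 2 soft tokens + 5 text tokens
```

The key property is that the recommender signals enter the model through the same embedding space as text, so the LLM can attend over both when generating a narrative.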
Related papers
- Collaborative Retrieval for Large Language Model-based Conversational Recommender Systems [65.75265303064654]
Conversational recommender systems (CRS) aim to provide personalized recommendations via interactive dialogues with users.
Large language models (LLMs) enhance CRS with their superior understanding of context-aware user preferences.
We propose CRAG, Collaborative Retrieval Augmented Generation for LLM-based CRS.
arXiv Detail & Related papers (2025-02-19T22:47:40Z)
- RALLRec: Improving Retrieval Augmented Large Language Model Recommendation with Representation Learning [24.28601381739682]
Large Language Models (LLMs) have been integrated into recommendation systems to enhance user behavior comprehension.
Existing RAG methods rely primarily on textual semantics and often fail to incorporate the most relevant items.
We propose Representation learning for retrieval-Augmented Large Language model Recommendation (RALLRec).
arXiv Detail & Related papers (2025-02-10T02:15:12Z)
- Towards a Unified Paradigm: Integrating Recommendation Systems as a New Language in Large Models [33.02146794292383]
We introduce a new concept, "Integrating Recommendation Systems as a New Language in Large Models" (RSLLM).
RSLLM uses a unique prompting method that combines ID-based item embeddings from conventional recommendation models with textual item features.
It treats users' sequential behaviors as a distinct language and aligns the ID embeddings with the LLM's input space using a projector.
arXiv Detail & Related papers (2024-12-22T09:08:46Z)
- Personalized News Recommendation System via LLM Embedding and Co-Occurrence Patterns [6.4561443264763625]
In news recommendation (NR), systems must comprehend and process a vast amount of clicked news text to infer the probability of candidate news clicks.
In this paper, we propose a novel NR algorithm that reshapes the news model via LLM Embedding and Co-Occurrence Pattern (LECOP).
Extensive experiments demonstrate the superior performance of the proposed method.
arXiv Detail & Related papers (2024-11-09T03:01:49Z)
- A Prompting-Based Representation Learning Method for Recommendation with Large Language Models [2.1161973970603998]
We introduce the Prompting-Based Representation Learning Method for Recommendation (P4R) to boost the linguistic abilities of Large Language Models (LLMs) in Recommender Systems.
In our P4R framework, we utilize the LLM prompting strategy to create personalized item profiles.
In our evaluation, we compare P4R with state-of-the-art Recommender models and assess the quality of prompt-based profile generation.
arXiv Detail & Related papers (2024-09-25T07:06:14Z)
- Beyond Inter-Item Relations: Dynamic Adaption for Enhancing LLM-Based Sequential Recommendation [83.87767101732351]
Sequential recommender systems (SRS) predict the next items that users may prefer based on user historical interaction sequences.
Inspired by the rise of large language models (LLMs) in various AI applications, there is a surge of work on LLM-based SRS.
We propose DARec, a sequential recommendation model built on top of coarse-grained adaption for capturing inter-item relations.
arXiv Detail & Related papers (2024-08-14T10:03:40Z)
- Large Language Models for Data Annotation and Synthesis: A Survey [49.8318827245266]
This survey focuses on the utility of Large Language Models for data annotation and synthesis.
It includes an in-depth taxonomy of data types that LLMs can annotate, a review of learning strategies for models utilizing LLM-generated annotations, and a detailed discussion of the primary challenges and limitations associated with using LLMs for data annotation and synthesis.
arXiv Detail & Related papers (2024-02-21T00:44:04Z)
- CoLLM: Integrating Collaborative Embeddings into Large Language Models for Recommendation [60.2700801392527]
We introduce CoLLM, an innovative LLMRec methodology that seamlessly incorporates collaborative information into LLMs for recommendation.
CoLLM captures collaborative information through an external traditional model and maps it to the input token embedding space of the LLM.
Extensive experiments validate that CoLLM adeptly integrates collaborative information into LLMs, resulting in enhanced recommendation performance.
arXiv Detail & Related papers (2023-10-30T12:25:00Z)
- GenRec: Large Language Model for Generative Recommendation [41.22833600362077]
This paper presents an innovative approach to recommendation systems using large language models (LLMs) based on text data.
GenRec uses the LLM's understanding ability to interpret context, learn user preferences, and generate relevant recommendations.
Our research underscores the potential of LLM-based generative recommendation in revolutionizing the domain of recommendation systems.
arXiv Detail & Related papers (2023-07-02T02:37:07Z)
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLMs for Recommendation (DLLM4Rec) and Generative LLMs for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
- Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models [115.7508325840751]
The recent success of large language models (LLMs) has shown great potential for developing more powerful conversational recommender systems (CRSs).
In this paper, we investigate the use of ChatGPT for conversational recommendation, revealing the inadequacy of the existing evaluation protocol.
We propose iEvaLM, an interactive evaluation approach that harnesses LLM-based user simulators.
arXiv Detail & Related papers (2023-05-22T15:12:43Z)
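The iEvaLM entry above describes evaluation through multi-turn dialogue with an LLM-based user simulator. As a rough illustration of that loop, the toy sketch below replaces both the CRS and the LLM simulator with trivial stand-in functions; every name, the feedback format, and the acceptance rule are hypothetical, and the real approach prompts an actual LLM at the simulator step.

```python
def simulated_user(preferences, recommendation):
    """Stand-in for an LLM-based user simulator: accepts a matching
    recommendation, otherwise returns corrective feedback."""
    if recommendation in preferences:
        return "accept"
    return f"not interested in {recommendation}; I prefer {preferences[0]}"

def interactive_eval(recommender, preferences, max_turns=5):
    """Run a multi-turn dialogue between a recommender and the
    simulator; return the turn at which the recommendation was
    accepted, or None if it never was."""
    feedback = None
    for turn in range(1, max_turns + 1):
        rec = recommender(feedback)
        reply = simulated_user(preferences, rec)
        if reply == "accept":
            return turn
        feedback = reply  # the CRS conditions on the feedback next turn
    return None

def toy_recommender(feedback):
    """Toy CRS: open with a popular item, then parse the simulator's
    stated preference out of its feedback."""
    if feedback is None:
        return "thriller"
    return feedback.split("I prefer ")[-1]

print(interactive_eval(toy_recommender, ["sci-fi", "fantasy"]))  # 2
```

The point of the loop structure is what iEvaLM argues single-shot protocols miss: credit for recovering from a bad first recommendation after interactive feedback.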
This list is automatically generated from the titles and abstracts of the papers on this site.