RecPrompt: A Self-tuning Prompting Framework for News Recommendation Using Large Language Models
- URL: http://arxiv.org/abs/2312.10463v4
- Date: Tue, 22 Oct 2024 12:40:25 GMT
- Authors: Dairui Liu, Boming Yang, Honghui Du, Derek Greene, Neil Hurley, Aonghus Lawlor, Ruihai Dong, Irene Li
- Abstract summary: We introduce RecPrompt, the first self-tuning prompting framework for news recommendation.
We also introduce TopicScore, a novel metric to assess explainability.
- Score: 12.28603831152324
- Abstract: News recommendations heavily rely on Natural Language Processing (NLP) methods to analyze, understand, and categorize content, enabling personalized suggestions based on user interests and reading behaviors. Large Language Models (LLMs) like GPT-4 have shown promising performance in understanding natural language. However, the extent of their applicability to news recommendation systems remains to be validated. This paper introduces RecPrompt, the first self-tuning prompting framework for news recommendation, leveraging the capabilities of LLMs to perform complex news recommendation tasks. This framework incorporates a news recommender and a prompt optimizer that applies an iterative bootstrapping process to enhance recommendations through automatic prompt engineering. Extensive experimental results with 400 users show that RecPrompt can achieve an improvement of 3.36% in AUC, 10.49% in MRR, 9.64% in nDCG@5, and 6.20% in nDCG@10 compared to deep neural models. Additionally, we introduce TopicScore, a novel metric to assess explainability by evaluating the LLM's ability to summarize topics of interest for users. The results show the LLM's effectiveness in accurately identifying topics of interest and delivering comprehensive topic-based explanations.
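The iterative bootstrapping process described in the abstract alternates between a news recommender and a prompt optimizer. The control flow can be sketched as below; this is a minimal, hypothetical illustration, not the authors' implementation. The `recommend` and `optimize_prompt` functions are stand-ins for the paper's two LLM calls (here replaced by a word-overlap ranker and a string refinement), and the MRR-based feedback signal is an assumption based on the metrics the paper reports.

```python
# Hypothetical sketch of RecPrompt-style iterative prompt bootstrapping.
# Both "LLM calls" are stubbed so the loop is runnable end to end.

def recommend(prompt_template, history, candidates):
    # Stub news recommender: rank candidates by word overlap with the
    # user's click history (the real system would prompt an LLM).
    history_words = set(" ".join(history).lower().split())
    def score(c):
        return len(set(c.lower().split()) & history_words)
    return sorted(candidates, key=score, reverse=True)

def evaluate(ranking, clicked):
    # Reciprocal rank of the single clicked item in the ranking (MRR).
    return 1.0 / (ranking.index(clicked) + 1)

def optimize_prompt(prompt_template, feedback):
    # Stub prompt optimizer: the real system would ask an LLM to rewrite
    # the prompt given observed errors; here we append a refinement note.
    return prompt_template + f" [refined: MRR={feedback:.2f}]"

def recprompt_loop(history, candidates, clicked, rounds=3):
    prompt = "Rank these news articles for the user."
    best_mrr = 0.0
    for _ in range(rounds):
        ranking = recommend(prompt, history, candidates)
        mrr = evaluate(ranking, clicked)
        best_mrr = max(best_mrr, mrr)
        prompt = optimize_prompt(prompt, mrr)
    return prompt, best_mrr
```

The key design idea this sketch mirrors is that the prompt itself is the tunable parameter: no model weights change across rounds, only the template fed to the recommender.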
Related papers
- A Prompting-Based Representation Learning Method for Recommendation with Large Language Models [2.1161973970603998]
We introduce the Prompting-Based Representation Learning Method for Recommendation (P4R) to boost the linguistic abilities of Large Language Models (LLMs) in Recommender Systems.
In our P4R framework, we utilize the LLM prompting strategy to create personalized item profiles.
In our evaluation, we compare P4R with state-of-the-art Recommender models and assess the quality of prompt-based profile generation.
arXiv Detail & Related papers (2024-09-25T07:06:14Z) - LANE: Logic Alignment of Non-tuning Large Language Models and Online Recommendation Systems for Explainable Reason Generation [15.972926854420619]
Leveraging large language models (LLMs) offers new opportunities for comprehensive recommendation logic generation.
Fine-tuning LLMs for recommendation tasks incurs high computational costs and alignment issues with existing systems.
In this work, our proposed strategy, LANE, aligns LLMs with online recommendation systems without additional LLM tuning.
arXiv Detail & Related papers (2024-07-03T06:20:31Z) - CherryRec: Enhancing News Recommendation Quality via LLM-driven Framework [4.4206696279087]
We propose a framework for news recommendation using Large Language Models (LLMs) named CherryRec.
CherryRec ensures the quality of recommendations while accelerating the recommendation process.
We validate the effectiveness of the proposed framework by comparing it with state-of-the-art baseline methods on benchmark datasets.
arXiv Detail & Related papers (2024-06-18T03:33:38Z) - Supportiveness-based Knowledge Rewriting for Retrieval-augmented Language Modeling [65.72918416258219]
Supportiveness-based Knowledge Rewriting (SKR) is a robust and pluggable knowledge rewriter inherently optimized for LLM generation.
Based on knowledge supportiveness, we first design a training data curation strategy for our rewriter model.
We then introduce the direct preference optimization (DPO) algorithm to align the generated rewrites to optimal supportiveness.
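For reference, the direct preference optimization (DPO) objective mentioned above is the standard one from Rafailov et al. (2023), stated here for context since the abstract does not spell it out. In SKR's setting, the preferred completion would be the rewrite with higher supportiveness:

```latex
\mathcal{L}_{\mathrm{DPO}}(\pi_\theta;\pi_{\mathrm{ref}})
= -\,\mathbb{E}_{(x,\,y_w,\,y_l)\sim\mathcal{D}}
\left[\log\sigma\!\left(
\beta\log\frac{\pi_\theta(y_w\mid x)}{\pi_{\mathrm{ref}}(y_w\mid x)}
- \beta\log\frac{\pi_\theta(y_l\mid x)}{\pi_{\mathrm{ref}}(y_l\mid x)}
\right)\right]
```

where $y_w$ and $y_l$ are the preferred and dispreferred rewrites, $\pi_{\mathrm{ref}}$ is a frozen reference model, and $\beta$ controls the strength of the preference margin.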
arXiv Detail & Related papers (2024-06-12T11:52:35Z) - News Recommendation with Category Description by a Large Language Model [1.6267479602370543]
News categories, such as tv-golden-globe, finance-real-estate, and news-politics, play an important role in understanding news content.
We propose a novel method that automatically generates informative category descriptions using a large language model.
Our method achieved an improvement of up to 5.8% in AUC compared with baseline approaches.
arXiv Detail & Related papers (2024-05-13T08:53:43Z) - Uncertainty-Aware Explainable Recommendation with Large Language Models [15.229417987212631]
We develop a model that utilizes the ID vectors of user and item inputs as prompts for GPT-2.
We employ a joint training mechanism within a multi-task learning framework to optimize both the recommendation task and explanation task.
Our method achieves 1.59 DIV, 0.57 USR, and 0.41 FCR on the Yelp, TripAdvisor, and Amazon datasets, respectively.
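The joint training mechanism described above is a standard multi-task formulation: one loss for the recommendation task and one for explanation generation, combined with a weighting term. The sketch below is an assumption about that structure (the abstract does not give the exact losses or weight); `bce`, `nll`, and `lam` are illustrative choices, not the paper's definitions.

```python
import math

def bce(pred, label):
    # Binary cross-entropy for the recommendation (click prediction) task.
    eps = 1e-9
    return -(label * math.log(pred + eps) + (1 - label) * math.log(1 - pred + eps))

def nll(token_probs):
    # Mean negative log-likelihood of the generated explanation tokens.
    return -sum(math.log(p) for p in token_probs) / len(token_probs)

def joint_loss(pred, label, token_probs, lam=0.5):
    # Weighted sum of the two task losses; lam is a hypothetical
    # trade-off weight between recommendation and explanation quality.
    return bce(pred, label) + lam * nll(token_probs)
```

Setting `lam` to zero recovers a pure recommender; increasing it shifts capacity toward explanation fluency, which is the trade-off multi-task training is meant to balance.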
arXiv Detail & Related papers (2024-01-31T14:06:26Z) - Unlocking the Potential of Large Language Models for Explainable Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be well generated.
arXiv Detail & Related papers (2023-12-25T09:09:54Z) - LLM-Rec: Personalized Recommendation via Prompting Large Language Models [62.481065357472964]
Recent advances in large language models (LLMs) have showcased their remarkable ability to harness commonsense knowledge and reasoning.
This study introduces a novel approach, coined LLM-Rec, which incorporates four distinct prompting strategies of text enrichment for improving personalized text-based recommendations.
arXiv Detail & Related papers (2023-07-24T18:47:38Z) - Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z) - A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z) - Rethinking the Evaluation for Conversational Recommendation in the Era of Large Language Models [115.7508325840751]
The recent success of large language models (LLMs) has shown great potential to develop more powerful conversational recommender systems (CRSs).
In this paper, we embark on an investigation into the utilization of ChatGPT for conversational recommendation, revealing the inadequacy of the existing evaluation protocol.
We propose an interactive evaluation approach named iEvaLM that harnesses LLM-based user simulators.
arXiv Detail & Related papers (2023-05-22T15:12:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.