Uncertainty-Aware Explainable Recommendation with Large Language Models
- URL: http://arxiv.org/abs/2402.03366v1
- Date: Wed, 31 Jan 2024 14:06:26 GMT
- Title: Uncertainty-Aware Explainable Recommendation with Large Language Models
- Authors: Yicui Peng, Hao Chen, Chingsheng Lin, Guo Huang, Jinrong Hu, Hui Guo,
Bin Kong, Shu Hu, Xi Wu, and Xin Wang
- Abstract summary: We develop a model that utilizes the ID vectors of user and item inputs as prompts for GPT-2.
We employ a joint training mechanism within a multi-task learning framework to optimize both the recommendation task and explanation task.
Our method achieves 1.59 DIV, 0.57 USR, and 0.41 FCR on the Yelp, TripAdvisor, and Amazon datasets, respectively.
- Score: 15.229417987212631
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Providing explanations within the recommendation system would boost user
satisfaction and foster trust, especially by elaborating on the reasons for
selecting recommended items tailored to the user. The predominant approach in
this domain revolves around generating text-based explanations, with a notable
emphasis on applying large language models (LLMs). However, refining LLMs for
explainable recommendations proves impractical due to time constraints and
computing resource limitations. As an alternative, the current approach
involves training the prompt rather than the LLM. In this study, we developed a
model that utilizes the ID vectors of user and item inputs as prompts for
GPT-2. We employed a joint training mechanism within a multi-task learning
framework to optimize both the recommendation task and explanation task. This
strategy enables a more effective exploration of users' interests, improving
recommendation effectiveness and user satisfaction. In our experiments, our
method achieves 1.59 DIV, 0.57 USR, and 0.41 FCR on the Yelp, TripAdvisor,
and Amazon datasets respectively, demonstrating superior performance over four
SOTA methods on explainability evaluation metrics. In addition, we found that
the proposed model maintains stable textual quality across the three public
datasets.
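To make the setup above concrete, here is a minimal sketch, assuming PyTorch and Hugging Face transformers, of the idea the abstract describes: user and item ID embeddings act as continuous prompt tokens prepended to GPT-2, with GPT-2 itself frozen (training the prompt rather than the LLM), and a joint objective covering both the rating task and the explanation task. This is not the authors' code; the class name PromptRec, the rating head, and the trade-off weight lam are illustrative assumptions.

```python
import torch
import torch.nn as nn
from transformers import GPT2LMHeadModel


class PromptRec(nn.Module):
    """Hypothetical sketch: user/item ID vectors as continuous prompts for GPT-2."""

    def __init__(self, n_users: int, n_items: int, gpt2_name: str = "gpt2"):
        super().__init__()
        self.gpt2 = GPT2LMHeadModel.from_pretrained(gpt2_name)
        d = self.gpt2.config.n_embd  # 768 for base GPT-2
        # Train the prompt rather than the LLM, as the abstract describes
        for p in self.gpt2.parameters():
            p.requires_grad = False
        # ID vectors, each used as one soft prompt token
        self.user_emb = nn.Embedding(n_users, d)
        self.item_emb = nn.Embedding(n_items, d)
        # Assumed rating head for the recommendation task
        self.rating_head = nn.Sequential(
            nn.Linear(2 * d, d), nn.ReLU(), nn.Linear(d, 1)
        )

    def forward(self, user_ids, item_ids, exp_ids, exp_mask):
        u = self.user_emb(user_ids)  # (B, d)
        i = self.item_emb(item_ids)  # (B, d)
        # Recommendation task: predict a rating from the two ID vectors
        rating = self.rating_head(torch.cat([u, i], dim=-1)).squeeze(-1)
        # Explanation task: prepend the ID vectors as soft prompt tokens
        tok = self.gpt2.transformer.wte(exp_ids)  # (B, T, d)
        inputs = torch.cat([u.unsqueeze(1), i.unsqueeze(1), tok], dim=1)
        ones = torch.ones(exp_ids.size(0), 2,
                          dtype=exp_mask.dtype, device=exp_mask.device)
        attn = torch.cat([ones, exp_mask], dim=1)
        # Ignore the two prompt positions and padding in the LM loss (-100)
        ignore = torch.full((exp_ids.size(0), 2), -100,
                            dtype=torch.long, device=exp_ids.device)
        labels = torch.cat([ignore, exp_ids.masked_fill(exp_mask == 0, -100)], dim=1)
        lm_loss = self.gpt2(inputs_embeds=inputs,
                            attention_mask=attn, labels=labels).loss
        return rating, lm_loss


# Joint multi-task training; lam is an assumed trade-off hyperparameter:
#   rating, lm_loss = model(user_ids, item_ids, exp_ids, exp_mask)
#   loss = torch.nn.functional.mse_loss(rating, true_rating) + lam * lm_loss
```

Freezing GPT-2 keeps the trainable parameters limited to the ID embeddings and the rating head, which matches the abstract's point that full LLM fine-tuning is impractical under time and compute constraints.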
Related papers
- A Prompting-Based Representation Learning Method for Recommendation with Large Language Models [2.1161973970603998]
We introduce the Prompting-Based Representation Learning Method for Recommendation (P4R) to boost the linguistic abilities of Large Language Models (LLMs) in Recommender Systems.
In our P4R framework, we utilize the LLM prompting strategy to create personalized item profiles.
In our evaluation, we compare P4R with state-of-the-art Recommender models and assess the quality of prompt-based profile generation.
arXiv Detail & Related papers (2024-09-25T07:06:14Z) - CURE4Rec: A Benchmark for Recommendation Unlearning with Deeper Influence [55.21518669075263]
CURE4Rec is the first comprehensive benchmark for recommendation unlearning evaluation.
We consider the deeper influence of unlearning on recommendation fairness and robustness towards data with varying impact levels.
arXiv Detail & Related papers (2024-08-26T16:21:50Z) - LLM-Powered Explanations: Unraveling Recommendations Through Subgraph Reasoning [40.53821858897774]
We introduce a novel recommender that synergizes Large Language Models (LLMs) and Knowledge Graphs (KGs) to enhance recommendations and provide interpretable results.
Our approach significantly enhances both the effectiveness and interpretability of recommender systems.
arXiv Detail & Related papers (2024-06-22T14:14:03Z) - Unlocking the Potential of Large Language Models for Explainable
Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
With several key fine-tuning techniques, it generates controllable and fluent explanations.
arXiv Detail & Related papers (2023-12-25T09:09:54Z) - RecPrompt: A Self-tuning Prompting Framework for News Recommendation Using Large Language Models [12.28603831152324]
We introduce RecPrompt, the first self-tuning prompting framework for news recommendation.
We also introduce TopicScore, a novel metric to assess explainability.
arXiv Detail & Related papers (2023-12-16T14:42:46Z) - Query-Dependent Prompt Evaluation and Optimization with Offline Inverse
RL [62.824464372594576]
We aim to enhance the arithmetic reasoning ability of Large Language Models (LLMs) through zero-shot prompt optimization.
We identify a previously overlooked objective of query dependency in such optimization.
We introduce Prompt-OIRL, which harnesses offline inverse reinforcement learning to draw insights from offline prompting demonstration data.
arXiv Detail & Related papers (2023-09-13T01:12:52Z) - RecMind: Large Language Model Powered Agent For Recommendation [16.710558148184205]
RecMind is an autonomous recommender agent with careful planning for zero-shot personalized recommendations.
Our experiment shows that RecMind outperforms existing zero/few-shot LLM-based recommendation baseline methods in various tasks.
arXiv Detail & Related papers (2023-08-28T04:31:04Z) - LLM-Rec: Personalized Recommendation via Prompting Large Language Models [62.481065357472964]
Recent advances in large language models (LLMs) have showcased their remarkable ability to harness commonsense knowledge and reasoning.
This study introduces a novel approach, coined LLM-Rec, which incorporates four distinct prompting strategies of text enrichment for improving personalized text-based recommendations.
arXiv Detail & Related papers (2023-07-24T18:47:38Z) - A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLM for Recommendation (DLLM4Rec) and Generative LLM for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z) - Recommendation as Instruction Following: A Large Language Model
Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form, and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data (a hypothetical template in this spirit is sketched after this entry).
arXiv Detail & Related papers (2023-05-11T17:39:07Z)