Unlocking the Potential of Large Language Models for Explainable
Recommendations
- URL: http://arxiv.org/abs/2312.15661v3
- Date: Wed, 3 Jan 2024 08:06:51 GMT
- Title: Unlocking the Potential of Large Language Models for Explainable
Recommendations
- Authors: Yucong Luo, Mingyue Cheng, Hao Zhang, Junyu Lu, Qi Liu, Enhong Chen
- Abstract summary: It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be generated.
- Score: 55.29843710657637
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating user-friendly explanations regarding why an item is recommended
has become increasingly common, largely due to advances in language generation
technology, which can enhance user trust and facilitate more informed
decision-making when using online services. However, existing explainable
recommendation systems focus on small language models. It remains uncertain
what impact replacing the explanation generator with recently emerging large
language models (LLMs) would have. Can we expect unprecedented
results?
In this study, we propose LLMXRec, a simple yet effective two-stage
explainable recommendation framework aimed at further boosting the explanation
quality by employing LLMs. Unlike most existing LLM-based recommendation works,
a key characteristic of LLMXRec is its emphasis on the close collaboration
between previous recommender models and LLM-based explanation generators.
Specifically, by adopting several key fine-tuning techniques, including
parameter-efficient instruction tuning and personalized prompt techniques,
controllable and fluent explanations can be generated to achieve the goal of
explainable recommendation. Most notably, we provide three different
perspectives to evaluate the effectiveness of the explanations. Finally, we
conduct extensive experiments over several benchmark recommender models and
publicly available datasets. The experiments not only yield positive results
in terms of effectiveness and efficiency but also uncover some previously
unknown findings. To facilitate further exploration in this area,
the full code and detailed original results are open-sourced at
https://github.com/GodFire66666/LLM_rec_explanation/.
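As a rough illustration of the two-stage pipeline described above (not the authors' released code, which is linked above), the sketch below assumes a conventional recommender has already produced a user's history and a recommended item, and an LLM is then prompted to generate the explanation. The model name, prompt wording, and function names are placeholder assumptions.

```python
# Hedged sketch of the two-stage idea: stage 1 (any recommender model) selects an
# item; stage 2 prompts an LLM with the interaction context to explain the choice.
# "gpt2" is only a stand-in model; prompt wording and helper names are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer


def build_explanation_prompt(user_history, recommended_item):
    """Compose a personalized prompt from the recommender's output."""
    history = ", ".join(user_history)
    return (
        f"The user previously interacted with: {history}.\n"
        f"The system recommends: {recommended_item}.\n"
        "Explain in one sentence why this recommendation fits the user."
    )


def generate_explanation(prompt, model_name="gpt2"):
    """Generate a fluent explanation with a (possibly instruction-tuned) LLM."""
    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(model_name)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=60, do_sample=False)
    # Return only the newly generated tokens, not the echoed prompt.
    return tokenizer.decode(
        outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
    )


if __name__ == "__main__":
    prompt = build_explanation_prompt(["The Matrix", "Blade Runner"], "Ghost in the Shell")
    print(generate_explanation(prompt))
```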
Related papers
- ReasoningRec: Bridging Personalized Recommendations and Human-Interpretable Explanations through LLM Reasoning [15.049688896236821]
This paper presents ReasoningRec, a reasoning-based recommendation framework.
ReasoningRec bridges the gap between recommendations and human-interpretable explanations.
Empirical evaluations demonstrate that ReasoningRec surpasses state-of-the-art methods by up to 12.5% in recommendation prediction.
arXiv Detail & Related papers (2024-10-30T16:37:04Z) - User Preferences for Large Language Model versus Template-Based Explanations of Movie Recommendations: A Pilot Study [0.6965384453064829]
Large language models (LLMs) can generate more resonant explanations for recommender systems.
We conducted a pilot study with 25 participants.
Although subject to high variance, preliminary findings suggest that LLM-based explanations may provide a richer and more engaging user experience.
arXiv Detail & Related papers (2024-09-10T07:51:53Z) - XRec: Large Language Models for Explainable Recommendation [5.615321475217167]
We introduce a model-agnostic framework called XRec, which enables Large Language Models to provide explanations for user behaviors in recommender systems.
Our experiments demonstrate XRec's ability to generate comprehensive and meaningful explanations that outperform baseline approaches in explainable recommender systems.
arXiv Detail & Related papers (2024-06-04T14:55:14Z) - Uncertainty-Aware Explainable Recommendation with Large Language Models [15.229417987212631]
We develop a model that utilizes the ID vectors of user and item inputs as prompts for GPT-2.
We employ a joint training mechanism within a multi-task learning framework to optimize both the recommendation task and explanation task.
Our method achieves 1.59 DIV, 0.57 USR and 0.41 FCR on the Yelp, TripAdvisor and Amazon datasets, respectively.
arXiv Detail & Related papers (2024-01-31T14:06:26Z) - RecExplainer: Aligning Large Language Models for Explaining Recommendation Models [50.74181089742969]
Large language models (LLMs) have demonstrated remarkable intelligence in understanding, reasoning, and instruction following.
This paper presents the initial exploration of using LLMs as surrogate models to explain black-box recommender models.
To facilitate an effective alignment, we introduce three methods: behavior alignment, intention alignment, and hybrid alignment.
arXiv Detail & Related papers (2023-11-18T03:05:43Z) - LLMRec: Benchmarking Large Language Models on Recommendation Task [54.48899723591296]
The application of Large Language Models (LLMs) in the recommendation domain has not been thoroughly investigated.
We benchmark several popular off-the-shelf LLMs on five recommendation tasks, including rating prediction, sequential recommendation, direct recommendation, explanation generation, and review summarization.
The benchmark results indicate that LLMs display only moderate proficiency in accuracy-based tasks such as sequential and direct recommendation.
arXiv Detail & Related papers (2023-08-23T16:32:54Z) - Evaluating and Explaining Large Language Models for Code Using Syntactic
Structures [74.93762031957883]
This paper introduces ASTxplainer, an explainability method specific to Large Language Models for code.
At its core, ASTxplainer provides an automated method for aligning token predictions with AST nodes.
We perform an empirical evaluation on 12 popular LLMs for code using a curated dataset of the most popular GitHub projects.
arXiv Detail & Related papers (2023-08-07T18:50:57Z) - LLM-Rec: Personalized Recommendation via Prompting Large Language Models [62.481065357472964]
Recent advances in large language models (LLMs) have showcased their remarkable ability to harness commonsense knowledge and reasoning.
This study introduces a novel approach, coined LLM-Rec, which incorporates four distinct prompting strategies for text enrichment to improve personalized text-based recommendations.
arXiv Detail & Related papers (2023-07-24T18:47:38Z) - GenRec: Large Language Model for Generative Recommendation [41.22833600362077]
This paper presents an innovative approach to recommendation systems using large language models (LLMs) based on text data.
GenRec uses the LLM's understanding ability to interpret context, learn user preferences, and generate relevant recommendations.
Our research underscores the potential of LLM-based generative recommendation in revolutionizing the domain of recommendation systems.
arXiv Detail & Related papers (2023-07-02T02:37:07Z) - Recommendation as Instruction Following: A Large Language Model
Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
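To make the "recommendation as instruction following" idea in the last entry concrete, here is a minimal, hypothetical template sketch; the field names, wording, and example values are assumptions and do not reproduce the paper's actual 39 templates.

```python
# Hypothetical instruction template verbalizing a user's preference, intention,
# and context for an LLM recommender; all wording here is illustrative.
TEMPLATE = (
    "Preference: the user enjoys {preference}.\n"
    "Intention: the user is currently looking for {intention}.\n"
    "Context: {context}.\n"
    "Task: recommend one item from the candidates [{candidates}] and briefly justify it."
)


def fill_template(preference, intention, context, candidates):
    """Instantiate the template with one user's personalized information."""
    return TEMPLATE.format(
        preference=preference,
        intention=intention,
        context=context,
        candidates=", ".join(candidates),
    )


if __name__ == "__main__":
    print(fill_template(
        preference="science-fiction films with philosophical themes",
        intention="a movie for the weekend",
        context="the user watched and highly rated 'Arrival' last week",
        candidates=["Ghost in the Shell", "Interstellar", "Dune"],
    ))
```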
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.