LLM-Powered Explanations: Unraveling Recommendations Through Subgraph Reasoning
- URL: http://arxiv.org/abs/2406.15859v2
- Date: Sun, 30 Jun 2024 02:13:19 GMT
- Title: LLM-Powered Explanations: Unraveling Recommendations Through Subgraph Reasoning
- Authors: Guangsi Shi, Xiaofeng Deng, Linhao Luo, Lijuan Xia, Lei Bao, Bei Ye, Fei Du, Shirui Pan, Yuxiao Li
- Abstract summary: We introduce a novel recommender that synergizes Large Language Models (LLMs) and Knowledge Graphs (KGs) to enhance recommendation and provide interpretable results.
Our approach significantly enhances both the effectiveness and interpretability of recommender systems.
- Score: 40.53821858897774
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recommender systems are pivotal in enhancing user experiences across various web applications by analyzing the complicated relationships between users and items. Knowledge graphs (KGs) have been widely used to enhance the performance of recommender systems. However, KGs are known to be noisy and incomplete, which makes it hard to provide reliable explanations for recommendation results. An explainable recommender system is crucial for product development and subsequent decision-making. To address these challenges, we introduce a novel recommender that synergizes Large Language Models (LLMs) and KGs to enhance the recommendation and provide interpretable results. Specifically, we first harness the power of LLMs to augment KG reconstruction. LLMs comprehend and decompose user reviews into new triples that are added into the KG. In this way, we can enrich KGs with explainable paths that express user preferences. To enhance the recommendation on augmented KGs, we introduce a novel subgraph reasoning module that effectively measures the importance of nodes and discovers reasoning paths for recommendation. Finally, these reasoning paths are fed into the LLMs to generate interpretable explanations of the recommendation results. Our approach significantly enhances both the effectiveness and interpretability of recommender systems, especially in cross-selling scenarios where traditional methods falter. The effectiveness of our approach has been rigorously tested on four open real-world datasets, with our methods demonstrating superior performance over contemporary state-of-the-art techniques by an average improvement of 12%. The application of our model in the cross-selling recommendation system of a multinational engineering and technology company further underscores its practical utility and potential to redefine recommendation practices through improved accuracy and user trust.
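The three-stage pipeline in the abstract (review decomposition into triples, subgraph node-importance reasoning, explanation generation) can be sketched roughly as follows. This is an illustrative toy, not the paper's implementation: the LLM decomposition is replaced by a stub, node importance by personalized PageRank, and all entity names (`user_1`, `item_A`, etc.) are made up.

```python
# Illustrative sketch (not the paper's code) of the three-stage pipeline:
# (1) review -> new KG triples, (2) subgraph scoring, (3) explanation prompt.
import networkx as nx

def triples_from_review(user, item, review_aspects):
    """Stand-in for the LLM decomposition step: each aspect extracted from
    a review becomes (user, 'prefers', aspect) and (item, 'has', aspect)."""
    triples = []
    for aspect in review_aspects:
        triples.append((user, "prefers", aspect))
        triples.append((item, "has", aspect))
    return triples

# Build a toy KG and augment it with review-derived triples.
kg = nx.DiGraph()
kg.add_edge("user_1", "item_A", relation="purchased")
for h, r, t in triples_from_review("user_1", "item_B", ["low_noise", "durable"]):
    kg.add_edge(h, t, relation=r)

# Subgraph reasoning stand-in: personalized PageRank from the user node
# as a simple node-importance measure over the augmented KG.
scores = nx.pagerank(kg.to_undirected(), personalization={"user_1": 1.0})
best_item = max((n for n in kg if n.startswith("item_")), key=scores.get)

# The reasoning path is what would be fed to the LLM for explanation.
path = nx.shortest_path(kg.to_undirected(), "user_1", best_item)
prompt = f"Explain recommending {best_item} via the path: {' -> '.join(path)}"
```

The point of the sketch is the data flow: review-derived triples create new explainable paths, a graph-importance score selects among them, and the selected path becomes the grounding context for the explanation prompt.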
Related papers
- Incorporate LLMs with Influential Recommender System [34.5820082133773]
Proactive recommender systems recommend a sequence of items to guide user interest toward the target item.
Existing methods struggle to construct a coherent influence path that builds up with items the user is likely to enjoy.
We introduce a novel approach named LLM-based Influence Path Planning (LLM-IPP).
Our approach maintains coherence between consecutive recommendations and enhances user acceptability of the recommended items.
arXiv Detail & Related papers (2024-09-07T13:41:37Z)
- LANE: Logic Alignment of Non-tuning Large Language Models and Online Recommendation Systems for Explainable Reason Generation [15.972926854420619]
Leveraging large language models (LLMs) offers new opportunities for comprehensive recommendation logic generation.
Fine-tuning LLM models for recommendation tasks incurs high computational costs and alignment issues with existing systems.
In this work, we propose LANE, an effective strategy that aligns LLMs with online recommendation systems without additional LLM tuning.
arXiv Detail & Related papers (2024-07-03T06:20:31Z)
- Generative Explore-Exploit: Training-free Optimization of Generative Recommender Systems using LLM Optimizers [29.739736497044664]
We present a training-free approach for optimizing generative recommenders.
We propose a generative explore-exploit method that can not only exploit generated items with high engagement, but also actively explore and discover hidden population preferences.
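The explore-exploit trade-off described here can be illustrated with a minimal epsilon-greedy stand-in. This is not the paper's LLM-optimizer method; the item names and engagement numbers are invented for the example.

```python
# Illustrative explore-exploit loop (epsilon-greedy stand-in; the paper's
# actual method uses an LLM optimizer, which is not reproduced here).
import random

def explore_exploit(candidates, engagement, epsilon=0.2, rng=None):
    """Pick a generated item: mostly exploit the highest-engagement one,
    occasionally explore a random candidate to surface hidden preferences."""
    rng = rng or random.Random()
    if rng.random() < epsilon:
        return rng.choice(candidates)           # explore
    return max(candidates, key=engagement.get)  # exploit

# Hypothetical engagement estimates for three generated items.
engagement = {"playlist_A": 0.9, "playlist_B": 0.4, "playlist_C": 0.1}
pick = explore_exploit(list(engagement), engagement, epsilon=0.2,
                       rng=random.Random(0))
```

With this seed the first draw exceeds epsilon, so the loop exploits and returns the highest-engagement item; over repeated calls the epsilon fraction of explorations is what surfaces under-served preferences.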
arXiv Detail & Related papers (2024-06-07T20:41:59Z)
- XRec: Large Language Models for Explainable Recommendation [5.615321475217167]
We introduce a model-agnostic framework called XRec, which enables Large Language Models to provide explanations for user behaviors in recommender systems.
Our experiments demonstrate XRec's ability to generate comprehensive and meaningful explanations that outperform baseline approaches in explainable recommender systems.
arXiv Detail & Related papers (2024-06-04T14:55:14Z)
- Unlocking the Potential of Large Language Models for Explainable Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be well generated.
arXiv Detail & Related papers (2023-12-25T09:09:54Z)
- Empowering recommender systems using automatically generated Knowledge Graphs and Reinforcement Learning [3.6587485160470226]
We present two knowledge graph-based approaches for personalized article recommendations for a set of customers.
The first approach employs Reinforcement Learning and the second approach uses the XGBoost algorithm for recommending articles.
Both approaches make use of a KG generated from both structured (tabular data) and unstructured data.
arXiv Detail & Related papers (2023-07-11T03:24:54Z)
- Recommender Systems in the Era of Large Language Models (LLMs) [62.0129013439038]
Large Language Models (LLMs) have revolutionized the fields of Natural Language Processing (NLP) and Artificial Intelligence (AI).
We conduct a comprehensive review of LLM-empowered recommender systems from various aspects including Pre-training, Fine-tuning, and Prompting.
arXiv Detail & Related papers (2023-07-05T06:03:40Z)
- A Survey on Large Language Models for Recommendation [77.91673633328148]
Large Language Models (LLMs) have emerged as powerful tools in the field of Natural Language Processing (NLP).
This survey presents a taxonomy that categorizes these models into two major paradigms: Discriminative LLMs for Recommendation (DLLM4Rec) and Generative LLMs for Recommendation (GLLM4Rec).
arXiv Detail & Related papers (2023-05-31T13:51:26Z)
- Recommendation as Instruction Following: A Large Language Model Empowered Recommendation Approach [83.62750225073341]
We consider recommendation as instruction following by large language models (LLMs).
We first design a general instruction format for describing the preference, intention, task form and context of a user in natural language.
Then we manually design 39 instruction templates and automatically generate a large amount of user-personalized instruction data.
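The instruction format above (preference, intention, task form, context) can be pictured with a small template sketch. The field names and wording here are illustrative assumptions, not the paper's actual 39 templates or exact schema.

```python
# Hypothetical instruction template in the spirit of the preference /
# intention / task-form / context fields described above (not the
# paper's actual template set).
TEMPLATE = (
    "The user prefers {preference}. They currently intend to {intention}. "
    "Task: {task_form}. Context: {context}. "
    "Recommend a suitable item."
)

def build_instruction(preference, intention, task_form, context):
    """Fill the template to produce one user-personalized instruction."""
    return TEMPLATE.format(
        preference=preference,
        intention=intention,
        task_form=task_form,
        context=context,
    )

example = build_instruction(
    preference="lightweight travel gear",
    intention="buy a carry-on backpack",
    task_form="pointwise recommendation",
    context="browsing on a mobile app at night",
)
```

Generating many instances of such templates from user histories is what yields the large personalized instruction-tuning corpus the entry describes.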
arXiv Detail & Related papers (2023-05-11T17:39:07Z)
- DSKReG: Differentiable Sampling on Knowledge Graph for Recommendation with Relational GNN [59.160401038969795]
We propose differentiable sampling on Knowledge Graph for Recommendation with GNN (DSKReG).
We devise a differentiable sampling strategy, which enables the selection of relevant items to be jointly optimized with the model training procedure.
The experimental results demonstrate that our model outperforms state-of-the-art KG-based recommender systems.
arXiv Detail & Related papers (2021-08-26T16:19:59Z)
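Differentiable sampling of the kind DSKReG describes is commonly realized with a Gumbel-softmax relaxation; the sketch below shows that relaxation conceptually. DSKReG's exact strategy may differ, and real gradients would come from an autodiff framework rather than NumPy.

```python
# Conceptual sketch of the Gumbel-softmax relaxation often used for
# differentiable neighbor sampling (not necessarily DSKReG's exact form).
import numpy as np

def relaxed_neighbor_sample(scores, tau=0.5, rng=None):
    """Return a relaxed one-hot weighting over neighbor relevance scores.
    Lower tau pushes the weighting closer to a hard (discrete) selection,
    while the softmax keeps it differentiable in the scores."""
    rng = np.random.default_rng(rng)
    gumbel = -np.log(-np.log(rng.uniform(size=scores.shape)))  # Gumbel noise
    logits = (scores + gumbel) / tau
    logits -= logits.max()          # numerical stability
    w = np.exp(logits)
    return w / w.sum()

scores = np.array([2.0, 0.5, -1.0])  # learned relevance of 3 neighbors
weights = relaxed_neighbor_sample(scores, tau=0.5, rng=0)
# weights is a probability vector over neighbors; because it is a smooth
# function of the scores, neighbor selection can be optimized jointly
# with model training, as the entry above describes.
```

This is the sense in which "selection of relevant items" becomes jointly optimizable: the sampling step stays stochastic but admits gradients with respect to the relevance scores.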
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.