Path-based summary explanations for graph recommenders -- extended version
- URL: http://arxiv.org/abs/2410.22020v1
- Date: Tue, 29 Oct 2024 13:10:03 GMT
- Title: Path-based summary explanations for graph recommenders -- extended version
- Authors: Danae Pla Karidi, Evaggelia Pitoura
- Abstract summary: We propose summary explanations that highlight why a user or a group of users receives a set of item recommendations.
We also present a novel method to summarize explanations using efficient graph algorithms.
- Abstract: Path-based explanations provide intrinsic insights into graph-based recommendation models. However, most previous work has focused on explaining an individual recommendation of an item to a user. In this paper, we propose summary explanations, i.e., explanations that highlight why a user or a group of users receives a set of item recommendations, and why an item or a group of items is recommended to a set of users, as an effective means of providing insight into the collective behavior of the recommender. We also present a novel method to summarize explanations using efficient graph algorithms, specifically the Steiner Tree and the Prize-Collecting Steiner Tree. Our approach reduces the size and complexity of summary explanations while preserving essential information, making explanations more comprehensible for users and more useful to model developers. Evaluations across multiple metrics demonstrate that our summaries outperform baseline explanation methods in most scenarios and across a variety of quality aspects.
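The paper's exact construction is not reproduced here, but the core idea of connecting a user to a set of recommended items with a small subgraph can be sketched with the classic metric-closure 2-approximation for the Steiner Tree problem. All node names (u1, genre:jazz, i1, i2) and the toy interaction graph are illustrative assumptions, not data from the paper.

```python
from collections import deque
from itertools import combinations

def bfs_path(adj, src, dst):
    """Shortest path between src and dst in an unweighted graph (BFS)."""
    prev = {src: None}
    queue = deque([src])
    while queue:
        u = queue.popleft()
        if u == dst:
            path = [u]
            while prev[u] is not None:
                u = prev[u]
                path.append(u)
            return path[::-1]
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                queue.append(v)
    return None

def approx_steiner_tree(adj, terminals):
    """Classic 2-approximation: take the MST of the terminals' metric
    closure, then expand each closure edge into its shortest path."""
    # Metric closure: a shortest path for every pair of terminals.
    closure = {}
    for a, b in combinations(terminals, 2):
        p = bfs_path(adj, a, b)
        if p:
            closure[(a, b)] = p
    # Kruskal-style MST over closure edges, weighted by path length.
    parent = {t: t for t in terminals}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    summary_edges = set()
    for (a, b), p in sorted(closure.items(), key=lambda kv: len(kv[1])):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            # Expand the closure edge into its underlying graph edges.
            summary_edges.update(zip(p, p[1:]))
    return summary_edges

# Hypothetical interaction graph: user u1, items i1/i2, and a shared
# genre node linking u1's taste to the recommendation i2.
adj = {
    "u1": ["genre:jazz", "i1"],
    "genre:jazz": ["u1", "i2"],
    "i1": ["u1"],
    "i2": ["genre:jazz"],
}
summary = approx_steiner_tree(adj, ["u1", "i1", "i2"])
# The resulting edge set is a compact subgraph connecting the user to
# both recommended items, usable as a single summary explanation.
```

The Prize-Collecting variant used in the paper additionally allows some terminals to be dropped at a penalty, trading coverage for an even smaller summary; the sketch above covers only the plain Steiner Tree case.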
Related papers
- Unlocking the Potential of Large Language Models for Explainable Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be generated.
arXiv Detail & Related papers (2023-12-25T09:09:54Z)
- Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a blackbox fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z)
- Reinforced Path Reasoning for Counterfactual Explainable Recommendation [10.36395995374108]
We propose a novel Counterfactual Explainable Recommendation (CERec) to generate item attribute-based counterfactual explanations.
We reduce the huge search space with an adaptive path sampler by using rich context information of a given knowledge graph.
arXiv Detail & Related papers (2022-07-14T05:59:58Z)
- Graph-based Extractive Explainer for Recommendations [38.278148661173525]
We develop a graph attentive neural network model that seamlessly integrates users, items, attributes, and sentences for extraction-based explanation.
To balance individual sentence relevance, overall attribute coverage, and content redundancy, we solve an integer linear programming problem to make the final selection of sentences.
arXiv Detail & Related papers (2022-02-20T04:56:10Z)
- Comparative Explanations of Recommendations [33.89230323979306]
We develop an extract-and-refine architecture to explain the relative comparisons among a set of ranked items from a recommender system.
We design a new explanation quality metric based on BLEU to guide the end-to-end training of the extraction and refinement components.
arXiv Detail & Related papers (2021-11-01T02:55:56Z)
- Hierarchical Aspect-guided Explanation Generation for Explainable Recommendation [37.36148651206039]
We propose a novel explanation generation framework, named Hierarchical Aspect-guided explanation Generation (HAG).
An aspect-guided graph pooling operator is proposed to extract the aspect-relevant information from the review-based syntax graphs.
Then, a hierarchical explanation decoder is developed to generate aspects and aspect-relevant explanations based on the attention mechanism.
arXiv Detail & Related papers (2021-10-20T03:28:58Z)
- Counterfactual Explainable Recommendation [22.590877963169103]
We propose Counterfactual Explainable Recommendation (CountER), which takes the insights of counterfactual reasoning from causal inference for explainable recommendation.
CountER seeks simple (low complexity) and effective (high strength) explanations for the model decision.
Results show that our model generates more accurate and effective explanations than state-of-the-art explainable recommendation models.
arXiv Detail & Related papers (2021-08-24T06:37:57Z)
- Rating and aspect-based opinion graph embeddings for explainable recommendations [69.9674326582747]
We propose to exploit embeddings extracted from graphs that combine information from ratings and aspect-based opinions expressed in textual reviews.
We then adapt and evaluate state-of-the-art graph embedding techniques over graphs generated from Amazon and Yelp reviews on six domains, outperforming baseline recommenders.
arXiv Detail & Related papers (2021-07-07T14:07:07Z)
- Graphing else matters: exploiting aspect opinions and ratings in explainable graph-based recommendations [66.83527496838937]
We propose to exploit embeddings extracted from graphs that combine information from ratings and aspect-based opinions expressed in textual reviews.
We then adapt and evaluate state-of-the-art graph embedding techniques over graphs generated from Amazon and Yelp reviews on six domains.
Our approach has the advantage of providing explanations which leverage aspect-based opinions given by users about recommended items.
arXiv Detail & Related papers (2021-07-07T13:57:28Z)
- A Survey on Knowledge Graph-Based Recommender Systems [65.50486149662564]
We conduct a systematic survey of knowledge graph-based recommender systems.
We focus on how these papers utilize the knowledge graph for accurate and explainable recommendation.
We introduce the datasets used in these works.
arXiv Detail & Related papers (2020-02-28T02:26:30Z)
- Hybrid Deep Embedding for Recommendations with Dynamic Aspect-Level Explanations [60.78696727039764]
We propose a novel model called Hybrid Deep Embedding (HDE) for aspect-based explainable recommendations.
The main idea of HDE is to learn the dynamic embeddings of users and items for rating prediction.
As the aspect preference/quality of users/items is learned automatically, HDE is able to capture the impact of aspects that are not mentioned in reviews of a user or an item.
arXiv Detail & Related papers (2020-01-18T13:16:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.