Hierarchical Aspect-guided Explanation Generation for Explainable
Recommendation
- URL: http://arxiv.org/abs/2110.10358v1
- Date: Wed, 20 Oct 2021 03:28:58 GMT
- Title: Hierarchical Aspect-guided Explanation Generation for Explainable
Recommendation
- Authors: Yidan Hu, Yong Liu, Chunyan Miao, Gongqi Lin, Yuan Miao
- Abstract summary: We propose a novel explanation generation framework, named Hierarchical Aspect-guided explanation Generation (HAG).
An aspect-guided graph pooling operator is proposed to extract the aspect-relevant information from the review-based syntax graphs.
Then, a hierarchical explanation decoder is developed to generate aspects and aspect-relevant explanations based on the attention mechanism.
- Score: 37.36148651206039
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable recommendation systems provide explanations for recommendation
results to improve their transparency and persuasiveness. Existing
explainable recommendation methods generate textual explanations without
explicitly considering the user's preferences for different aspects of the item.
In this paper, we propose a novel explanation generation framework, named
Hierarchical Aspect-guided explanation Generation (HAG), for explainable
recommendation. Specifically, HAG employs a review-based syntax graph to
provide a unified view of the user/item details. An aspect-guided graph pooling
operator is proposed to extract the aspect-relevant information from the
review-based syntax graphs and thus model the user's preferences for an item at
the aspect level. Then, a hierarchical explanation decoder is developed to generate
aspects and aspect-relevant explanations based on the attention mechanism. The
experimental results on three real-world datasets indicate that HAG outperforms
state-of-the-art explanation generation methods in both single-aspect and
multi-aspect explanation generation tasks, and achieves comparable or even
better preference prediction accuracy than strong baseline methods.
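The page itself carries no code; purely as an illustration of the aspect-guided graph pooling idea described in the abstract, the PyTorch sketch below scores syntax-graph node states against an aspect embedding, keeps the top-scoring nodes, and reads them out into a single aspect-level preference vector. The class name `AspectGuidedPooling`, the bilinear scoring function, and the top-k/attention readout are all assumptions made for this sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AspectGuidedPooling(nn.Module):
    """Hypothetical sketch of aspect-guided graph pooling: score each
    syntax-graph node against an aspect embedding, retain the top-k
    aspect-relevant nodes, and pool them into one preference vector.
    Illustrative only; not the paper's implementation."""

    def __init__(self, hidden_dim: int, keep_ratio: float = 0.5):
        super().__init__()
        # assumed bilinear node-aspect relevance score
        self.score = nn.Bilinear(hidden_dim, hidden_dim, 1)
        self.keep_ratio = keep_ratio

    def forward(self, node_states: torch.Tensor, aspect_emb: torch.Tensor) -> torch.Tensor:
        # node_states: (num_nodes, hidden_dim), e.g. GNN outputs over the
        # review-based syntax graph; aspect_emb: (hidden_dim,)
        n = node_states.size(0)
        aspect = aspect_emb.expand(n, -1)
        scores = self.score(node_states, aspect).squeeze(-1)  # (num_nodes,)
        k = max(1, int(n * self.keep_ratio))
        top_scores, idx = scores.topk(k)                      # keep aspect-relevant nodes
        gated = node_states[idx] * torch.sigmoid(top_scores).unsqueeze(-1)
        attn = F.softmax(top_scores, dim=0).unsqueeze(-1)     # attention readout
        return (attn * gated).sum(dim=0)                      # (hidden_dim,)

# Toy usage: 12 syntax-graph nodes, one aspect (e.g. "battery life")
pool = AspectGuidedPooling(hidden_dim=64)
aspect_vec = pool(torch.randn(12, 64), torch.randn(64))
print(aspect_vec.shape)  # torch.Size([64])
```

In the full HAG pipeline, the hierarchical decoder would then condition on vectors like this one, first generating an aspect and then attending over the pooled states to produce the aspect-relevant explanation text.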
Related papers
- Path-based summary explanations for graph recommenders -- extended version [2.2789818122188925]
We propose summary explanations that highlight why a user or a group of users receives a set of item recommendations.
We also present a novel method to summarize explanations using efficient graph algorithms.
arXiv Detail & Related papers (2024-10-29T13:10:03Z)
- Understanding Before Recommendation: Semantic Aspect-Aware Review Exploitation via Large Language Models [53.337728969143086]
Recommendation systems harness user-item interactions like clicks and reviews to learn their representations.
Previous studies improve recommendation accuracy and interpretability by modeling user preferences across various aspects and intents.
We introduce a chain-based prompting approach to uncover semantic aspect-aware interactions.
arXiv Detail & Related papers (2023-12-26T15:44:09Z)
- Unlocking the Potential of Large Language Models for Explainable Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be well generated.
arXiv Detail & Related papers (2023-12-25T09:09:54Z)
- Explainable Recommender with Geometric Information Bottleneck [25.703872435370585]
We propose to incorporate a geometric prior learnt from user-item interactions into a variational network.
Latent factors from an individual user-item pair can be used for both recommendation and explanation generation.
Experimental results on three e-commerce datasets show that our model significantly improves the interpretability of a variational recommender.
arXiv Detail & Related papers (2023-05-09T10:38:36Z)
- UCEpic: Unifying Aspect Planning and Lexical Constraints for Generating Explanations in Recommendation [26.307290414735643]
We propose a model, UCEpic, that generates high-quality personalized explanations for recommendation results.
UCEpic unifies aspect planning and lexical constraints into one framework and generates explanations under different settings.
Compared to previous recommendation explanation generators controlled by only aspects, UCEpic incorporates specific information from keyphrases.
arXiv Detail & Related papers (2022-09-28T07:33:50Z)
- Reinforced Path Reasoning for Counterfactual Explainable Recommendation [10.36395995374108]
We propose a novel Counterfactual Explainable Recommendation (CERec) to generate item attribute-based counterfactual explanations.
We reduce the huge search space with an adaptive path sampler that exploits the rich context information of a given knowledge graph.
arXiv Detail & Related papers (2022-07-14T05:59:58Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines, named X-MOP, that help select the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- Generate Natural Language Explanations for Recommendation [25.670144526037134]
We propose to generate free-text natural language explanations for personalized recommendation.
In particular, we propose a hierarchical sequence-to-sequence model (HSS) for personalized explanation generation.
arXiv Detail & Related papers (2021-01-09T17:00:41Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Hybrid Deep Embedding for Recommendations with Dynamic Aspect-Level Explanations [60.78696727039764]
We propose a novel model called Hybrid Deep Embedding (HDE) for aspect-based explainable recommendations.
The main idea of HDE is to learn the dynamic embeddings of users and items for rating prediction.
As the aspect preference/quality of users/items is learned automatically, HDE is able to capture the impact of aspects that are not mentioned in reviews of a user or an item.
arXiv Detail & Related papers (2020-01-18T13:16:32Z)