Explainable Recommender with Geometric Information Bottleneck
- URL: http://arxiv.org/abs/2305.05331v2
- Date: Fri, 5 Jan 2024 22:02:25 GMT
- Title: Explainable Recommender with Geometric Information Bottleneck
- Authors: Hanqi Yan, Lin Gui, Menghan Wang, Kun Zhang, Yulan He
- Abstract summary: We propose to incorporate a geometric prior learnt from user-item interactions into a variational network.
Latent factors from an individual user-item pair can be used for both recommendation and explanation generation.
Experimental results on three e-commerce datasets show that our model significantly improves the interpretability of a variational recommender.
- Score: 25.703872435370585
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable recommender systems can explain their recommendation decisions,
enhancing user trust in the systems. Most explainable recommender systems
either rely on human-annotated rationales to train models for explanation
generation or leverage the attention mechanism to extract important text spans
from reviews as explanations. The extracted rationales are often confined to an
individual review and may fail to identify the implicit features beyond the
review text. To avoid the expensive human annotation process and to generate
explanations beyond individual reviews, we propose to incorporate a geometric
prior learnt from user-item interactions into a variational network which
infers latent factors from user-item reviews. The latent factors from an
individual user-item pair, which naturally inherit the global characteristics
encoded in the prior knowledge, can be used for both recommendation and
explanation generation. Experimental results on three e-commerce datasets show that
our model significantly improves the interpretability of a variational
recommender using the Wasserstein distance while achieving performance
comparable to existing content-based recommender systems in terms of
recommendation behaviours.
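The abstract describes the mechanism but no implementation, so here is a minimal sketch of the idea, assuming a diagonal-Gaussian geometric prior pre-computed from user-item interactions and a closed-form squared 2-Wasserstein regularizer. All names (GeoVarRecommender, prior_mu, etc.) are illustrative, not the authors' code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaussian_w2(mu_q, std_q, mu_p, std_p):
    """Squared 2-Wasserstein distance between diagonal Gaussians (closed form)."""
    return ((mu_q - mu_p) ** 2).sum(-1) + ((std_q - std_p) ** 2).sum(-1)

class GeoVarRecommender(nn.Module):
    """Illustrative variational recommender; not the authors' implementation."""
    def __init__(self, review_dim, latent_dim, prior_mu, prior_std):
        super().__init__()
        self.enc = nn.Linear(review_dim, 2 * latent_dim)  # infers q(z | review)
        self.rate = nn.Linear(latent_dim, 1)              # rating head
        # Geometric prior, assumed pre-computed from user-item interactions.
        self.register_buffer("prior_mu", prior_mu)
        self.register_buffer("prior_std", prior_std)

    def forward(self, review_feats):
        mu, log_std = self.enc(review_feats).chunk(2, dim=-1)
        std = log_std.exp()
        z = mu + std * torch.randn_like(std)              # reparameterization trick
        return self.rate(z).squeeze(-1), mu, std

def loss_fn(model, review_feats, ratings, lam=0.1):
    pred, mu, std = model(review_feats)
    rec = F.mse_loss(pred, ratings)                       # recommendation loss
    # Pull the per-pair posterior toward the interaction-derived prior.
    geo = gaussian_w2(mu, std, model.prior_mu, model.prior_std).mean()
    return rec + lam * geo
```

For diagonal Gaussians the squared 2-Wasserstein distance has the closed form above, which keeps the regularizer differentiable and cheap; the paper's actual prior construction and distance computation may differ.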
Related papers
- Learning Recommender Systems with Soft Target: A Decoupled Perspective [49.83787742587449]
We propose a novel decoupled soft-label optimization framework that treats the objective as two aspects by leveraging soft labels.
We present a soft-label generation algorithm based on label propagation, which explores users' latent interests in unobserved feedback via their neighbors.
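As a rough illustration of the label-propagation idea (a hypothetical reading of the summary, not the paper's exact algorithm): observed feedback is diffused over a row-normalized user-neighbor graph so that unobserved items receive soft labels from similar users. `adj`, `alpha`, and `iters` are assumed inputs.

```python
import numpy as np

def propagate_soft_labels(adj, hard_labels, alpha=0.8, iters=10):
    """Diffuse observed feedback over a user-neighbor graph. adj: (users, users)
    similarity matrix; hard_labels: (users, items) observed feedback in {0, 1}."""
    P = adj / np.clip(adj.sum(axis=1, keepdims=True), 1e-12, None)  # row-stochastic
    soft = hard_labels.astype(float)
    for _ in range(iters):
        # Blend neighbors' labels with the originally observed labels.
        soft = alpha * (P @ soft) + (1 - alpha) * hard_labels
    return soft

# Tiny usage example: 3 users, 2 items; users 0 and 2 share neighbor 1.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]], dtype=float)
labels = np.array([[1, 0], [0, 0], [0, 1]], dtype=float)
print(propagate_soft_labels(adj, labels))
```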
arXiv Detail & Related papers (2024-10-09T04:20:15Z)
- Aligning Explanations for Recommendation with Rating and Feature via Maximizing Mutual Information [29.331050754362803]
Current explanation generation methods are commonly trained with an objective to mimic existing user reviews.
We propose a flexible, model-agnostic method, the MMI framework, to enhance the alignment between generated natural language explanations and the predicted rating/important item features.
Our MMI framework can boost different backbone models, enabling them to outperform existing baselines in terms of alignment with predicted ratings and item features.
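The summary does not specify the mutual-information estimator; a common tractable surrogate is an InfoNCE lower bound over paired explanation and rating/feature embeddings. The sketch below uses that stand-in, so it may differ from the MMI framework's actual estimator.

```python
import torch
import torch.nn.functional as F

def infonce_mi_bound(expl_emb, target_emb, temperature=0.1):
    """InfoNCE lower bound on I(explanation; rating/feature): row i of each
    batch is a positive pair, all other rows are negatives. Maximizing the
    returned value tightens the bound (a stand-in, not MMI's exact estimator)."""
    expl = F.normalize(expl_emb, dim=-1)
    tgt = F.normalize(target_emb, dim=-1)
    logits = expl @ tgt.t() / temperature                   # all-pairs similarity
    labels = torch.arange(len(expl), device=logits.device)  # i-th <-> i-th
    return -F.cross_entropy(logits, labels)
```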
arXiv Detail & Related papers (2024-07-18T08:29:55Z)
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt-alignment.
We show that its superiority over coarse-grained feedback is not automatic.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z)
- Stability of Explainable Recommendation [10.186029242664931]
We study the vulnerability of existing feature-oriented explainable recommenders.
We observe that all the explainable models are vulnerable to increased noise levels.
Our study provides an empirical examination of the robustness of explanations in recommender systems.
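A generic protocol in the spirit of this study (the paper's exact noise model and metrics are not given here): perturb the input at increasing noise levels and measure how much the top-k explanation features change. `explain_fn` is a hypothetical per-feature importance function.

```python
import numpy as np

def explanation_stability(explain_fn, x, noise_levels, k=5, trials=20, seed=0):
    """Overlap of top-k explanation features under input noise. explain_fn(x)
    returns a per-feature importance vector (hypothetical interface)."""
    rng = np.random.default_rng(seed)
    base_topk = set(np.argsort(-explain_fn(x))[:k])
    scores = []
    for sigma in noise_levels:
        overlaps = []
        for _ in range(trials):
            noisy = x + rng.normal(0.0, sigma, size=x.shape)
            topk = set(np.argsort(-explain_fn(noisy))[:k])
            overlaps.append(len(base_topk & topk) / k)     # fraction retained
        scores.append(float(np.mean(overlaps)))
    return scores  # falling overlap as sigma grows signals unstable explanations
```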
arXiv Detail & Related papers (2024-05-03T04:44:51Z)
- Understanding Before Recommendation: Semantic Aspect-Aware Review Exploitation via Large Language Models [53.337728969143086]
Recommendation systems harness user-item interactions, such as clicks and reviews, to learn user and item representations.
Previous studies improve recommendation accuracy and interpretability by modeling user preferences across various aspects and intents.
We introduce a chain-based prompting approach to uncover semantic aspect-aware interactions.
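A toy sketch of chain-based prompting, assuming a hypothetical `llm(prompt) -> str` callable; the prompt wording below is an assumption, not the paper's templates. The first prompt extracts the aspects a review mentions, and a second prompt infers the user's stance on each aspect.

```python
def aspect_aware_chain(llm, review):
    """Two-step prompt chain over a hypothetical `llm(prompt) -> str` callable."""
    aspects = llm(
        "List the product aspects mentioned in this review, "
        f"comma-separated:\n{review}"
    ).split(",")
    preferences = {}
    for aspect in (a.strip() for a in aspects if a.strip()):
        preferences[aspect] = llm(
            f"In the review below, is the user positive, negative, or neutral "
            f"about '{aspect}'? Answer with one word.\n{review}"
        ).strip().lower()
    return preferences
```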
arXiv Detail & Related papers (2023-12-26T15:44:09Z)
- Unlocking the Potential of Large Language Models for Explainable Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, the framework generates controllable and fluent explanations.
arXiv Detail & Related papers (2023-12-25T09:09:54Z)
- Knowledge-grounded Natural Language Recommendation Explanation [11.58207109487333]
We propose a knowledge graph (KG) approach to natural language explainable recommendation.
Our approach draws on user-item features through a novel collaborative filtering-based KG representation.
Experimental results show that our approach consistently outperforms previous state-of-the-art models on natural language explainable recommendation.
arXiv Detail & Related papers (2023-08-30T07:36:12Z)
- Graph-based Extractive Explainer for Recommendations [38.278148661173525]
We develop a graph attentive neural network model that seamlessly integrates users, items, attributes, and sentences for extraction-based explanation.
To balance individual sentence relevance, overall attribute coverage, and content redundancy, we solve an integer linear programming problem to make the final selection of sentences.
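The ILP itself is not spelled out in the summary; below is a plausible sketch using the PuLP library, with an illustrative objective that rewards sentence relevance and attribute coverage while penalizing pairwise redundancy (the product of selection variables is linearized in the standard way).

```python
import pulp

def select_sentences(relevance, attr_cover, sim, max_sents=3, w_cov=1.0, w_red=1.0):
    """relevance: list of per-sentence scores; attr_cover: dict mapping each
    attribute id to the sentence indices covering it; sim: pairwise redundancy
    matrix. Illustrative weights and objective, not the paper's exact ILP."""
    n = len(relevance)
    prob = pulp.LpProblem("explanation_selection", pulp.LpMaximize)
    x = [pulp.LpVariable(f"x{i}", cat="Binary") for i in range(n)]
    # c[a] = 1 only if attribute a is covered by some selected sentence.
    c = {a: pulp.LpVariable(f"c{a}", cat="Binary") for a in attr_cover}
    # y[i, j] = 1 iff sentences i and j are both selected (linearized AND).
    y = {(i, j): pulp.LpVariable(f"y{i}_{j}", cat="Binary")
         for i in range(n) for j in range(i + 1, n)}
    prob += (pulp.lpSum(relevance[i] * x[i] for i in range(n))
             + w_cov * pulp.lpSum(c.values())
             - w_red * pulp.lpSum(sim[i][j] * y[i, j] for (i, j) in y))
    prob += pulp.lpSum(x) <= max_sents                    # length budget
    for a, idxs in attr_cover.items():
        prob += c[a] <= pulp.lpSum(x[i] for i in idxs)    # coverage definition
    for (i, j), y_ij in y.items():
        prob += y_ij >= x[i] + x[j] - 1                   # y_ij = x_i AND x_j
    prob.solve(pulp.PULP_CBC_CMD(msg=0))
    return [i for i in range(n) if x[i].value() == 1]
```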
arXiv Detail & Related papers (2022-02-20T04:56:10Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations can help improve user experience and reveal system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Interacting with Explanations through Critiquing [40.69540222716043]
We present a technique that learns to generate personalized explanations of recommendations from review texts.
We show that human users significantly prefer these explanations over those produced by state-of-the-art techniques.
Our work's most important innovation is that it allows users to react to a recommendation by critiquing the textual explanation.
arXiv Detail & Related papers (2020-05-22T09:03:06Z)
- Sequential Recommendation with Self-Attentive Multi-Adversarial Network [101.25533520688654]
We present a Multi-Factor Generative Adversarial Network (MFGAN) for explicitly modeling the effect of context information on sequential recommendation.
Our framework is flexible to incorporate multiple kinds of factor information, and is able to trace how each factor contributes to the recommendation decision over time.
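A toy sketch of the multi-discriminator idea (illustrative, not the MFGAN architecture): one small critic per context factor scores a recommended item, so each factor's influence on the decision can be read off per factor.

```python
import torch
import torch.nn as nn

class FactorCritics(nn.Module):
    """One discriminator per context factor (toy sketch, not MFGAN itself);
    per-factor scores make each factor's contribution to a recommendation visible."""
    def __init__(self, item_dim, factor_dims):
        super().__init__()
        self.critics = nn.ModuleList(
            nn.Sequential(nn.Linear(item_dim + fd, 32), nn.ReLU(), nn.Linear(32, 1))
            for fd in factor_dims
        )

    def forward(self, item_emb, factor_feats):
        # factor_feats: one tensor per factor, each of shape (batch, factor_dim).
        scores = [critic(torch.cat([item_emb, feats], dim=-1))
                  for critic, feats in zip(self.critics, factor_feats)]
        return torch.cat(scores, dim=-1)  # (batch, num_factors) realism scores
```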
arXiv Detail & Related papers (2020-05-21T12:28:59Z)