Explainable Recommender with Geometric Information Bottleneck
- URL: http://arxiv.org/abs/2305.05331v2
- Date: Fri, 5 Jan 2024 22:02:25 GMT
- Title: Explainable Recommender with Geometric Information Bottleneck
- Authors: Hanqi Yan, Lin Gui, Menghan Wang, Kun Zhang, Yulan He
- Abstract summary: We propose to incorporate a geometric prior learnt from user-item interactions into a variational network.
Latent factors from an individual user-item pair can be used for both recommendation and explanation generation.
Experimental results on three e-commerce datasets show that our model significantly improves the interpretability of a variational recommender.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainable recommender systems can explain their recommendation decisions,
enhancing user trust in the systems. Most explainable recommender systems
either rely on human-annotated rationales to train models for explanation
generation or leverage the attention mechanism to extract important text spans
from reviews as explanations. The extracted rationales are often confined to an
individual review and may fail to identify the implicit features beyond the
review text. To avoid the expensive human annotation process and to generate
explanations beyond individual reviews, we propose to incorporate a geometric
prior learnt from user-item interactions into a variational network which
infers latent factors from user-item reviews. The latent factors from an
individual user-item pair can be used for both recommendation and explanation
generation, which naturally inherit the global characteristics encoded in the
prior knowledge. Experimental results on three e-commerce datasets show that
our model significantly improves the interpretability of a variational
recommender using the Wasserstein distance while achieving performance
comparable to existing content-based recommender systems in terms of
recommendation behaviours.
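The abstract measures interpretability via the Wasserstein distance between latent distributions. As a minimal illustration of that metric (not the paper's implementation), the squared 2-Wasserstein distance between two diagonal Gaussians, such as a learnt posterior and a hypothetical geometric prior, admits a closed form:

```python
import numpy as np

def w2_sq_diag_gauss(mu1, sigma1, mu2, sigma2):
    """Squared 2-Wasserstein distance between two diagonal Gaussians.

    Closed form for diagonal covariances:
        ||mu1 - mu2||^2 + ||sigma1 - sigma2||^2,
    where sigma holds per-dimension standard deviations.
    """
    mu1, sigma1 = np.asarray(mu1, float), np.asarray(sigma1, float)
    mu2, sigma2 = np.asarray(mu2, float), np.asarray(sigma2, float)
    return float(np.sum((mu1 - mu2) ** 2) + np.sum((sigma1 - sigma2) ** 2))

# Hypothetical example: learnt posterior vs. a standard-normal-like prior.
posterior = (np.array([0.5, -0.2]), np.array([1.0, 0.8]))
prior = (np.array([0.0, 0.0]), np.array([1.0, 1.0]))
d = w2_sq_diag_gauss(*posterior, *prior)
```

A smaller distance indicates the posterior stays close to the prior, which is one way such a term can act as a regulariser in a variational network.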
Related papers
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt-alignment.
We show that its superiority over coarse-grained feedback is not automatic to demonstrate.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z)
- Stability of Explainable Recommendation [10.186029242664931]
We study the vulnerability of existing feature-oriented explainable recommenders.
We observe that all the explainable models are vulnerable to increased noise levels.
Our study presents an empirical verification on the topic of robust explanations in recommender systems.
arXiv Detail & Related papers (2024-05-03T04:44:51Z)
- Unlocking the Potential of Large Language Models for Explainable Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be well generated.
arXiv Detail & Related papers (2023-12-25T09:09:54Z)
- Knowledge-grounded Natural Language Recommendation Explanation [11.58207109487333]
We propose a knowledge graph (KG) approach to natural language explainable recommendation.
Our approach draws on user-item features through a novel collaborative filtering-based KG representation.
Experimental results show that our approach consistently outperforms previous state-of-the-art models on natural language explainable recommendation.
arXiv Detail & Related papers (2023-08-30T07:36:12Z)
- Graph-based Extractive Explainer for Recommendations [38.278148661173525]
We develop a graph attentive neural network model that seamlessly integrates user, item, attributes, and sentences for extraction-based explanation.
To balance individual sentence relevance, overall attribute coverage, and content redundancy, we solve an integer linear programming problem to make the final selection of sentences.
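The selection step described above can be viewed as a small 0-1 program. The toy sketch below (exhaustive search over subsets, standing in for the authors' integer linear programming solver) scores candidate sentence sets by total relevance minus a pairwise redundancy penalty under a size budget; all scores and weights here are hypothetical.

```python
from itertools import combinations

def select_sentences(relevance, redundancy, budget, penalty=1.0):
    """Pick at most `budget` sentences maximizing
        sum(relevance) - penalty * sum(pairwise redundancy),
    by brute force over subsets (feasible only at toy sizes;
    a real system would hand this objective to an ILP solver).

    relevance:  list of per-sentence relevance scores
    redundancy: dict mapping index pairs (i, j), i < j, to overlap scores
    """
    n = len(relevance)
    best_set, best_score = (), float("-inf")
    for k in range(1, budget + 1):
        for subset in combinations(range(n), k):
            score = sum(relevance[i] for i in subset)
            score -= penalty * sum(
                redundancy.get((i, j), 0.0)
                for i, j in combinations(subset, 2)
            )
            if score > best_score:
                best_set, best_score = subset, score
    return list(best_set), best_score

# Three candidate sentences; sentences 0 and 1 are highly redundant,
# so the best pair under budget=2 skips one of them.
rel = [0.9, 0.8, 0.5]
red = {(0, 1): 0.7}
chosen, score = select_sentences(rel, red, budget=2)  # chosen = [0, 2]
```

With a redundancy penalty of 0.7 between the two strongest sentences, the selector prefers pairing the top sentence with the weaker but non-overlapping one, which mirrors the relevance/coverage/redundancy trade-off the entry describes.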
arXiv Detail & Related papers (2022-02-20T04:56:10Z)
- Explainability in Music Recommender Systems [69.0506502017444]
We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs).
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within a MRS and in what form explanations can be provided.
arXiv Detail & Related papers (2022-01-25T18:32:11Z)
- Deep Causal Reasoning for Recommendations [47.83224399498504]
A new trend in recommender system research is to negate the influence of confounders from a causal perspective.
We model the recommendation as a multi-cause multi-outcome (MCMO) inference problem.
We show that MCMO modeling may lead to high variance due to scarce observations associated with the high-dimensional causal space.
arXiv Detail & Related papers (2022-01-06T15:00:01Z)
- From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems [43.93801836660617]
We show that by utilizing the contextual features (e.g., item reviews from users), we can design a series of explainable recommender systems.
We propose three types of explainable recommendation strategies with gradual change of model transparency: whitebox, graybox, and blackbox.
Our model achieves highly competitive ranking performance, and generates accurate and effective explanations in terms of numerous quantitative metrics and qualitative visualizations.
arXiv Detail & Related papers (2021-10-28T01:54:04Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Interacting with Explanations through Critiquing [40.69540222716043]
We present a technique that learns to generate personalized explanations of recommendations from review texts.
We show that human users significantly prefer these explanations over those produced by state-of-the-art techniques.
Our work's most important innovation is that it allows users to react to a recommendation by critiquing the textual explanation.
arXiv Detail & Related papers (2020-05-22T09:03:06Z)
- Sequential Recommendation with Self-Attentive Multi-Adversarial Network [101.25533520688654]
We present a Multi-Factor Generative Adversarial Network (MFGAN) for explicitly modeling the effect of context information on sequential recommendation.
Our framework is flexible to incorporate multiple kinds of factor information, and is able to trace how each factor contributes to the recommendation decision over time.
arXiv Detail & Related papers (2020-05-21T12:28:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.