Enhancing Recommendation Explanations through User-Centric Refinement
- URL: http://arxiv.org/abs/2502.11721v1
- Date: Mon, 17 Feb 2025 12:08:18 GMT
- Title: Enhancing Recommendation Explanations through User-Centric Refinement
- Authors: Jingsen Zhang, Zihang Tian, Xueyang Feng, Xu Chen,
- Abstract summary: We propose a novel paradigm that refines initial explanations generated by existing explainable recommender models.
Specifically, we introduce a multi-agent collaborative refinement framework based on large language models.
- Score: 7.640281193938638
- Abstract: Generating natural language explanations for recommendations has become increasingly important in recommender systems. Traditional approaches typically treat user reviews as ground truth for explanations and focus on improving review prediction accuracy by designing various model architectures. However, due to limitations in data scale and model capability, these explanations often fail to meet key user-centric aspects such as factuality, personalization, and sentiment coherence, significantly reducing their overall helpfulness to users. In this paper, we propose a novel paradigm that refines initial explanations generated by existing explainable recommender models during the inference stage to enhance their quality in multiple aspects. Specifically, we introduce a multi-agent collaborative refinement framework based on large language models. To ensure alignment between the refinement process and user demands, we employ a plan-then-refine pattern to perform targeted modifications. To enable continuous improvements, we design a hierarchical reflection mechanism that provides feedback on the refinement process from both strategic and content perspectives. Extensive experiments on three datasets demonstrate the effectiveness of our framework.
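The abstract's plan-then-refine cycle with hierarchical reflection can be pictured as a small multi-agent program. Below is a minimal sketch of that idea, assuming a generic `call_llm` helper; the agent roles, prompt wording, and the `refine_explanation` driver are illustrative assumptions, not the authors' released code.

```python
# A minimal, illustrative sketch of a plan-then-refine loop with hierarchical
# reflection, as described in the abstract. The agent roles, prompts, and the
# `call_llm` stub are assumptions for illustration, not the paper's code.
from dataclasses import dataclass, field

ASPECTS = ["factuality", "personalization", "sentiment coherence"]

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real client in practice."""
    return f"[model output for a prompt of {len(prompt)} characters]"

@dataclass
class RefinementState:
    explanation: str                                     # current explanation text
    plan: str = ""                                       # targeted modification plan
    feedback: list[str] = field(default_factory=list)    # reflection history

def make_plan(state: RefinementState, user_profile: str) -> str:
    # Planner agent: decide which user-centric aspects to modify, conditioned
    # on earlier reflection feedback so later rounds can adjust the strategy.
    return call_llm(
        f"User profile: {user_profile}\n"
        f"Current explanation: {state.explanation}\n"
        f"Previous feedback: {state.feedback}\n"
        f"Decide which of {ASPECTS} to revise and how."
    )

def refine(state: RefinementState) -> str:
    # Refiner agent: apply only the targeted modifications from the plan.
    return call_llm(
        f"Explanation: {state.explanation}\nPlan: {state.plan}\n"
        "Rewrite the explanation following the plan."
    )

def reflect(state: RefinementState) -> str:
    # Reflection agent: critique the plan (strategic level) and the refined
    # text (content level), echoing the hierarchical reflection idea.
    return call_llm(
        f"Plan: {state.plan}\nRefined explanation: {state.explanation}\n"
        "Comment on the strategy and on the content quality."
    )

def refine_explanation(initial: str, user_profile: str, rounds: int = 2) -> str:
    state = RefinementState(explanation=initial)
    for _ in range(rounds):
        state.plan = make_plan(state, user_profile)
        state.explanation = refine(state)
        state.feedback.append(reflect(state))
    return state.explanation

print(refine_explanation("You may like this phone for its camera.",
                         "prefers long battery life"))
```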
Related papers
- Reason4Rec: Large Language Models for Recommendation with Deliberative User Preference Alignment [69.11529841118671]
We propose a new Deliberative Recommendation task, which incorporates explicit reasoning about user preferences as an additional alignment goal.
We then introduce the Reasoning-powered Recommender framework for deliberative user preference alignment.
arXiv Detail & Related papers (2025-02-04T07:17:54Z)
- Defeasible Visual Entailment: Benchmark, Evaluator, and Reward-Driven Optimization [19.32714581384729]
We introduce a new task called Defeasible Visual Entailment (DVE).
The goal is to allow the modification of the entailment relationship between an image premise and a text hypothesis based on an additional update.
At a high level, DVE enables models to refine their initial interpretations, leading to improved accuracy and reliability in various applications.
arXiv Detail & Related papers (2024-12-19T02:38:31Z)
- Aligning Explanations for Recommendation with Rating and Feature via Maximizing Mutual Information [29.331050754362803]
Current explanation generation methods are commonly trained with an objective to mimic existing user reviews.
We propose a flexible, model-agnostic method, the MMI framework, to enhance the alignment between the generated natural language explanations and the predicted rating/important item features.
Our MMI framework can boost different backbone models, enabling them to outperform existing baselines in terms of alignment with predicted ratings and item features.
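As a rough illustration of the general mutual-information idea (not the MMI paper's actual architecture), the snippet below uses an InfoNCE-style lower bound to align explanation embeddings with embeddings of the predicted rating/feature signal; the embedding shapes and random inputs are assumptions for the example.

```python
# Illustrative only: an InfoNCE-style lower bound on mutual information
# between explanation embeddings and rating/feature embeddings. A generic
# technique sketch, not the MMI paper's implementation.
import torch
import torch.nn.functional as F

def mi_alignment_loss(expl_emb: torch.Tensor,
                      target_emb: torch.Tensor,
                      temperature: float = 0.1) -> torch.Tensor:
    """expl_emb, target_emb: (batch, dim) embeddings for the same user-item pairs."""
    expl = F.normalize(expl_emb, dim=-1)
    tgt = F.normalize(target_emb, dim=-1)
    logits = expl @ tgt.t() / temperature    # pairwise similarities
    labels = torch.arange(expl.size(0))      # matching pairs lie on the diagonal
    # Minimizing this cross-entropy maximizes an InfoNCE lower bound on MI.
    return F.cross_entropy(logits, labels)

# Toy usage with random embeddings standing in for encoder outputs.
loss = mi_alignment_loss(torch.randn(8, 64), torch.randn(8, 64))
```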
arXiv Detail & Related papers (2024-07-18T08:29:55Z)
- Beyond Thumbs Up/Down: Untangling Challenges of Fine-Grained Feedback for Text-to-Image Generation [67.88747330066049]
Fine-grained feedback captures nuanced distinctions in image quality and prompt-alignment.
We show that demonstrating the superiority of fine-grained feedback over coarse-grained feedback is not automatic.
We identify key challenges in eliciting and utilizing fine-grained feedback.
arXiv Detail & Related papers (2024-06-24T17:19:34Z)
- Understanding Before Recommendation: Semantic Aspect-Aware Review Exploitation via Large Language Models [53.337728969143086]
Recommendation systems harness user-item interactions like clicks and reviews to learn user and item representations.
Previous studies improve recommendation accuracy and interpretability by modeling user preferences across various aspects and intents.
We introduce a chain-based prompting approach to uncover semantic aspect-aware interactions.
arXiv Detail & Related papers (2023-12-26T15:44:09Z)
- From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems [43.93801836660617]
We show that by utilizing the contextual features (e.g., item reviews from users), we can design a series of explainable recommender systems.
We propose three types of explainable recommendation strategies with gradual change of model transparency: whitebox, graybox, and blackbox.
Our model achieves highly competitive ranking performance, and generates accurate and effective explanations in terms of numerous quantitative metrics and qualitative visualizations.
arXiv Detail & Related papers (2021-10-28T01:54:04Z)
- Fast Multi-Step Critiquing for VAE-based Recommender Systems [27.207067974031805]
We present M&Ms-VAE, a novel variational autoencoder for recommendation and explanation.
We train the model under a weak supervision scheme to simulate both fully and partially observed variables.
We then leverage the generalization ability of a trained M&Ms-VAE model to embed the user preference and the critique separately.
arXiv Detail & Related papers (2021-05-03T12:26:09Z)
- Forethought and Hindsight in Credit Assignment [62.05690959741223]
We work to understand the gains and peculiarities of planning employed as forethought via forward models or as hindsight operating with backward models.
We investigate the best use of models in planning, primarily focusing on the selection of states in which predictions should be (re)-evaluated.
arXiv Detail & Related papers (2020-10-26T16:00:47Z)
- Enhancing Dialogue Generation via Multi-Level Contrastive Learning [57.005432249952406]
We propose a multi-level contrastive learning paradigm to model the fine-grained quality of the responses with respect to the query.
A Rank-aware Calibration (RC) network is designed to construct the multi-level contrastive optimization objectives.
We build a Knowledge Inference (KI) component to capture the keyword knowledge from the reference during training and exploit such information to encourage the generation of informative words.
arXiv Detail & Related papers (2020-09-19T02:41:04Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)