Explanation as a Defense of Recommendation
- URL: http://arxiv.org/abs/2101.09656v1
- Date: Sun, 24 Jan 2021 06:34:36 GMT
- Title: Explanation as a Defense of Recommendation
- Authors: Aobo Yang, Nan Wang, Hongbo Deng, Hongning Wang
- Abstract summary: We propose to enforce the idea of sentiment alignment between a recommendation and its corresponding explanation.
Our solution outperforms a rich set of baselines in both recommendation and explanation tasks.
- Score: 34.864709791648195
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Textual explanations have been shown to improve user satisfaction with
machine-made recommendations. However, current mainstream solutions only loosely
connect the learning of explanation with the learning of recommendation: for
example, they are often separately modeled as rating prediction and content
generation tasks. In this work, we propose to strengthen their connection by
enforcing the idea of sentiment alignment between a recommendation and its
corresponding explanation. At training time, the two learning tasks are joined
by a latent sentiment vector, which is encoded by the recommendation module and
used to make word choices for explanation generation. At both training and
inference time, the explanation module is required to generate explanation text
that matches sentiment predicted by the recommendation module. Extensive
experiments demonstrate that our solution outperforms a rich set of baselines in
both recommendation and explanation tasks, especially on the improved quality
of its generated explanations. More importantly, our user studies confirm that our
generated explanations help users better recognize the differences between
recommended items and understand why an item is recommended.
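The core mechanism in the abstract, a shared sentiment signal plus an alignment constraint between the predicted rating and the generated explanation, can be illustrated with a toy sketch. Everything below (the lexicon, the dot-product rater, the loss form) is invented for illustration and is not the paper's actual model:

```python
# Illustrative sketch only: a sentiment signal links rating prediction and
# explanation generation, and an alignment penalty forces the explanation's
# sentiment to match the predicted rating. The lexicon, scoring functions,
# and loss form are all toy assumptions, not the paper's architecture.

POSITIVE = {"great", "comfortable", "durable"}
NEGATIVE = {"flimsy", "uncomfortable", "poor"}

def predict_rating(user_vec, item_vec):
    """Toy recommendation module: dot-product rating clamped to [1, 5]."""
    raw = sum(u * i for u, i in zip(user_vec, item_vec))
    return max(1.0, min(5.0, 3.0 + raw))

def explanation_sentiment(tokens):
    """Score an explanation's sentiment in [-1, 1] with a toy lexicon."""
    score = sum((t in POSITIVE) - (t in NEGATIVE) for t in tokens)
    return max(-1.0, min(1.0, score / max(len(tokens), 1)))

def alignment_loss(rating, tokens):
    """Penalty when explanation sentiment disagrees with the rating.

    Ratings are mapped to [-1, 1], so a 5-star rating expects a fully
    positive explanation and a 1-star rating a fully negative one.
    """
    target = (rating - 3.0) / 2.0
    return (explanation_sentiment(tokens) - target) ** 2

rating = predict_rating([0.5, 0.5], [1.0, 1.0])   # -> 4.0
good = ["great", "comfortable", "durable"]
bad = ["flimsy", "poor"]
assert alignment_loss(rating, good) < alignment_loss(rating, bad)
```

In the paper's actual framework the sentiment is a learned latent vector and both modules are trained jointly; this sketch only captures the alignment idea, that a positive prediction should be penalized for a negative-sounding explanation.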
Related papers
- Path-based summary explanations for graph recommenders -- extended version [2.2789818122188925]
We propose summary explanations that highlight why a user or a group of users receive a set of item recommendations.
We also present a novel method to summarize explanations using efficient graph algorithms.
arXiv Detail & Related papers (2024-10-29T13:10:03Z)
- Unlocking the Potential of Large Language Models for Explainable Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be well generated.
arXiv Detail & Related papers (2023-12-25T09:09:54Z)
- Explainable Recommender with Geometric Information Bottleneck [25.703872435370585]
We propose to incorporate a geometric prior learnt from user-item interactions into a variational network.
Latent factors from an individual user-item pair can be used for both recommendation and explanation generation.
Experimental results on three e-commerce datasets show that our model significantly improves the interpretability of a variational recommender.
arXiv Detail & Related papers (2023-05-09T10:38:36Z)
- Complementary Explanations for Effective In-Context Learning [77.83124315634386]
Large language models (LLMs) have exhibited remarkable capabilities in learning from explanations in prompts.
This work aims to better understand the mechanisms by which explanations are used for in-context learning.
arXiv Detail & Related papers (2022-11-25T04:40:47Z)
- Reinforced Path Reasoning for Counterfactual Explainable Recommendation [10.36395995374108]
We propose a novel Counterfactual Explainable Recommendation (CERec) to generate item attribute-based counterfactual explanations.
We reduce the huge search space with an adaptive path sampler that exploits the rich context information of a given knowledge graph.
arXiv Detail & Related papers (2022-07-14T05:59:58Z)
- Comparative Explanations of Recommendations [33.89230323979306]
We develop an extract-and-refine architecture to explain the relative comparisons among a set of ranked items from a recommender system.
We design a new explanation quality metric based on BLEU to guide the end-to-end training of the extraction and refinement components.
arXiv Detail & Related papers (2021-11-01T02:55:56Z)
- Counterfactual Explainable Recommendation [22.590877963169103]
We propose Counterfactual Explainable Recommendation (CountER), which takes the insights of counterfactual reasoning from causal inference for explainable recommendation.
CountER seeks simple (low complexity) and effective (high strength) explanations for the model decision.
Results show that our model generates more accurate and effective explanations than state-of-the-art explainable recommendation models.
arXiv Detail & Related papers (2021-08-24T06:37:57Z)
- Evaluating Explanations: How much do explanations from the teacher aid students? [103.05037537415811]
We formalize the value of explanations using a student-teacher paradigm that measures the extent to which explanations improve student models in learning.
Unlike many prior proposals to evaluate explanations, our approach cannot be easily gamed, enabling principled, scalable, and automatic evaluation of attributions.
arXiv Detail & Related papers (2020-12-01T23:40:21Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z) - Sequential Recommendation with Self-Attentive Multi-Adversarial Network [101.25533520688654]
We present a Multi-Factor Generative Adversarial Network (MFGAN) for explicitly modeling the effect of context information on sequential recommendation.
Our framework is flexible to incorporate multiple kinds of factor information, and is able to trace how each factor contributes to the recommendation decision over time.
arXiv Detail & Related papers (2020-05-21T12:28:59Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.