Generate Natural Language Explanations for Recommendation
- URL: http://arxiv.org/abs/2101.03392v1
- Date: Sat, 9 Jan 2021 17:00:41 GMT
- Title: Generate Natural Language Explanations for Recommendation
- Authors: Hanxiong Chen, Xu Chen, Shaoyun Shi, Yongfeng Zhang
- Abstract summary: We propose to generate free-text natural language explanations for personalized recommendation.
In particular, we propose a hierarchical sequence-to-sequence model (HSS) for personalized explanation generation.
- Score: 25.670144526037134
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Providing personalized explanations for recommendations can help users
understand the reasoning behind the recommendation results, which improves
the effectiveness, transparency, persuasiveness, and trustworthiness
of recommender systems. Current explainable recommendation models mostly
generate textual explanations based on pre-defined sentence templates. However,
the expressive power of template-based explanation sentences is limited to
the pre-defined expressions, and manually defining the expressions requires
significant human effort. Motivated by this problem, we propose to generate
free-text natural language explanations for personalized recommendation. In
particular, we propose a hierarchical sequence-to-sequence model (HSS) for
personalized explanation generation. Different from conventional sentence
generation in NLP research, a great challenge of explanation generation in
e-commerce recommendation is that not all sentences in user reviews serve an
explanatory purpose. To address this problem, we further propose an auto-denoising
mechanism based on topical item feature words for sentence generation.
Experiments on various e-commerce product domains show that our approach can
not only improve the recommendation accuracy, but also the explanation quality
in terms of offline measures and feature word coverage. This research is
one of the initial steps toward granting intelligent agents the ability to
explain themselves in natural language.
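The auto-denoising idea can be illustrated with a minimal sketch: score each review sentence by how much it mentions topical item feature words, and keep only feature-bearing sentences as explanation-like training targets. The feature vocabulary, tokenization, and threshold below are illustrative assumptions, not the paper's exact mechanism.

```python
import re

def sentence_feature_score(sentence, feature_words):
    """Fraction of tokens in the sentence that are topical feature words."""
    tokens = re.findall(r"[a-z']+", sentence.lower())
    if not tokens:
        return 0.0
    hits = sum(1 for t in tokens if t in feature_words)
    return hits / len(tokens)

def denoise_review(review, feature_words, min_score=0.0):
    """Split a review into sentences and keep those mentioning feature words."""
    sentences = [s.strip() for s in re.split(r"[.!?]+", review) if s.strip()]
    return [s for s in sentences
            if sentence_feature_score(s, feature_words) > min_score]

# Hypothetical feature vocabulary and review text for illustration.
features = {"battery", "screen", "camera"}
review = ("Bought this for my wife. The battery lasts two full days. "
          "Shipping was slow. The screen is sharp and bright.")
print(denoise_review(review, features))  # keeps only the battery and screen sentences
```

In the paper's setting such scores would weight or select sentences during training of the hierarchical sequence-to-sequence model, so that off-topic review sentences contribute less to explanation generation.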
Related papers
- DeSTA: Enhancing Speech Language Models through Descriptive Speech-Text Alignment [82.86363991170546]
We propose a Descriptive Speech-Text Alignment approach that leverages speech captioning to bridge the gap between speech and text modalities.
Our model demonstrates superior performance on the Dynamic-SUPERB benchmark, particularly in generalizing to unseen tasks.
These findings highlight the potential to reshape instruction-following SLMs by incorporating descriptive, rich speech captions.
arXiv Detail & Related papers (2024-06-27T03:52:35Z) - Unlocking the Potential of Large Language Models for Explainable Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be well generated.
arXiv Detail & Related papers (2023-12-25T09:09:54Z) - Knowledge-grounded Natural Language Recommendation Explanation [11.58207109487333]
We propose a knowledge graph (KG) approach to natural language explainable recommendation.
Our approach draws on user-item features through a novel collaborative filtering-based KG representation.
Experimental results show that our approach consistently outperforms previous state-of-the-art models on natural language explainable recommendation.
arXiv Detail & Related papers (2023-08-30T07:36:12Z) - Exploring Large Language Model for Graph Data Understanding in Online Job Recommendations [63.19448893196642]
We present a novel framework that harnesses the rich contextual information and semantic representations provided by large language models to analyze behavior graphs.
By leveraging this capability, our framework enables personalized and accurate job recommendations for individual users.
arXiv Detail & Related papers (2023-07-10T11:29:41Z) - Abductive Commonsense Reasoning Exploiting Mutually Exclusive Explanations [118.0818807474809]
Abductive reasoning aims to find plausible explanations for an event.
Existing approaches for abductive reasoning in natural language processing often rely on manually generated annotations for supervision.
This work proposes an approach for abductive commonsense reasoning that exploits the fact that only a subset of explanations is correct for a given context.
arXiv Detail & Related papers (2023-05-24T01:35:10Z) - Explainable Recommender with Geometric Information Bottleneck [25.703872435370585]
We propose to incorporate a geometric prior learnt from user-item interactions into a variational network.
Latent factors from an individual user-item pair can be used for both recommendation and explanation generation.
Experimental results on three e-commerce datasets show that our model significantly improves the interpretability of a variational recommender.
arXiv Detail & Related papers (2023-05-09T10:38:36Z) - UCEpic: Unifying Aspect Planning and Lexical Constraints for Generating Explanations in Recommendation [26.307290414735643]
We propose a model, UCEpic, that generates high-quality personalized explanations for recommendation results.
UCEpic unifies aspect planning and lexical constraints into one framework and generates explanations under different settings.
Compared to previous recommendation explanation generators controlled by only aspects, UCEpic incorporates specific information from keyphrases.
arXiv Detail & Related papers (2022-09-28T07:33:50Z) - Graph-based Extractive Explainer for Recommendations [38.278148661173525]
We develop a graph attentive neural network model that seamlessly integrates user, item, attributes, and sentences for extraction-based explanation.
To balance individual sentence relevance, overall attribute coverage, and content redundancy, we solve an integer linear programming problem to make the final selection of sentences.
arXiv Detail & Related papers (2022-02-20T04:56:10Z) - Hierarchical Aspect-guided Explanation Generation for Explainable Recommendation [37.36148651206039]
We propose a novel explanation generation framework, named Hierarchical Aspect-guided explanation Generation (HAG).
An aspect-guided graph pooling operator is proposed to extract the aspect-relevant information from the review-based syntax graphs.
Then, a hierarchical explanation decoder is developed to generate aspects and aspect-relevant explanations based on the attention mechanism.
arXiv Detail & Related papers (2021-10-20T03:28:58Z) - Towards Interpretable Natural Language Understanding with Explanations as Latent Variables [146.83882632854485]
We develop a framework for interpretable natural language understanding that requires only a small set of human annotated explanations for training.
Our framework treats natural language explanations as latent variables that model the underlying reasoning process of a neural model.
arXiv Detail & Related papers (2020-10-24T02:05:56Z) - Improving Adversarial Text Generation by Modeling the Distant Future [155.83051741029732]
We consider a text planning scheme and present a model-based imitation-learning approach to alleviate the aforementioned issues.
We propose a novel guider network to focus on the generative process over a longer horizon, which can assist next-word prediction and provide intermediate rewards for generator optimization.
arXiv Detail & Related papers (2020-05-04T05:45:13Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.