Counterfactual Explainable Recommendation
- URL: http://arxiv.org/abs/2108.10539v1
- Date: Tue, 24 Aug 2021 06:37:57 GMT
- Title: Counterfactual Explainable Recommendation
- Authors: Juntao Tan, Shuyuan Xu, Yingqiang Ge, Yunqi Li, Xu Chen, Yongfeng
Zhang
- Abstract summary: We propose Counterfactual Explainable Recommendation (CountER), which draws on insights from counterfactual reasoning in causal inference for explainable recommendation.
CountER seeks simple (low complexity) and effective (high strength) explanations for the model decision.
Results show that our model generates more accurate and effective explanations than state-of-the-art explainable recommendation models.
- Score: 22.590877963169103
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: By providing explanations for users and system designers to facilitate better
understanding and decision making, explainable recommendation has been an
important research problem. In this paper, we propose Counterfactual
Explainable Recommendation (CountER), which draws on insights from
counterfactual reasoning in causal inference for explainable recommendation.
CountER is able to formulate the complexity and the strength of explanations,
and it adopts a counterfactual learning framework to seek simple (low
complexity) and effective (high strength) explanations for the model decision.
Technically, for each item recommended to each user, CountER formulates a joint
optimization problem to generate minimal changes on the item aspects so as to
create a counterfactual item, such that the recommendation decision on the
counterfactual item is reversed. These altered aspects constitute the
explanation of why the original item is recommended. The counterfactual
explanation helps both the users for better understanding and the system
designers for better model debugging. Another contribution of the work is the
evaluation of explainable recommendation, which has been a challenging task.
Fortunately, counterfactual explanations are very suitable for standard
quantitative evaluation. To measure the explanation quality, we design two
types of evaluation metrics, one from the user's perspective (i.e., why the user
likes the item) and the other from the model's perspective (i.e., why the item is
recommended by the model). We apply our counterfactual learning algorithm on a
black-box recommender system and evaluate the generated explanations on five
real-world datasets. Results show that our model generates more accurate and
effective explanations than state-of-the-art explainable recommendation models.
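The joint optimization described in the abstract lends itself to a short sketch. The following is a minimal, hypothetical illustration rather than the paper's published formulation: it optimizes a perturbation `delta` on an item's aspect vector so that the perturbed (counterfactual) item falls below a top-K score margin, trading off explanation complexity (small, sparse changes) against explanation strength (the recommendation decision is actually reversed). The scorer, the L1/L2 complexity terms, the hinge-style strength term, and all names and hyperparameters (`lam`, `alpha`, `margin`) are assumptions made for illustration.

```python
# Hedged sketch of aspect-level counterfactual explanation, assuming a
# differentiable black-box scorer and an L1/L2 + hinge relaxation of the
# "minimal change that reverses the decision" objective.
import torch

def counterfactual_explanation(score_fn, user_vec, item_aspects, margin,
                               lam=1.0, alpha=0.1, steps=500, lr=0.05):
    """Return a sparse change `delta` on the item's aspects that pushes its
    score below `margin` (e.g., the score of the (K+1)-th ranked item)."""
    delta = torch.zeros_like(item_aspects, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        cf_score = score_fn(user_vec, item_aspects + delta)
        # Explanation complexity: prefer small and sparse aspect changes.
        complexity = delta.pow(2).sum() + alpha * delta.abs().sum()
        # Explanation strength: zero once the counterfactual item
        # drops out of the top-K (score falls below the margin).
        strength = torch.relu(cf_score - margin)
        loss = complexity + lam * strength
        loss.backward()
        opt.step()
    # Aspects with the largest changes constitute the explanation.
    return delta.detach()

# Toy black-box recommender: score = dot product of user and item aspect vectors.
score_fn = lambda u, v: (u * v).sum()
user_vec = torch.tensor([0.9, 0.1, 0.7])       # user's aspect preferences
item_aspects = torch.tensor([0.8, 0.2, 0.6])   # item's aspect qualities
delta = counterfactual_explanation(score_fn, user_vec, item_aspects, margin=0.5)
print("aspect changes:", delta)                # most-changed aspects form the explanation
```

In this reading, `lam` controls the trade-off the abstract describes: larger values favor reversing the decision (strength), smaller values favor simpler, sparser explanations (complexity).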
Related papers
- Path-based summary explanations for graph recommenders -- extended version [2.2789818122188925]
We propose summary explanations that highlight why a user or a group of users receive a set of item recommendations.
We also present a novel method to summarize explanations using efficient graph algorithms.
arXiv Detail & Related papers (2024-10-29T13:10:03Z) - Unlocking the Potential of Large Language Models for Explainable
Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, LLMXRec generates controllable and fluent explanations.
arXiv Detail & Related papers (2023-12-25T09:09:54Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - What if you said that differently?: How Explanation Formats Affect Human Feedback Efficacy and User Perception [53.4840989321394]
We analyze the effect of rationales generated by QA models to support their answers.
We present users with incorrect answers and corresponding rationales in various formats.
We measure the effectiveness of this feedback in patching these rationales through in-context learning.
arXiv Detail & Related papers (2023-11-16T04:26:32Z) - Explanation Selection Using Unlabeled Data for Chain-of-Thought
Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z) - Learning to Counterfactually Explain Recommendations [14.938252589829673]
We propose a learning-based framework to generate counterfactual explanations.
To generate an explanation, we find the subset of the user's history that the surrogate model predicts is most likely to remove the recommendation.
arXiv Detail & Related papers (2022-11-17T18:21:21Z) - Reinforced Path Reasoning for Counterfactual Explainable Recommendation [10.36395995374108]
We propose a novel Counterfactual Explainable Recommendation (CERec) to generate item attribute-based counterfactual explanations.
We reduce the huge search space with an adaptive path sampler that exploits rich context information from a given knowledge graph.
arXiv Detail & Related papers (2022-07-14T05:59:58Z) - Explainability in Music Recommender Systems [69.0506502017444]
We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs).
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within an MRS and in what form explanations can be provided.
arXiv Detail & Related papers (2022-01-25T18:32:11Z) - Comparative Explanations of Recommendations [33.89230323979306]
We develop an extract-and-refine architecture to explain the relative comparisons among a set of ranked items from a recommender system.
We design a new explanation quality metric based on BLEU to guide the end-to-end training of the extraction and refinement components.
arXiv Detail & Related papers (2021-11-01T02:55:56Z) - Explanation as a Defense of Recommendation [34.864709791648195]
We propose to enforce the idea of sentiment alignment between a recommendation and its corresponding explanation.
Our solution outperforms a rich set of baselines in both recommendation and explanation tasks.
arXiv Detail & Related papers (2021-01-24T06:34:36Z) - Hybrid Deep Embedding for Recommendations with Dynamic Aspect-Level
Explanations [60.78696727039764]
We propose a novel model called Hybrid Deep Embedding for aspect-based explainable recommendations.
The main idea of HDE is to learn the dynamic embeddings of users and items for rating prediction.
As the aspect preference/quality of users/items is learned automatically, HDE is able to capture the impact of aspects that are not mentioned in reviews of a user or an item.
arXiv Detail & Related papers (2020-01-18T13:16:32Z)