Reinforced Path Reasoning for Counterfactual Explainable Recommendation
- URL: http://arxiv.org/abs/2207.06674v1
- Date: Thu, 14 Jul 2022 05:59:58 GMT
- Title: Reinforced Path Reasoning for Counterfactual Explainable Recommendation
- Authors: Xiangmeng Wang, Qian Li, Dianer Yu, Guandong Xu
- Abstract summary: We propose a novel Counterfactual Explainable Recommendation (CERec) to generate item attribute-based counterfactual explanations.
We reduce the huge search space with an adaptive path sampler that exploits the rich context of a given knowledge graph.
- Score: 10.36395995374108
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Counterfactual explanations interpret the recommendation mechanism by
exploring how minimal alterations to items or users affect recommendation
decisions. Existing counterfactual explainable approaches face a huge search
space, and their explanations are either action-based (e.g., user clicks) or
aspect-based (i.e., item descriptions). We believe item attribute-based
explanations are more intuitive and persuasive for users, since they explain
through fine-grained item attributes (e.g., brand). Moreover, counterfactual
explanations can enhance recommendations by filtering out negative items.
In this work, we propose a novel Counterfactual Explainable Recommendation
(CERec) framework that generates item attribute-based counterfactual
explanations while boosting recommendation performance. CERec optimizes an
explanation policy by uniformly searching candidate counterfactuals within a
reinforcement learning environment. We reduce the huge search space with an
adaptive path sampler that exploits the rich context of a given knowledge
graph. We also deploy the explanation policy in a recommendation model to
enhance the recommendation. Extensive explainability and recommendation
evaluations demonstrate CERec's ability to provide explanations consistent
with user preferences and to maintain improved recommendations. We release our
code at https://github.com/Chrystalii/CERec.
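As a concrete illustration of the search problem described in the abstract, the sketch below looks for a minimal set of item attributes whose removal flips a recommendation decision. Everything here is hypothetical: the toy `score` function and the brute-force enumeration merely stand in for CERec's reinforcement-learned adaptive path sampler, and none of it reflects the released implementation.
```python
import itertools
from typing import Callable, FrozenSet, Optional

def minimal_counterfactual(
    score: Callable[[FrozenSet[str]], float],  # toy recommender: scores an item from its attribute set
    attributes: FrozenSet[str],                # item attributes drawn from a knowledge graph
    threshold: float,                          # score below which the item leaves the top-K list
    max_size: int = 2,                         # prefer small, i.e. minimal, attribute changes
) -> Optional[FrozenSet[str]]:
    """Smallest attribute subset whose removal flips the decision, or None."""
    for k in range(1, max_size + 1):
        for subset in itertools.combinations(sorted(attributes), k):
            removed = frozenset(subset)
            if score(attributes - removed) < threshold:
                # "Had the item lacked these attributes, it would not be recommended."
                return removed
    return None

# Toy usage: a linear score over hand-set attribute weights.
weights = {"brand:Acme": 0.6, "color:red": 0.1, "category:shoes": 0.3}
item_attrs = frozenset(weights)
score = lambda attrs: sum(weights[a] for a in attrs)
print(minimal_counterfactual(score, item_attrs, threshold=0.5))
# frozenset({'brand:Acme'}) -- the brand alone explains the recommendation
```
The exhaustive enumeration is precisely the "huge search space" the abstract refers to; CERec's contribution is to replace it with an adaptive, knowledge-graph-guided sampler.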
Related papers
- Path-based summary explanations for graph recommenders -- extended version [2.2789818122188925]
We propose summary explanations that highlight why a user or a group of users receives a set of item recommendations.
We also present a novel method to summarize explanations using efficient graph algorithms.
arXiv Detail & Related papers (2024-10-29T13:10:03Z)
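A minimal sketch of the summarization idea in the path-based entry above, under our own assumption (not necessarily the paper's) that explanation paths are relation sequences and a summary is the dominant pattern among them:
```python
from collections import Counter

# Hypothetical explanation paths (user -> ... -> item), encoded by their relation sequence.
paths = [
    ("purchased", "has_brand", "brand_of"),
    ("purchased", "has_brand", "brand_of"),
    ("rated", "has_genre", "genre_of"),
]

# Summarize: the dominant relation pattern covers most individual path explanations.
pattern, count = Counter(paths).most_common(1)[0]
print(f"{count}/{len(paths)} explanation paths follow {' -> '.join(pattern)}")
```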
- Beyond Persuasion: Towards Conversational Recommender System with Credible Explanations [63.05026345443155]
We propose a simple yet effective method, called PC-CRS, to enhance the credibility of CRS's explanations during persuasion.
Experimental results demonstrate the efficacy of PC-CRS in promoting persuasive and credible explanations.
Further analysis reveals why current methods produce non-credible explanations, and shows the potential of credible explanations to improve recommendation accuracy.
arXiv Detail & Related papers (2024-09-22T11:35:59Z)
- Stability of Explainable Recommendation [10.186029242664931]
We study the vulnerability of existing feature-oriented explainable recommenders.
We observe that all the explainable models are vulnerable to increased noise levels.
Our study provides an empirical verification of the robustness of explanations in recommender systems.
arXiv Detail & Related papers (2024-05-03T04:44:51Z)
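The vulnerability finding in the Stability entry above can be pictured with a small perturbation experiment. The feature-weight model, noise scale, and Jaccard-overlap stability measure below are our own illustrative choices, not the paper's protocol:
```python
import random

def top_features(weights: dict, k: int = 2) -> set:
    """Explanation = the k highest-weight features."""
    return set(sorted(weights, key=weights.get, reverse=True)[:k])

def perturb(weights: dict, sigma: float) -> dict:
    """Add Gaussian noise to every feature weight."""
    return {f: w + random.gauss(0.0, sigma) for f, w in weights.items()}

random.seed(0)
weights = {"brand": 0.9, "price": 0.7, "color": 0.2, "size": 0.1}
base = top_features(weights)
for sigma in (0.05, 0.5):
    # Jaccard overlap between clean and noisy explanations: lower = less stable.
    noisy = top_features(perturb(weights, sigma))
    jaccard = len(base & noisy) / len(base | noisy)
    print(f"sigma={sigma}: stability={jaccard:.2f}")
```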
- Unlocking the Potential of Large Language Models for Explainable Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be well generated.
arXiv Detail & Related papers (2023-12-25T09:09:54Z)
- RecXplainer: Amortized Attribute-based Personalized Explanations for Recommender Systems [35.57265154621778]
We propose RecXplainer, a novel method for generating fine-grained explanations based on a user's preferences over the attributes of recommended items.
We evaluate RecXplainer on five real-world and large-scale recommendation datasets.
arXiv Detail & Related papers (2022-11-27T21:00:31Z)
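RecXplainer's attribute-level explanations, per the entry above, can be imagined as ranking the attributes of a recommended item by the user's estimated preference for them. The embeddings and dot-product scorer below are invented for illustration, not the paper's trained model:
```python
# Toy version: score each attribute of a recommended item by the dot product
# between a user embedding and a hypothetical learned attribute embedding.
user = [0.8, 0.1, 0.3]
attribute_embeddings = {
    "brand:Acme":     [0.9, 0.0, 0.2],
    "category:shoes": [0.1, 0.9, 0.1],
    "color:red":      [0.2, 0.1, 0.8],
}

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

ranked = sorted(attribute_embeddings, key=lambda a: dot(user, attribute_embeddings[a]), reverse=True)
print("explanation: recommended because you like", ranked[0])
```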
- Breaking Feedback Loops in Recommender Systems with Causal Inference [99.22185950608838]
Recent work has shown that feedback loops may compromise recommendation quality and homogenize user behavior.
We propose the Causal Adjustment for Feedback Loops (CAFL), an algorithm that provably breaks feedback loops using causal inference.
We show that CAFL improves recommendation quality when compared to prior correction methods.
arXiv Detail & Related papers (2022-07-04T17:58:39Z)
- Explainability in Music Recommender Systems [69.0506502017444]
We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs).
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within a MRS and in what form explanations can be provided.
arXiv Detail & Related papers (2022-01-25T18:32:11Z)
- Counterfactual Explainable Recommendation [22.590877963169103]
We propose Counterfactual Explainable Recommendation (CountER), which takes the insights of counterfactual reasoning from causal inference for explainable recommendation.
CountER seeks simple (low complexity) and effective (high strength) explanations for the model decision.
Results show that our model generates more accurate and effective explanations than state-of-the-art explainable recommendation models.
arXiv Detail & Related papers (2021-08-24T06:37:57Z)
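CountER's "simple (low complexity) and effective (high strength)" trade-off, as summarized above, can be read as a constrained search: among candidate aspect changes, choose the least complex one whose effect on the model score is strong enough to flip the decision. The candidates and thresholds below are invented for illustration:
```python
# Candidate explanations: (aspects changed, resulting score drop for the recommended item).
candidates = [
    ({"screen", "battery"}, 0.45),  # effective but complex
    ({"battery"}, 0.30),            # simple and strong enough
    ({"screen"}, 0.10),             # simple but too weak
]
required_drop = 0.25  # strength constraint: the drop must remove the item from the top-K

feasible = [(aspects, drop) for aspects, drop in candidates if drop >= required_drop]
# Complexity = number of aspects changed; prefer the simplest feasible explanation.
aspects, drop = min(feasible, key=lambda c: len(c[0]))
print(f"counterfactual explanation: change {aspects} (score drop {drop})")
```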
- Explanation as a Defense of Recommendation [34.864709791648195]
We propose to enforce the idea of sentiment alignment between a recommendation and its corresponding explanation.
Our solution outperforms a rich set of baselines in both recommendation and explanation tasks.
arXiv Detail & Related papers (2021-01-24T06:34:36Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
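The re-ranking approach in the fairness entry above can be sketched as a post-hoc adjustment of a relevance ranking. The minimum-share constraint below is our own simplification of a fairness constraint, not the paper's formulation:
```python
# Generic fairness-constrained re-ranking (our simplification):
# greedily fill a top-K list while guaranteeing a minimum share for a protected group.
candidates = [  # (item, relevance, group)
    ("i1", 0.90, "mainstream"), ("i2", 0.80, "mainstream"),
    ("i3", 0.55, "niche"), ("i4", 0.60, "mainstream"), ("i5", 0.50, "niche"),
]
K, min_protected = 3, 1  # at least one "niche" item in the top-3

ranked = sorted(candidates, key=lambda c: c[1], reverse=True)
top = ranked[:K]
if sum(1 for _, _, g in top if g == "niche") < min_protected:
    best_niche = next(c for c in ranked if c[2] == "niche")
    top[-1] = best_niche  # swap the weakest pick for the strongest protected item
print([item for item, _, _ in top])  # ['i1', 'i2', 'i3']
```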
- Hybrid Deep Embedding for Recommendations with Dynamic Aspect-Level Explanations [60.78696727039764]
We propose a novel model called Hybrid Deep Embedding (HDE) for aspect-based explainable recommendations.
The main idea of HDE is to learn the dynamic embeddings of users and items for rating prediction.
As the aspect preference/quality of users/items is learned automatically, HDE is able to capture the impact of aspects that are not mentioned in reviews of a user or an item.
arXiv Detail & Related papers (2020-01-18T13:16:32Z)
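The aspect-level idea in the HDE entry above can be pictured as a preference-weighted combination of aspect qualities. The numbers and functional form below are hand-set toys; the actual model learns dynamic embeddings for users and items:
```python
# Toy aspect-weighted rating prediction in the spirit of the HDE entry above.
user_aspect_pref = {"sound": 0.7, "battery": 0.3}     # how much the user cares about each aspect
item_aspect_quality = {"sound": 4.5, "battery": 3.0}  # how well the item does on each aspect

# Predicted rating = preference-weighted sum of aspect qualities, which can also
# cover aspects never mentioned in this user's or item's reviews.
rating = sum(user_aspect_pref[a] * item_aspect_quality[a] for a in user_aspect_pref)
print(f"predicted rating: {rating:.2f}")  # 4.05
```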