Learning to Counterfactually Explain Recommendations
- URL: http://arxiv.org/abs/2211.09752v1
- Date: Thu, 17 Nov 2022 18:21:21 GMT
- Title: Learning to Counterfactually Explain Recommendations
- Authors: Yuanshun Yao, Chong Wang, Hang Li
- Abstract summary: We propose a learning-based framework to generate counterfactual explanations.
To generate an explanation, we find the history subset predicted by the surrogate model that is most likely to remove the recommendation.
- Score: 14.938252589829673
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Recommender system practitioners are facing increasing pressure to explain
recommendations. We explore how to explain recommendations using counterfactual
logic, i.e. "Had you not interacted with the following items before, it is
likely we would not recommend this item." Compared to traditional explanation
logic, counterfactual explanations are easier to understand and more
technically verifiable. The major challenge of generating such explanations is
the computational cost because it requires repeatedly retraining the models to
obtain the effect on a recommendation caused by removing user (interaction)
history. We propose a learning-based framework to generate counterfactual
explanations. The key idea is to train a surrogate model to learn the effect of
removing a subset of user history on the recommendation. To this end, we first
artificially simulate the counterfactual outcomes on the recommendation after
deleting subsets of history. Then we train surrogate models to learn the
mapping between a history deletion and the change in the recommendation caused
by the deletion. Finally, to generate an explanation, we find the history
subset predicted by the surrogate model that is most likely to remove the
recommendation. Through offline experiments and online user studies, we show
that, compared to baselines, our method generates explanations that are more
counterfactually valid and that users judge more satisfactory.
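The three-step pipeline from the abstract (simulate deletions, train a surrogate on the resulting score changes, search for the subset with the largest predicted effect) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the black-box recommender is stood in for by a toy linear scorer, the surrogate is a least-squares fit, and all names (`rec_score`, the sampling scheme, `k`) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a black-box recommender: the target item's score is a
# weighted sum over the user's history items (weights unknown to the explainer).
true_weights = rng.normal(size=8)

def rec_score(mask):
    """Score of the recommended item when history items with mask == 0 are deleted."""
    return float(true_weights @ mask)

base_score = rec_score(np.ones(8))

# Step 1: artificially simulate counterfactual outcomes by deleting
# random subsets of the user's history and re-scoring the item.
masks = (rng.random((200, 8)) > 0.3).astype(float)   # 1 = item kept, 0 = deleted
score_drops = np.array([base_score - rec_score(m) for m in masks])

# Step 2: train a surrogate mapping a deletion pattern to the score change
# (a linear least-squares fit stands in for the paper's learned surrogate).
deletions = 1.0 - masks                               # 1 = item deleted
coef, *_ = np.linalg.lstsq(deletions, score_drops, rcond=None)

# Step 3: the explanation is a small history subset the surrogate predicts
# is most likely to remove the recommendation (largest predicted score drop).
k = 2
explanation = np.argsort(coef)[-k:]
print("explain via history items:", sorted(explanation.tolist()))
```

Because the surrogate is queried instead of the real model, step 3 avoids the repeated retraining that makes naive counterfactual search expensive.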
Related papers
- Explaining the (Not So) Obvious: Simple and Fast Explanation of STAN, a Next Point of Interest Recommendation System [0.5796859155047135]
Some machine learning methods are inherently explainable, and thus are not completely black box.
This enables developers to make sense of the output without developing a complex and expensive explainability technique.
We demonstrate this philosophy/paradigm in STAN, a next Point of Interest recommendation system based on collaborative filtering and sequence prediction.
arXiv Detail & Related papers (2024-10-04T18:14:58Z) - Unlocking the Potential of Large Language Models for Explainable Recommendations [55.29843710657637]
It remains uncertain what impact replacing the explanation generator with the recently emerging large language models (LLMs) would have.
In this study, we propose LLMXRec, a simple yet effective two-stage explainable recommendation framework.
By adopting several key fine-tuning techniques, controllable and fluent explanations can be well generated.
arXiv Detail & Related papers (2023-12-25T09:09:54Z) - Latent User Intent Modeling for Sequential Recommenders [92.66888409973495]
Sequential recommender models learn to predict the next items a user is likely to interact with based on their interaction history on the platform.
Most sequential recommenders however lack a higher-level understanding of user intents, which often drive user behaviors online.
Intent modeling is thus critical for understanding users and optimizing long-term user experience.
arXiv Detail & Related papers (2022-11-17T19:00:24Z) - Reinforced Path Reasoning for Counterfactual Explainable Recommendation [10.36395995374108]
We propose a novel Counterfactual Explainable Recommendation (CERec) to generate item attribute-based counterfactual explanations.
We reduce the huge search space with an adaptive path sampler that exploits the rich context information of a given knowledge graph.
arXiv Detail & Related papers (2022-07-14T05:59:58Z) - Improving Sequential Query Recommendation with Immediate User Feedback [6.925738064847176]
We propose an algorithm for next query recommendation in interactive data exploration settings.
We conduct a large-scale experimental study using log files from a popular online literature discovery service.
arXiv Detail & Related papers (2022-05-12T18:19:24Z) - Counterfactual Explainable Recommendation [22.590877963169103]
We propose Counterfactual Explainable Recommendation (CountER), which takes the insights of counterfactual reasoning from causal inference for explainable recommendation.
CountER seeks simple (low complexity) and effective (high strength) explanations for the model decision.
Results show that our model generates more accurate and effective explanations than state-of-the-art explainable recommendation models.
arXiv Detail & Related papers (2021-08-24T06:37:57Z) - Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting [100.75479161884935]
We propose a novel training paradigm called Remembering for the Right Reasons (RRR)
RRR stores visual model explanations for each example in the buffer and ensures the model has "the right reasons" for its predictions.
We demonstrate how RRR can be easily added to any memory or regularization-based approach and results in reduced forgetting.
arXiv Detail & Related papers (2020-10-04T10:05:27Z) - Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z) - Learning Post-Hoc Causal Explanations for Recommendation [43.300372759620664]
We propose to extract causal rules from the user interaction history as post-hoc explanations for the black-box sequential recommendation mechanisms.
Our approach achieves counterfactual examples with the aid of a perturbation model, and then extracts personalized causal relationships for the recommendation model.
Experiments are conducted on several state-of-the-art sequential recommendation models and real-world datasets.
arXiv Detail & Related papers (2020-06-30T17:14:12Z) - User Memory Reasoning for Conversational Recommendation [68.34475157544246]
We study a conversational recommendation model which dynamically manages users' past (offline) preferences and current (online) requests.
MGConvRex captures human-level reasoning over user memory and has disjoint training/testing sets of users for zero-shot (cold-start) reasoning for recommendation.
arXiv Detail & Related papers (2020-05-30T05:29:23Z) - An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and prediction.
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
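The information-theoretic measure above, I(explanation; prediction | user knowledge), can be estimated empirically from discrete samples. The paper's specific probabilistic model is not reproduced here; this is a generic plug-in estimator over aligned sample lists, with the toy data an assumption for illustration.

```python
import math
from collections import Counter

def conditional_mutual_information(e, y, u):
    """Empirical I(E; Y | U) in nats from three aligned discrete sample lists."""
    n = len(e)
    p_eyu = Counter(zip(e, y, u))
    p_eu = Counter(zip(e, u))
    p_yu = Counter(zip(y, u))
    p_u = Counter(u)
    cmi = 0.0
    for (ei, yi, ui), c in p_eyu.items():
        # I(E;Y|U) = sum_{e,y,u} p(e,y,u) * log[ p(e,y,u) p(u) / (p(e,u) p(y,u)) ]
        cmi += (c / n) * math.log(
            (c / n) * (p_u[ui] / n) / ((p_eu[(ei, ui)] / n) * (p_yu[(yi, ui)] / n))
        )
    return cmi

# An explanation that fully determines the prediction carries maximal information:
e = [0, 1] * 50   # explanation shown to the user
y = [0, 1] * 50   # resulting prediction
u = [0] * 100     # constant background knowledge
print(round(conditional_mutual_information(e, y, u), 3))  # ~ log 2
```

Under this measure, an explanation independent of the prediction given the user's knowledge scores zero, matching the intuition that it conveys nothing new.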
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.