Learning Post-Hoc Causal Explanations for Recommendation
- URL: http://arxiv.org/abs/2006.16977v2
- Date: Tue, 23 Feb 2021 17:32:32 GMT
- Title: Learning Post-Hoc Causal Explanations for Recommendation
- Authors: Shuyuan Xu, Yunqi Li, Shuchang Liu, Zuohui Fu, Xu Chen, Yongfeng Zhang
- Abstract summary: We propose to extract causal rules from the user interaction history as post-hoc explanations for the black-box sequential recommendation mechanisms.
Our approach generates counterfactual examples with the aid of a perturbation model, and then extracts personalized causal relationships for the recommendation model.
Experiments are conducted on several state-of-the-art sequential recommendation models and real-world datasets.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: State-of-the-art recommender systems have the ability to generate
high-quality recommendations, but usually cannot provide intuitive explanations
to humans due to the usage of black-box prediction models. The lack of
transparency has highlighted the critical importance of improving the
explainability of recommender systems. In this paper, we propose to extract
causal rules from the user interaction history as post-hoc explanations for the
black-box sequential recommendation mechanisms, while maintaining the predictive
accuracy of the recommendation model. Our approach first generates
counterfactual examples with the aid of a perturbation model, and then extracts
personalized causal relationships for the recommendation model through a causal
rule mining algorithm. Experiments are conducted on several state-of-the-art
sequential recommendation models and real-world datasets to verify the
performance of our model in generating causal explanations. We also
evaluate the discovered causal explanations in terms of quality and fidelity,
which show that compared with conventional association rules, causal rules can
provide personalized and more effective explanations for the behavior of
black-box recommendation models.
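The pipeline described in the abstract (perturb a user's interaction history, observe whether the black-box model's recommendation changes, and mine item-to-recommendation rules from the counterfactuals) can be sketched roughly as below. This is a minimal illustrative toy, not the paper's actual algorithm: the model interface, the random single-item-drop perturbation (standing in for the paper's learned perturbation model), and the flip-rate threshold are all assumptions made for the sketch.

```python
import random
from collections import Counter

def mine_causal_rules(model, history, n_perturbations=200, threshold=0.3, seed=0):
    """Toy post-hoc causal-rule miner (illustrative only).

    model:    callable mapping an interaction sequence to a recommended item
    history:  the user's original interaction sequence (list of item ids)
    Returns candidate rules {history_item: recommended_item} whose removal
    flips the recommendation often enough to suggest a causal link.
    """
    rng = random.Random(seed)
    original_rec = model(history)
    flip_counts = Counter()   # how often dropping an item changed the output
    drop_counts = Counter()   # how often each item was dropped

    for _ in range(n_perturbations):
        # Perturbation: randomly drop one item from the history
        # (a crude stand-in for a learned perturbation model).
        i = rng.randrange(len(history))
        perturbed = history[:i] + history[i + 1:]
        drop_counts[history[i]] += 1
        if model(perturbed) != original_rec:
            flip_counts[history[i]] += 1

    # An item is a candidate cause if removing it flips the recommendation
    # in at least `threshold` of the trials where it was dropped.
    return {
        item: original_rec
        for item in drop_counts
        if flip_counts[item] / drop_counts[item] >= threshold
    }

# Toy black-box "sequential recommender": recommends item 99 iff item 7
# appears in the history, otherwise item 42.
toy_model = lambda seq: 99 if 7 in seq else 42
rules = mine_causal_rules(toy_model, [3, 7, 12, 5])
print(rules)  # → {7: 99}: item 7 is identified as the cause of recommending 99
```

The key design point the sketch shares with the paper's setting is that the recommender is queried purely as a black box: causal candidates come from comparing factual and counterfactual outputs, never from the model's internals.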
Related papers
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- Explainable Recommender with Geometric Information Bottleneck [25.703872435370585]
We propose to incorporate a geometric prior learnt from user-item interactions into a variational network.
Latent factors from an individual user-item pair can be used for both recommendation and explanation generation.
Experimental results on three e-commerce datasets show that our model significantly improves the interpretability of a variational recommender.
arXiv Detail & Related papers (2023-05-09T10:38:36Z)
- Causal Disentangled Variational Auto-Encoder for Preference Understanding in Recommendation [50.93536377097659]
This paper introduces the Causal Disentangled Variational Auto-Encoder (CaD-VAE), a novel approach for learning causal disentangled representations from interaction data in recommender systems.
The approach utilizes structural causal models to generate causal representations that describe the causal relationship between latent factors.
arXiv Detail & Related papers (2023-04-17T00:10:56Z)
- CausPref: Causal Preference Learning for Out-of-Distribution Recommendation [36.22965012642248]
Current recommender systems are still vulnerable to distribution shifts of users and items in realistic scenarios.
We propose to incorporate the recommendation-specific DAG learner into a novel causal preference-based recommendation framework named CausPref.
Our approach significantly surpasses the benchmark models under various types of out-of-distribution settings.
arXiv Detail & Related papers (2022-02-08T16:42:03Z)
- Deep Causal Reasoning for Recommendations [47.83224399498504]
A new trend in recommender system research is to negate the influence of confounders from a causal perspective.
We model the recommendation as a multi-cause multi-outcome (MCMO) inference problem.
We show that MCMO modeling may lead to high variance due to scarce observations associated with the high-dimensional causal space.
arXiv Detail & Related papers (2022-01-06T15:00:01Z)
- Debiased Explainable Pairwise Ranking from Implicit Feedback [0.3867363075280543]
We focus on the state-of-the-art pairwise ranking model, Bayesian Personalized Ranking (BPR).
BPR is a black box model that does not explain its outputs, thus limiting the user's trust in the recommendations.
We propose a novel explainable loss function and a corresponding Matrix Factorization-based model that generates recommendations along with item-based explanations.
arXiv Detail & Related papers (2021-07-30T17:19:37Z)
- Fast Multi-Step Critiquing for VAE-based Recommender Systems [27.207067974031805]
We present M&Ms-VAE, a novel variational autoencoder for recommendation and explanation.
We train the model under a weak supervision scheme to simulate both fully and partially observed variables.
We then leverage the generalization ability of a trained M&Ms-VAE model to embed the user preference and the critique separately.
arXiv Detail & Related papers (2021-05-03T12:26:09Z)
- Explainable Recommendation Systems by Generalized Additive Models with Manifest and Latent Interactions [3.022014732234611]
We propose an explainable recommendation system based on a generalized additive model with manifest and latent interactions.
A new Python package GAMMLI is developed for efficient model training and visualized interpretation of the results.
arXiv Detail & Related papers (2020-12-15T10:29:12Z)
- Model extraction from counterfactual explanations [68.8204255655161]
We show how an adversary can leverage the information provided by counterfactual explanations to build high-fidelity and high-accuracy model extraction attacks.
Our attack enables the adversary to build a faithful copy of a target model by accessing its counterfactual explanations.
arXiv Detail & Related papers (2020-09-03T19:02:55Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.