From Intrinsic to Counterfactual: On the Explainability of
Contextualized Recommender Systems
- URL: http://arxiv.org/abs/2110.14844v1
- Date: Thu, 28 Oct 2021 01:54:04 GMT
- Title: From Intrinsic to Counterfactual: On the Explainability of
Contextualized Recommender Systems
- Authors: Yao Zhou, Haonan Wang, Jingrui He, Haixun Wang
- Abstract summary: We show that by utilizing the contextual features (e.g., item reviews from users), we can design a series of explainable recommender systems.
We propose three explainable recommendation strategies with decreasing model transparency: whitebox, graybox, and blackbox.
Our model achieves highly competitive ranking performance and generates accurate and effective explanations, as demonstrated by numerous quantitative metrics and qualitative visualizations.
- Score: 43.93801836660617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the prevalence of deep learning based embedding approaches, recommender
systems have become a proven and indispensable tool in various information
filtering applications. However, for many of them it remains difficult to
diagnose which aspects of the deep models' input drive the final ranking
decision; thus, they often cannot be understood by human stakeholders. In this
paper, we investigate
the dilemma between recommendation and explainability, and show that by
utilizing the contextual features (e.g., item reviews from users), we can
design a series of explainable recommender systems without sacrificing their
performance. In particular, we propose three explainable recommendation
strategies with decreasing model transparency: whitebox, graybox, and
blackbox. Each strategy explains its ranking decisions via
different mechanisms: attention weights, adversarial perturbations, and
counterfactual perturbations. We apply these explainable models to five
real-world data sets under the contextualized setting, where users and items
have explicit interactions. The empirical results show that our model achieves
highly competitive ranking performance and generates accurate and effective
explanations, as demonstrated by numerous quantitative metrics and qualitative
visualizations.
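To make the three mechanisms concrete, here is a minimal PyTorch sketch of the counterfactual-perturbation idea: search for the smallest sparse change to an item's contextual features that flips its ranking score. The function name, loss terms, and hyper-parameters are illustrative assumptions, not the paper's actual formulation.

    import torch

    def counterfactual_perturbation(score_fn, context, margin=1.0, lam=0.5,
                                    steps=200, lr=0.05):
        """Search for a sparse perturbation of the contextual features that
        lowers the item's ranking score by at least `margin` (illustrative
        sketch; `score_fn` is any differentiable context-to-score model)."""
        base = score_fn(context).detach()
        delta = torch.zeros_like(context, requires_grad=True)
        opt = torch.optim.Adam([delta], lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            score = score_fn(context + delta)
            # Hinge term: becomes zero once the score drops below base - margin.
            flip_loss = torch.relu(score - (base - margin))
            # L1 penalty keeps delta sparse, so the few changed features
            # form a compact, human-readable explanation.
            loss = flip_loss + lam * delta.abs().sum()
            loss.backward()
            opt.step()
        return delta.detach()

The entries of the returned delta with the largest magnitude indicate which contextual features (e.g., review aspects) would have to change for the item to lose its ranking position, which is what a counterfactual explanation reports.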
Related papers
- Robust Explainable Recommendation [10.186029242664931]
We present a general framework for feature-aware explainable recommenders that can withstand external attacks.
Our framework is simple to implement and supports different methods regardless of the internal model structure and intrinsic utility within any model.
arXiv Detail & Related papers (2024-05-03T05:03:07Z)
- Explainable Recommender with Geometric Information Bottleneck [25.703872435370585]
We propose to incorporate a geometric prior learnt from user-item interactions into a variational network.
Latent factors from an individual user-item pair can be used for both recommendation and explanation generation.
Experimental results on three e-commerce datasets show that our model significantly improves the interpretability of a variational recommender.
arXiv Detail & Related papers (2023-05-09T10:38:36Z)
- Better Understanding Differences in Attribution Methods via Systematic Evaluations [57.35035463793008]
Post-hoc attribution methods have been proposed to identify image regions most influential to the models' decisions.
We propose three novel evaluation schemes to more reliably measure the faithfulness of those methods.
We use these evaluation schemes to study strengths and shortcomings of some widely used attribution methods over a wide range of models.
arXiv Detail & Related papers (2023-03-21T14:24:58Z)
- Variational Distillation for Multi-View Learning [104.17551354374821]
We design several variational information bottlenecks to exploit two key characteristics for multi-view representation learning.
Under a rigorous theoretical guarantee, our approach enables the information bottleneck (IB) to capture the intrinsic correlation between observations and semantic labels.
arXiv Detail & Related papers (2022-06-20T03:09:46Z)
- Exploring the Trade-off between Plausibility, Change Intensity and Adversarial Power in Counterfactual Explanations using Multi-objective Optimization [73.89239820192894]
We argue that automated counterfactual generation should take into account several aspects of the produced adversarial instances.
We present a novel framework for the generation of counterfactual examples.
arXiv Detail & Related papers (2022-05-20T15:02:53Z)
- On the Objective Evaluation of Post Hoc Explainers [10.981508361941335]
Modern trends in machine learning research have led to algorithms so intricate that they are considered black boxes.
In an effort to reduce the opacity of their decisions, methods have been proposed to explain the inner workings of such models in a human-comprehensible manner.
We propose a framework for the evaluation of post hoc explainers on ground truth that is directly derived from the additive structure of a model (see the sketch after this list).
arXiv Detail & Related papers (2021-06-15T19:06:51Z)
- Making Neural Networks Interpretable with Attribution: Application to Implicit Signals Prediction [11.427019313283997]
We propose a novel formulation of interpretable deep neural networks for the attribution task.
Using masked weights, hidden features can be deeply attributed, split into several input-restricted sub-networks and trained as a boosted mixture of experts.
arXiv Detail & Related papers (2020-08-26T06:46:49Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
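As a companion to the "On the Objective Evaluation of Post Hoc Explainers" entry above, the following is a minimal sketch of its additive ground-truth idea: when a model is a sum of per-feature functions, each feature's exact contribution is known, so any explainer's attributions can be scored against it. The component functions, baseline, and error metric below are assumptions chosen for illustration.

    import numpy as np

    # Additive model f(x) = g_0(x_0) + g_1(x_1) + g_2(x_2); the component
    # functions are arbitrary illustrative choices.
    gs = [np.sin, np.square, lambda x: 0.5 * x]

    def f(x):
        return sum(g(xi) for g, xi in zip(gs, x))

    def ground_truth_attribution(x, baseline):
        # Feature i's exact contribution relative to the baseline is
        # g_i(x_i) - g_i(baseline_i); no approximation is needed.
        return np.array([g(xi) - g(bi) for g, xi, bi in zip(gs, x, baseline)])

    x = np.array([0.3, -1.2, 2.0])
    baseline = np.zeros(3)
    truth = ground_truth_attribution(x, baseline)

    # Any post hoc explainer applied to f can now be scored directly,
    # e.g., by the mean absolute error of its attributions against `truth`.
    explainer_attr = truth + np.random.normal(0, 0.05, size=3)  # stand-in output
    print(np.abs(explainer_attr - truth).mean())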