Explainable Recommendation Systems by Generalized Additive Models with
Manifest and Latent Interactions
- URL: http://arxiv.org/abs/2012.08196v1
- Date: Tue, 15 Dec 2020 10:29:12 GMT
- Title: Explainable Recommendation Systems by Generalized Additive Models with
Manifest and Latent Interactions
- Authors: Yifeng Guo, Yu Su, Zebin Yang and Aijun Zhang
- Abstract summary: We propose an explainable recommendation system based on a generalized additive model with manifest and latent interactions.
A new Python package GAMMLI is developed for efficient model training and visualized interpretation of the results.
- Score: 3.022014732234611
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, the field of recommendation systems has paid increasing
attention to developing predictive models that explain why an
item is recommended to a user. The explanations can either be obtained by
post-hoc diagnostics after fitting a relatively complex model or embedded into
an intrinsically interpretable model. In this paper, we propose an explainable
recommendation system based on a generalized additive model with manifest and
latent interactions (GAMMLI). This model architecture is intrinsically
interpretable, as it additively consists of the user and item main effects, the
manifest user-item interactions based on observed features, and the latent
interaction effects from residuals. Unlike conventional collaborative filtering
methods, GAMMLI takes the group effects of users and items into account, which
enhances model interpretability and also helps address the cold-start
recommendation problem. A new Python package, GAMMLI, is developed
for efficient model training and visualized interpretation of the results. By
numerical experiments based on simulation data and real-world cases, the
proposed method is shown to have advantages in both predictive performance and
explainable recommendation.
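The additive decomposition described in the abstract can be sketched with toy values. This is a minimal illustration of the four-term structure only; the parameter names and numbers below are assumptions for exposition, not the GAMMLI package's actual API or a fitted model.

```python
import random

random.seed(0)
n_users, n_items, n_feats, rank = 50, 40, 3, 2

def rand_vec(n):
    return [random.gauss(0, 1) for _ in range(n)]

def rand_mat(r, c):
    return [rand_vec(c) for _ in range(r)]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

# Toy parameters for each additive component (illustrative, not fitted).
user_main = rand_vec(n_users)        # user main effects
item_main = rand_vec(n_items)        # item main effects
W = rand_mat(n_feats, n_feats)       # manifest interaction weights
U = rand_mat(n_users, rank)          # latent user factors (from residuals)
V = rand_mat(n_items, rank)          # latent item factors
x_user = rand_mat(n_users, n_feats)  # observed user features
x_item = rand_mat(n_items, n_feats)  # observed item features

def manifest(u, i):
    """Bilinear interaction of observed user and item features."""
    return sum(x_user[u][p] * W[p][q] * x_item[i][q]
               for p in range(n_feats) for q in range(n_feats))

def predict(u, i):
    """Additive score: user main + item main + manifest + latent interaction."""
    return user_main[u] + item_main[i] + manifest(u, i) + dot(U[u], V[i])

# The score decomposes exactly into four named components, which is what
# makes the additive structure intrinsically interpretable.
parts = {
    "user_main": user_main[3],
    "item_main": item_main[7],
    "manifest": manifest(3, 7),
    "latent": dot(U[3], V[7]),
}
assert abs(predict(3, 7) - sum(parts.values())) < 1e-12
```

Because each prediction is a sum of separately stored terms, an explanation for a recommendation can simply report the per-term contributions.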
Related papers
- Increasing Performance And Sample Efficiency With Model-agnostic
Interactive Feature Attributions [3.0655581300025996]
We provide model-agnostic implementations for two popular explanation methods (Occlusion and Shapley values) to enforce entirely different attributions in the complex model.
We show how our proposed approach can significantly improve the model's performance only by augmenting its training dataset based on corrected explanations.
arXiv Detail & Related papers (2023-06-28T15:23:28Z)
- Causal Disentangled Variational Auto-Encoder for Preference Understanding in
Recommendation [50.93536377097659]
This paper introduces the Causal Disentangled Variational Auto-Encoder (CaD-VAE), a novel approach for learning causal disentangled representations from interaction data in recommender systems.
The approach utilizes structural causal models to generate causal representations that describe the causal relationship between latent factors.
arXiv Detail & Related papers (2023-04-17T00:10:56Z)
- Multidimensional Item Response Theory in the Style of Collaborative
Filtering [0.8057006406834467]
This paper presents a machine learning approach to multidimensional item response theory (MIRT).
Inspired by collaborative filtering, we define a general class of models that includes many MIRT models.
We discuss the use of penalized joint maximum likelihood (JML) to estimate individual models and cross-validation to select the best performing model.
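Penalized joint maximum likelihood, as used in that paper, can be sketched for the simplest one-dimensional case: person abilities and item difficulties are both treated as free parameters, an L2 penalty is added, and the penalized log-likelihood is maximized by gradient ascent. Everything below (dimensions, penalty weight, learning rate) is an illustrative assumption, not the paper's implementation.

```python
import math
import random

random.seed(1)
n_persons, n_items = 30, 10

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Simulate binary responses from a Rasch-type model (toy data).
theta_true = [random.gauss(0, 1) for _ in range(n_persons)]
b_true = [random.gauss(0, 1) for _ in range(n_items)]
Y = [[1 if random.random() < sigmoid(theta_true[p] - b_true[i]) else 0
      for i in range(n_items)] for p in range(n_persons)]

# Penalized JML: jointly estimate abilities (theta) and difficulties (b)
# with an L2 penalty of weight lam, via plain gradient ascent.
lam, lr, steps = 0.1, 0.05, 300
theta = [0.0] * n_persons
b = [0.0] * n_items

def penalized_loglik():
    ll = 0.0
    for p in range(n_persons):
        for i in range(n_items):
            pr = sigmoid(theta[p] - b[i])
            ll += math.log(pr) if Y[p][i] else math.log(1.0 - pr)
    pen = lam * (sum(t * t for t in theta) + sum(d * d for d in b))
    return ll - pen

ll_start = penalized_loglik()
for _ in range(steps):
    g_theta = [0.0] * n_persons
    g_b = [0.0] * n_items
    for p in range(n_persons):
        for i in range(n_items):
            r = Y[p][i] - sigmoid(theta[p] - b[i])  # residual drives both gradients
            g_theta[p] += r
            g_b[i] -= r
    theta = [t + lr * (g - 2 * lam * t) for t, g in zip(theta, g_theta)]
    b = [d + lr * (g - 2 * lam * d) for d, g in zip(b, g_b)]

ll_end = penalized_loglik()
assert ll_end > ll_start  # ascent improved the penalized likelihood
```

The penalty weight plays the role of the tuning parameter that the paper selects by cross-validation.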
arXiv Detail & Related papers (2023-01-03T00:56:27Z)
- Ordinal Graph Gamma Belief Network for Social Recommender Systems [54.9487910312535]
We develop a hierarchical Bayesian model termed ordinal graph factor analysis (OGFA), which jointly models user-item and user-user interactions.
OGFA not only achieves good recommendation performance, but also extracts interpretable latent factors corresponding to representative user preferences.
We extend OGFA to ordinal graph gamma belief network, which is a multi-stochastic-layer deep probabilistic model.
arXiv Detail & Related papers (2022-09-12T09:19:22Z)
- MACE: An Efficient Model-Agnostic Framework for Counterfactual
Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z)
- Reinforcement Learning based Path Exploration for Sequential Explainable
Recommendation [57.67616822888859]
We propose a novel Temporal Meta-path Guided Explainable Recommendation leveraging Reinforcement Learning (TMER-RL).
TMER-RL utilizes reinforcement-learning-based item-item path modelling between consecutive items with attention mechanisms to sequentially model dynamic user-item evolution on a dynamic knowledge graph for explainable recommendation.
Extensive evaluations of TMER on two real-world datasets show state-of-the-art performance compared against recent strong baselines.
arXiv Detail & Related papers (2021-11-24T04:34:26Z)
- Deep Variational Models for Collaborative Filtering-based Recommender
Systems [63.995130144110156]
Deep learning provides accurate collaborative filtering models to improve recommender system results.
Our proposed models apply the variational concept to inject stochasticity into the latent space of the deep architecture.
Results show the superiority of the proposed approach in scenarios where the variational enrichment exceeds the injected noise effect.
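Injecting stochasticity into a latent space is typically done with the standard reparameterization trick: sample the embedding as mean plus scaled noise so the sample stays differentiable in the encoder parameters. A minimal sketch follows; the variable names and values are illustrative assumptions, not the paper's code.

```python
import math
import random

random.seed(0)

def reparameterize(mu, log_var):
    """Sample z = mu + sigma * eps with eps ~ N(0, 1); in an autodiff
    framework this keeps the sample differentiable in (mu, log_var)."""
    return [m + math.exp(0.5 * lv) * random.gauss(0, 1)
            for m, lv in zip(mu, log_var)]

# Hypothetical encoder output for one user embedding (not a trained model).
mu = [0.4, -1.2, 0.7]
log_var = [-2.0, -2.0, -2.0]   # sigma = exp(-1) ~ 0.37 per coordinate

z = reparameterize(mu, log_var)  # a stochastically perturbed embedding
```

The "variational enrichment exceeds the injected noise effect" finding then corresponds to choosing a noise scale small enough that the sampled embeddings stay informative.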
arXiv Detail & Related papers (2021-07-27T08:59:39Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Feature Interaction Interpretability: A Case for Explaining
Ad-Recommendation Systems via Neural Interaction Detection [14.37985060340549]
We propose a method to both interpret and augment the predictions of black-box recommender systems.
By not assuming the structure of the recommender system, our approach can be used in general settings.
arXiv Detail & Related papers (2020-06-19T05:14:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.