Explainable Recommendation Systems by Generalized Additive Models with
Manifest and Latent Interactions
- URL: http://arxiv.org/abs/2012.08196v1
- Date: Tue, 15 Dec 2020 10:29:12 GMT
- Title: Explainable Recommendation Systems by Generalized Additive Models with
Manifest and Latent Interactions
- Authors: Yifeng Guo, Yu Su, Zebin Yang and Aijun Zhang
- Abstract summary: We propose an explainable recommendation system based on a generalized additive model with manifest and latent interactions.
A new Python package GAMMLI is developed for efficient model training and visualized interpretation of the results.
- Score: 3.022014732234611
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, the field of recommendation systems has attracted increasing
attention to developing predictive models that provide explanations of why an
item is recommended to a user. The explanations can be either obtained by
post-hoc diagnostics after fitting a relatively complex model or embedded into
an intrinsically interpretable model. In this paper, we propose the explainable
recommendation systems based on a generalized additive model with manifest and
latent interactions (GAMMLI). This model architecture is intrinsically
interpretable, as it additively consists of the user and item main effects, the
manifest user-item interactions based on observed features, and the latent
interaction effects from residuals. Unlike conventional collaborative filtering
methods, GAMMLI considers the group effects of users and items. This enhances
model interpretability and also helps mitigate the cold-start recommendation
problem. A new Python package GAMMLI is developed
for efficient model training and visualized interpretation of the results. By
numerical experiments based on simulation data and real-world cases, the
proposed method is shown to have advantages in both predictive performance and
explainable recommendation.
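The additive structure described in the abstract can be illustrated with a minimal sketch. This is a hypothetical illustration of the score decomposition, not the actual GAMMLI package API: the predicted rating is assumed to be the sum of a global intercept, user and item main effects, a manifest user-item interaction from observed features, and a latent interaction from low-rank factors fit on the residuals.

```python
import numpy as np

def gammli_score(user_main, item_main, manifest, user_latent, item_latent, mu=0.0):
    """Additive prediction (illustrative):
    mu + user main effect + item main effect
       + manifest interaction + latent (low-rank) interaction.
    """
    latent = float(np.dot(user_latent, item_latent))
    return mu + user_main + item_main + manifest + latent

# Toy example with made-up effect values:
score = gammli_score(
    user_main=0.3,                      # main effect of user features
    item_main=-0.1,                     # main effect of item features
    manifest=0.2,                       # interaction of observed user/item features
    user_latent=np.array([0.5, -0.2]),  # user-side latent factors (residual fit)
    item_latent=np.array([0.4, 0.1]),   # item-side latent factors (residual fit)
    mu=3.5,                             # global intercept
)
# score = 3.5 + 0.3 - 0.1 + 0.2 + (0.5*0.4 - 0.2*0.1) = 4.08
```

Because each term enters additively, each component's contribution to a recommendation can be read off and visualized separately, which is the sense in which the model is intrinsically interpretable.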
Related papers
- Active inference and artificial reasoning [36.949648744325046]
This technical note considers the sampling of outcomes that provide the greatest amount of information about the structure of underlying world models. We focus on the sample efficiency afforded by seeking outcomes that resolve the greatest uncertainty about the world model.
arXiv Detail & Related papers (2025-12-24T11:59:36Z) - Model-agnostic post-hoc explainability for recommender systems [0.3437656066916039]
We develop a systematic application, adaptation, and evaluation of deletion diagnostics in the recommender setting. The method compares the performance of a model to that of a similar model trained without a specific user or item, allowing us to quantify how that observation influences the recommender. To demonstrate its model-agnostic nature, the proposal is applied to both Neural Collaborative Filtering (NCF), a widely used deep learning-based recommender, and Singular Value Decomposition (SVD), a classical collaborative filtering technique.
arXiv Detail & Related papers (2025-09-12T13:43:16Z) - ELIXIR: Efficient and LIghtweight model for eXplaIning Recommendations [1.9711529297777448]
We propose ELIXIR, a multi-task model combining rating prediction with personalized review generation. ELIXIR jointly learns global and aspect-specific representations of users and items, optimizing overall rating, aspect-level ratings, and review generation. Based on a T5-small (60M) model, we demonstrate the effectiveness of our aspect-based architecture in guiding text generation in a personalized context.
arXiv Detail & Related papers (2025-08-27T23:01:11Z) - Contrastive Learning for Cold Start Recommendation with Adaptive Feature Fusion [2.2194815687410627]
This paper proposes a cold start recommendation model that integrates contrastive learning.
The model dynamically adjusts the weights of key features through an adaptive feature selection module.
It integrates user attributes, item meta-information, and contextual features by combining a multimodal feature fusion mechanism.
arXiv Detail & Related papers (2025-02-05T23:15:31Z) - MixRec: Heterogeneous Graph Collaborative Filtering [21.96510707666373]
We present a graph collaborative filtering model MixRec to disentangle users' multi-behavior interaction patterns.
Our model achieves this by incorporating intent disentanglement and multi-behavior modeling.
We also introduce a novel contrastive learning paradigm that adaptively explores the advantages of self-supervised data augmentation.
arXiv Detail & Related papers (2024-12-18T13:12:36Z) - Interpret the Internal States of Recommendation Model with Sparse Autoencoder [26.021277330699963]
RecSAE is an automatic, generalizable probing method for interpreting the internal states of recommendation models.
We train an autoencoder with sparsity constraints to reconstruct internal activations of recommendation models.
We automated the construction of concept dictionaries based on the relationship between latent activations and input item sequences.
arXiv Detail & Related papers (2024-11-09T08:22:31Z) - Estimating Causal Effects from Learned Causal Networks [56.14597641617531]
We propose an alternative paradigm for answering causal-effect queries over discrete observable variables.
We learn the causal Bayesian network and its confounding latent variables directly from the observational data.
We show that this "model completion" learning approach can be more effective than estimand approaches.
arXiv Detail & Related papers (2024-08-26T08:39:09Z) - Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z) - Causal Disentangled Variational Auto-Encoder for Preference
Understanding in Recommendation [50.93536377097659]
This paper introduces the Causal Disentangled Variational Auto-Encoder (CaD-VAE), a novel approach for learning causal disentangled representations from interaction data in recommender systems.
The approach utilizes structural causal models to generate causal representations that describe the causal relationship between latent factors.
arXiv Detail & Related papers (2023-04-17T00:10:56Z) - Multidimensional Item Response Theory in the Style of Collaborative
Filtering [0.8057006406834467]
This paper presents a machine learning approach to multidimensional item response theory (MIRT).
Inspired by collaborative filtering, we define a general class of models that includes many MIRT models.
We discuss the use of penalized joint maximum likelihood (JML) to estimate individual models and cross-validation to select the best performing model.
arXiv Detail & Related papers (2023-01-03T00:56:27Z) - Ordinal Graph Gamma Belief Network for Social Recommender Systems [54.9487910312535]
We develop a hierarchical Bayesian model termed ordinal graph factor analysis (OGFA), which jointly models user-item and user-user interactions.
OGFA not only achieves good recommendation performance, but also extracts interpretable latent factors corresponding to representative user preferences.
We extend OGFA to ordinal graph gamma belief network, which is a multi-stochastic-layer deep probabilistic model.
arXiv Detail & Related papers (2022-09-12T09:19:22Z) - MACE: An Efficient Model-Agnostic Framework for Counterfactual
Explanation [132.77005365032468]
We propose a novel framework of Model-Agnostic Counterfactual Explanation (MACE).
In our MACE approach, we propose a novel RL-based method for finding good counterfactual examples and a gradient-less descent method for improving proximity.
Experiments on public datasets validate the effectiveness with better validity, sparsity and proximity.
arXiv Detail & Related papers (2022-05-31T04:57:06Z) - Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z) - Feature Interaction Interpretability: A Case for Explaining
Ad-Recommendation Systems via Neural Interaction Detection [14.37985060340549]
We propose a method to both interpret and augment the predictions of black-box recommender systems.
By not assuming the structure of the recommender system, our approach can be used in general settings.
arXiv Detail & Related papers (2020-06-19T05:14:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.