Explainable Recommender Systems via Resolving Learning Representations
- URL: http://arxiv.org/abs/2008.09316v1
- Date: Fri, 21 Aug 2020 05:30:48 GMT
- Title: Explainable Recommender Systems via Resolving Learning Representations
- Authors: Ninghao Liu, Yong Ge, Li Li, Xia Hu, Rui Chen, Soo-Hyun Choi
- Abstract summary: Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
- Score: 57.24565012731325
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recommender systems play a fundamental role in web applications in filtering
massive information and matching user interests. While many efforts have been
devoted to developing more effective models in various scenarios, the
exploration of explainability in recommender systems lags behind.
Explanations could help improve user experience and discover system defects. In
this paper, after formally introducing the elements that are related to model
explainability, we propose a novel explainable recommendation model through
improving the transparency of the representation learning process.
Specifically, to overcome the representation entangling problem in traditional
models, we revise traditional graph convolution to discriminate information
from different layers. Also, each representation vector is factorized into
several segments, where each segment relates to one semantic aspect in data.
Different from previous work, our model conducts factor discovery and representation
learning simultaneously, and it can incorporate additional attribute information and
knowledge. In this way, the proposed model can learn
interpretable and meaningful representations for users and items. Unlike
traditional methods that must trade off explainability against effectiveness, the
performance of our explainable model is not degraded by incorporating
explainability. Finally, comprehensive
experiments are conducted to validate the performance of our model as well as
explanation faithfulness.
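The abstract points at two concrete mechanisms: keeping the output of each graph-convolution layer distinguishable rather than collapsing layers together, and factorizing every representation vector into segments tied to semantic aspects. The PyTorch snippet below is a minimal sketch of that general recipe under my own assumptions; the class `SegmentedGraphConv`, the per-layer/per-segment score decomposition, and all hyperparameters are illustrative and are not the authors' actual implementation.

```python
import torch
import torch.nn as nn


class SegmentedGraphConv(nn.Module):
    """One propagation layer whose output is split into K segments,
    each meant to capture one latent aspect (hypothetical reading of
    the paper's 'factorized representation')."""

    def __init__(self, dim: int, num_segments: int):
        super().__init__()
        assert dim % num_segments == 0
        self.num_segments = num_segments
        self.seg_dim = dim // num_segments
        # one linear transform per segment so aspects are updated separately
        self.seg_transforms = nn.ModuleList(
            nn.Linear(self.seg_dim, self.seg_dim) for _ in range(num_segments)
        )

    def forward(self, emb: torch.Tensor, norm_adj: torch.Tensor) -> torch.Tensor:
        # emb: (num_nodes, dim); norm_adj: (num_nodes, num_nodes) normalized adjacency
        agg = norm_adj @ emb                       # neighborhood aggregation
        segments = agg.split(self.seg_dim, dim=1)  # factorize into aspect segments
        out = [torch.tanh(t(s)) for t, s in zip(self.seg_transforms, segments)]
        return torch.cat(out, dim=1)


# Toy usage: keep each layer's output separate instead of summing them,
# so a score can be attributed to individual layers and segments.
num_nodes, dim, num_segments, num_layers = 6, 8, 4, 2
emb0 = torch.randn(num_nodes, dim)
adj = torch.rand(num_nodes, num_nodes)
norm_adj = adj / adj.sum(dim=1, keepdim=True)      # row-normalized for the sketch

layers = [SegmentedGraphConv(dim, num_segments) for _ in range(num_layers)]
layer_outputs = [emb0]
for layer in layers:
    layer_outputs.append(layer(layer_outputs[-1], norm_adj))

# Decompose a user-item score per layer and per segment; this is the kind of
# attribution an explanation module could surface to end users.
user, item = 0, 3
per_layer_segment_scores = torch.stack([
    (h[user] * h[item]).reshape(num_segments, -1).sum(dim=1)
    for h in layer_outputs
])  # shape: (num_layers + 1, num_segments)
print(per_layer_segment_scores)
```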
Related papers
- Explanatory Model Monitoring to Understand the Effects of Feature Shifts on Performance [61.06245197347139]
We propose a novel approach to explain the behavior of a black-box model under feature shifts.
We refer to our method that combines concepts from Optimal Transport and Shapley Values as Explanatory Performance Estimation.
arXiv Detail & Related papers (2024-08-24T18:28:19Z)
- CNN-based explanation ensembling for dataset, representation and explanations evaluation [1.1060425537315088]
We explore the potential of ensembling explanations generated by deep classification models using a convolutional model.
Through experimentation and analysis, we aim to investigate the implications of combining explanations to uncover more coherent and reliable patterns of the model's behavior.
arXiv Detail & Related papers (2024-04-16T08:39:29Z)
- Revisiting Self-supervised Learning of Speech Representation from a Mutual Information Perspective [68.20531518525273]
We take a closer look at existing self-supervised speech representation methods from an information-theoretic perspective.
We use linear probes to estimate the mutual information between the target information and learned representations.
We explore the potential of evaluating representations in a self-supervised fashion, where we estimate the mutual information between different parts of the data without using any labels.
arXiv Detail & Related papers (2024-01-16T21:13:22Z)
- Understanding Before Recommendation: Semantic Aspect-Aware Review Exploitation via Large Language Models [53.337728969143086]
Recommendation systems harness user-item interactions like clicks and reviews to learn their representations.
Previous studies improve recommendation accuracy and interpretability by modeling user preferences across various aspects and intents.
We introduce a chain-based prompting approach to uncover semantic aspect-aware interactions.
arXiv Detail & Related papers (2023-12-26T15:44:09Z)
- MinT: Boosting Generalization in Mathematical Reasoning via Multi-View Fine-Tuning [53.90744622542961]
Reasoning in mathematical domains remains a significant challenge for small language models (LMs).
We introduce a new method that exploits existing mathematical problem datasets with diverse annotation styles.
Experimental results show that our strategy enables a LLaMA-7B model to outperform prior approaches.
arXiv Detail & Related papers (2023-07-16T05:41:53Z)
- Supervised Contrastive Learning for Affect Modelling [2.570570340104555]
We introduce three different supervised contrastive learning approaches for training representations that consider affect information.
Results demonstrate the representation capacity of contrastive learning and its efficiency in boosting the accuracy of affect models.
arXiv Detail & Related papers (2022-08-25T17:40:19Z)
- From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems [43.93801836660617]
We show that by utilizing the contextual features (e.g., item reviews from users), we can design a series of explainable recommender systems.
We propose three types of explainable recommendation strategies with varying degrees of model transparency: whitebox, graybox, and blackbox.
Our model achieves highly competitive ranking performance, and generates accurate and effective explanations in terms of numerous quantitative metrics and qualitative visualizations.
arXiv Detail & Related papers (2021-10-28T01:54:04Z)
- Beyond Trivial Counterfactual Explanations with Diverse Valuable Explanations [64.85696493596821]
In computer vision applications, generative counterfactual methods indicate how to perturb a model's input to change its prediction.
We propose a counterfactual method that learns a perturbation in a disentangled latent space that is constrained using a diversity-enforcing loss.
Our model improves the success rate of producing high-quality valuable explanations when compared to previous state-of-the-art methods.
arXiv Detail & Related papers (2021-03-18T12:57:34Z)
- DECE: Decision Explorer with Counterfactual Explanations for Machine Learning Models [36.50754934147469]
We exploit the potential of counterfactual explanations to understand and explore the behavior of machine learning models.
We design DECE, an interactive visualization system that helps understand and explore a model's decisions on individual instances and data subsets.
arXiv Detail & Related papers (2020-08-19T09:44:47Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.