Explainable Recommendation via Interpretable Feature Mapping and
Evaluation of Explainability
- URL: http://arxiv.org/abs/2007.06133v1
- Date: Sun, 12 Jul 2020 23:49:12 GMT
- Title: Explainable Recommendation via Interpretable Feature Mapping and
Evaluation of Explainability
- Authors: Deng Pan, Xiangrui Li, Xin Li and Dongxiao Zhu
- Abstract summary: Experimental results demonstrate strong performance in both recommendation and explanation, eliminating the need for metadata.
We present a novel feature mapping approach that maps the uninterpretable general features onto the interpretable aspect features.
- Score: 22.58823484394866
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Latent factor collaborative filtering (CF) has been a widely used
technique for recommender systems, learning semantic representations of users
and items. Recently, explainable recommendation has attracted much attention
from the research community. However, a trade-off exists between the
explainability and the performance of a recommendation model, and metadata is
often needed to alleviate the dilemma. We present a novel feature mapping
approach that maps uninterpretable general features onto interpretable aspect
features, achieving both satisfactory accuracy and explainability in the
recommendations by simultaneously minimizing a rating prediction loss and an
interpretation loss. To evaluate the explainability, we propose two new
evaluation metrics specifically designed for aspect-level explanation using
surrogate ground truth. Experimental results demonstrate strong performance
in both recommendation and explanation, eliminating the need for metadata.
Code is available from https://github.com/pd90506/AMCF.
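The joint objective pairs a standard latent-factor rating loss with an
interpretation loss on aspect features mapped from the latent space. Below is
a minimal PyTorch sketch of that idea; the embedding sizes, the linear aspect
map, the MSE losses, and the weight `lam` are illustrative assumptions, not
the authors' AMCF implementation (see the linked repository for the actual
code).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureMappingRecommender(nn.Module):
    """Minimal sketch: latent-factor CF plus a linear map from the
    uninterpretable latent space onto a small set of aspect features."""

    def __init__(self, n_users, n_items, dim=64, n_aspects=8):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)
        # Interpretable feature mapping: latent factors -> aspect scores.
        self.aspect_map = nn.Linear(dim, n_aspects)

    def forward(self, users, items):
        u, v = self.user_emb(users), self.item_emb(items)
        rating = (u * v).sum(dim=-1)      # rating prediction head
        aspects = self.aspect_map(u * v)  # aspect-level explanation head
        return rating, aspects

def joint_loss(rating_pred, rating_true, aspect_pred, aspect_target, lam=0.1):
    """Simultaneous minimization of rating prediction loss and
    interpretation loss. `aspect_target` stands in for whatever signal
    supervises the aspect head (the paper derives it without item
    metadata); `lam` is an assumed trade-off weight."""
    rating_loss = F.mse_loss(rating_pred, rating_true)
    interp_loss = F.mse_loss(aspect_pred, aspect_target)
    return rating_loss + lam * interp_loss
```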
Related papers
- Explainable Recommender with Geometric Information Bottleneck [25.703872435370585]
We propose to incorporate a geometric prior learnt from user-item interactions into a variational network.
Latent factors from an individual user-item pair can be used for both recommendation and explanation generation.
Experimental results on three e-commerce datasets show that our model significantly improves the interpretability of a variational recommender.
arXiv Detail & Related papers (2023-05-09T10:38:36Z)
- Studying How to Efficiently and Effectively Guide Models with Explanations [52.498055901649025]
'Model guidance' is the idea of regularizing the models' explanations to ensure that they are "right for the right reasons".
We conduct an in-depth evaluation across various loss functions, attribution methods, models, and 'guidance depths' on the PASCAL VOC 2007 and MS COCO 2014 datasets.
Specifically, we guide the models via bounding box annotations, which are much cheaper to obtain than the commonly used segmentation masks.
arXiv Detail & Related papers (2023-03-21T15:34:50Z)
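For the model-guidance paper above, the key mechanism is a regularizer that
ties attributions to bounding-box annotations. A rough sketch under stated
assumptions: attribution is taken as the input gradient and the penalty is the
mean attribution mass outside the box, whereas the paper compares several
attribution methods, loss functions, and guidance depths.

```python
import torch
import torch.nn.functional as F

def guided_loss(model, images, labels, box_masks, weight=0.5):
    """Sketch of model guidance: penalize attribution mass that falls
    outside bounding-box annotations. Input-gradient attribution and the
    weight are assumptions. box_masks: 1 inside the annotated box,
    0 outside, shape (B, 1, H, W)."""
    images = images.clone().requires_grad_(True)
    logits = model(images)
    task_loss = F.cross_entropy(logits, labels)
    # Attribution = gradient of the target-class score w.r.t. the input.
    target_scores = logits.gather(1, labels.unsqueeze(1)).sum()
    attr = torch.autograd.grad(target_scores, images, create_graph=True)[0].abs()
    outside = (attr * (1 - box_masks)).mean()  # "wrong reasons" penalty
    return task_loss + weight * outside
```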
- Explainer Divergence Scores (EDS): Some Post-Hoc Explanations May be Effective for Detecting Unknown Spurious Correlations [4.223964614888875]
Post-hoc explainers might be ineffective for detecting spurious correlations in Deep Neural Networks (DNNs).
We show there are serious weaknesses with the existing evaluation frameworks for this setting.
We propose a new evaluation methodology, Explainer Divergence Scores (EDS), grounded in an information-theoretic approach to evaluating explainers.
arXiv Detail & Related papers (2022-11-14T15:52:21Z)
- Reinforced Path Reasoning for Counterfactual Explainable Recommendation [10.36395995374108]
We propose a novel Counterfactual Explainable Recommendation (CERec) model to generate item attribute-based counterfactual explanations.
We reduce the huge search space with an adaptive path sampler that exploits the rich context information of a given knowledge graph.
arXiv Detail & Related papers (2022-07-14T05:59:58Z)
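The counterfactual idea in CERec can be illustrated without the
reinforcement-learning machinery: find a small set of item attributes whose
removal would flip the recommendation. The sketch below substitutes a
brute-force greedy enumeration for the paper's adaptive knowledge-graph path
sampler; `score_fn` and the zero decision threshold are hypothetical
stand-ins.

```python
from itertools import combinations

def counterfactual_attributes(score_fn, user, item, attributes, max_size=2):
    """Sketch of attribute-based counterfactual explanation: the smallest
    attribute set whose removal stops the item from being recommended
    (score drops below an assumed threshold of 0)."""
    base = score_fn(user, item, attributes)
    for k in range(1, max_size + 1):
        for removed in combinations(attributes, k):
            kept = [a for a in attributes if a not in removed]
            # The removed attributes explain the recommendation if
            # dropping them flips the decision.
            if score_fn(user, item, kept) < 0 <= base:
                return list(removed)
    return None  # no small counterfactual found in the search budget
```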
- Explainability in Music Recommender Systems [69.0506502017444]
We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs).
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within a MRS and in what form explanations can be provided.
arXiv Detail & Related papers (2022-01-25T18:32:11Z)
- From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems [43.93801836660617]
We show that by utilizing the contextual features (e.g., item reviews from users), we can design a series of explainable recommender systems.
We propose three types of explainable recommendation strategies with gradual change of model transparency: whitebox, graybox, and blackbox.
Our model achieves highly competitive ranking performance, and generates accurate and effective explanations in terms of numerous quantitative metrics and qualitative visualizations.
arXiv Detail & Related papers (2021-10-28T01:54:04Z)
- Coalitional Bayesian Autoencoders -- Towards explainable unsupervised deep learning [78.60415450507706]
We show that explanations of Bayesian autoencoder (BAE) predictions suffer from high correlation, resulting in misleading explanations.
To alleviate this, a "Coalitional BAE" is proposed, which is inspired by agent-based system theory.
Our experiments on publicly available condition monitoring datasets demonstrate the improved quality of explanations using the Coalitional BAE.
arXiv Detail & Related papers (2021-10-19T15:07:09Z)
- Dive into Ambiguity: Latent Distribution Mining and Pairwise Uncertainty Estimation for Facial Expression Recognition [59.52434325897716]
We propose a solution, named DMUE, to address the problem of annotation ambiguity from two perspectives: latent distribution mining and pairwise uncertainty estimation.
For the former, an auxiliary multi-branch learning framework is introduced to better mine and describe the latent distribution in the label space.
For the latter, the pairwise relationships of semantic features between instances are fully exploited to estimate the extent of ambiguity in the instance space.
arXiv Detail & Related papers (2021-04-01T03:21:57Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
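A simplified view of the fairness-constrained re-ranking idea above: measure
the quality gap between active and inactive user groups, then rescore each
candidate list so that explanation quality counts alongside relevance. The
linear rescoring form, the weight `lam`, and the function names are
illustrative assumptions; the paper formulates this as a constrained
optimization over knowledge-graph based explanations.

```python
import numpy as np

def group_disparity(metric, groups):
    """Gap in mean recommendation quality between active and inactive
    users (the bias the paper reports)."""
    active = np.mean([m for m, g in zip(metric, groups) if g == "active"])
    inactive = np.mean([m for m, g in zip(metric, groups) if g == "inactive"])
    return active - inactive

def rerank_with_fairness(rel_scores, expl_quality, k=10, lam=0.5):
    """Rescore candidates as relevance + lam * explanation quality and
    take the top-k; an assumed linear relaxation of the paper's
    constrained re-ranking."""
    rescored = {i: rel_scores[i] + lam * expl_quality[i] for i in rel_scores}
    return sorted(rescored, key=rescored.get, reverse=True)[:k]
```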
- Evaluations and Methods for Explanation through Robustness Analysis [117.7235152610957]
We establish a novel set of evaluation criteria for such feature-based explanations via robustness analysis.
We obtain new explanations that are loosely necessary and sufficient for a prediction.
We extend the explanation to extract the set of features that would move the current prediction to a target class.
arXiv Detail & Related papers (2020-05-31T05:52:05Z)
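The last entry's goal of extracting a feature set that moves the prediction
to a target class can be sketched as a greedy search: repeatedly perturb the
remaining feature that most increases the target-class probability. The
baseline perturbation and the greedy strategy are assumptions; the paper
instead grounds its feature sets in a robustness (adversarial perturbation)
analysis.

```python
import numpy as np

def features_toward_target(predict_proba, x, target, baseline=0.0, max_feats=5):
    """Greedily collect features whose perturbation (reset to an assumed
    baseline value) moves the prediction to `target`. `predict_proba(x)`
    returns class probabilities for a 1-D feature vector."""
    x = np.asarray(x, dtype=float).copy()
    chosen = []
    for _ in range(max_feats):
        if int(np.argmax(predict_proba(x))) == target:
            return chosen  # perturbing these features reaches the target
        candidates = [i for i in range(len(x)) if i not in chosen]
        if not candidates:
            break

        def gain(i):
            x_try = x.copy()
            x_try[i] = baseline
            return predict_proba(x_try)[target]

        best = max(candidates, key=gain)  # biggest single-step move
        chosen.append(best)
        x[best] = baseline
    return chosen if int(np.argmax(predict_proba(x))) == target else None
```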