Explaining the (Not So) Obvious: Simple and Fast Explanation of STAN, a Next Point of Interest Recommendation System
- URL: http://arxiv.org/abs/2410.03841v1
- Date: Fri, 4 Oct 2024 18:14:58 GMT
- Title: Explaining the (Not So) Obvious: Simple and Fast Explanation of STAN, a Next Point of Interest Recommendation System
- Authors: Fajrian Yunus, Talel Abdessalem
- Abstract summary: Some machine learning methods are inherently explainable, and are thus not complete black boxes.
This enables developers to make sense of the output without developing a complex and expensive explainability technique.
We demonstrate this philosophy/paradigm in STAN, a next Point of Interest recommendation system based on collaborative filtering and sequence prediction.
- Score: 0.5796859155047135
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A lot of effort has been expended in recent years to explain machine learning systems. However, some machine learning methods are inherently explainable, and are thus not complete black boxes. This enables developers to make sense of the output without developing a complex and expensive explainability technique. Moreover, explainability should be tailored to the context of the problem. In a recommendation system that relies on collaborative filtering, the recommendation is based on the behaviors of similar users, so the explanation should tell which other users are similar to the current user. Similarly, if the recommendation system is based on sequence prediction, the explanation should also tell which input timesteps are the most influential. We demonstrate this philosophy/paradigm in STAN (Spatio-Temporal Attention Network for Next Location Recommendation), a next Point of Interest recommendation system based on collaborative filtering and sequence prediction. We also show that the explanation helps to "debug" the output.
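Because STAN scores a candidate POI with attention over the user's check-in history, the explanation can be read off quantities the trained model already computes. The sketch below is a minimal illustration of that idea, not STAN's actual code: the per-timestep attention vector, the user embeddings, and the function `explain_recommendation` are assumptions made for the example.

```python
import numpy as np

def explain_recommendation(attn_weights, user_emb, all_user_embs, k=3):
    """Rank influential timesteps and similar users as an explanation.

    attn_weights: (seq_len,) attention mass the model put on each check-in
        in the user's history when scoring the recommended POI.
    user_emb: (d,) embedding of the current user.
    all_user_embs: (n_users, d) embeddings of every user.
    This interface is hypothetical; STAN's real code may differ.
    """
    # Most influential timesteps: largest attention mass first.
    top_steps = np.argsort(attn_weights)[::-1][:k]
    # Most similar users: cosine similarity in embedding space
    # (the current user ranks first; drop them in practice).
    sims = all_user_embs @ user_emb / (
        np.linalg.norm(all_user_embs, axis=1) * np.linalg.norm(user_emb) + 1e-9
    )
    top_users = np.argsort(sims)[::-1][:k]
    return top_steps, top_users

# Toy run: 5 check-ins, 4 users, 8-dimensional embeddings.
rng = np.random.default_rng(0)
attn = rng.dirichlet(np.ones(5))  # sums to 1, like attention weights
embs = rng.normal(size=(4, 8))
steps, users = explain_recommendation(attn, embs[0], embs)
print("influential timesteps:", steps, "similar users:", users)
```

The design point is that no extra explainability machinery is trained: ranking attention mass answers "which timesteps mattered", and ranking embedding similarity answers "which users are similar".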
Related papers
- Review of Explainable Graph-Based Recommender Systems [2.1711205684359247]
This review paper discusses state-of-the-art approaches to explainable graph-based recommender systems.
It categorizes them based on three aspects: learning methods, explaining methods, and explanation types.
arXiv Detail & Related papers (2024-07-31T21:30:36Z)
- Generating Context-Aware Contrastive Explanations in Rule-based Systems [2.497044167437633]
We present an approach that predicts a potential contrastive event in situations where a user asks for an explanation in the context of rule-based systems.
Our approach analyzes a situation that needs to be explained and then selects the most likely rule a user may have expected instead of what the user has observed.
This contrastive event is then used to create a contrastive explanation that is presented to the user.
arXiv Detail & Related papers (2024-02-20T13:31:12Z)
- Learning to Counterfactually Explain Recommendations [14.938252589829673]
We propose a learning-based framework to generate counterfactual explanations.
To generate an explanation, we find the subset of the user's history that, according to the surrogate model, is most likely to remove the recommendation (a brute-force sketch of this idea follows this entry).
arXiv Detail & Related papers (2022-11-17T18:21:21Z)
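The search described in the entry above can be illustrated with a brute-force variant: enumerate small subsets of the history, re-score the recommended item with a surrogate model, and keep the subset whose removal lowers the score the most. This is a hedged sketch under assumed interfaces; the stand-in `surrogate_score` and the exhaustive enumeration replace the paper's learned framework.

```python
from itertools import combinations

import numpy as np

def surrogate_score(history, rec_item, item_embs):
    """Stand-in surrogate model (an assumption, not the paper's): score
    the recommended item by its mean dot product with the kept history."""
    if not history:
        return 0.0
    return float((item_embs[list(history)] @ item_embs[rec_item]).mean())

def counterfactual_subset(history, rec_item, item_embs, max_size=2):
    """Brute-force search for the small history subset whose removal
    most lowers the surrogate's score for the recommended item."""
    base = surrogate_score(history, rec_item, item_embs)
    best_drop, best_subset = 0.0, ()
    for size in range(1, max_size + 1):
        for subset in combinations(history, size):
            kept = [h for h in history if h not in subset]
            drop = base - surrogate_score(kept, rec_item, item_embs)
            if drop > best_drop:
                best_drop, best_subset = drop, subset
    return best_subset, best_drop

# Toy run: items 0-5 are the history, item 7 is the recommendation.
rng = np.random.default_rng(1)
embs = rng.normal(size=(10, 8))
subset, drop = counterfactual_subset(list(range(6)), 7, embs)
print("removing", subset, "lowers the score by", round(drop, 3))
```

The learned framework in the paper exists precisely to avoid this exponential enumeration; the brute force only makes the objective concrete.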
- Reinforced Path Reasoning for Counterfactual Explainable Recommendation [10.36395995374108]
We propose a novel Counterfactual Explainable Recommendation (CERec) approach to generate item attribute-based counterfactual explanations.
We reduce the huge search space with an adaptive path sampler by using rich context information of a given knowledge graph.
arXiv Detail & Related papers (2022-07-14T05:59:58Z)
- Explainability in Music Recommender Systems [69.0506502017444]
We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs).
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within an MRS and in what form explanations can be provided.
arXiv Detail & Related papers (2022-01-25T18:32:11Z)
- Counterfactual Explainable Recommendation [22.590877963169103]
We propose Counterfactual Explainable Recommendation (CountER), which takes the insights of counterfactual reasoning from causal inference for explainable recommendation.
CountER seeks simple (low complexity) and effective (high strength) explanations for the model decision.
Results show that our model generates more accurate and effective explanations than state-of-the-art explainable recommendation models.
arXiv Detail & Related papers (2021-08-24T06:37:57Z)
- EXPLAINABOARD: An Explainable Leaderboard for NLP [69.59340280972167]
ExplainaBoard is a new conceptualization and implementation of NLP evaluation.
It allows researchers to (i) diagnose strengths and weaknesses of a single system and (ii) interpret relationships between multiple systems.
arXiv Detail & Related papers (2021-04-13T17:45:50Z)
- This is not the Texture you are looking for! Introducing Novel Counterfactual Explanations for Non-Experts using Generative Adversarial Learning [59.17685450892182]
Counterfactual explanation systems try to enable counterfactual reasoning by modifying the input image.
We present a novel approach to generate such counterfactual image explanations based on adversarial image-to-image translation techniques.
Our results show that our approach leads to significantly better results regarding mental models, explanation satisfaction, trust, emotions, and self-efficacy than two state-of-the-art systems.
arXiv Detail & Related papers (2020-12-22T10:08:05Z)
- Fairness-Aware Explainable Recommendation over Knowledge Graphs [73.81994676695346]
We analyze different groups of users according to their level of activity, and find that bias exists in recommendation performance between different groups.
We show that inactive users may be more susceptible to receiving unsatisfactory recommendations, due to insufficient training data for the inactive users.
We propose a fairness constrained approach via re-ranking to mitigate this problem in the context of explainable recommendation over knowledge graphs.
arXiv Detail & Related papers (2020-06-03T05:04:38Z)
- An Information-Theoretic Approach to Personalized Explainable Machine Learning [92.53970625312665]
We propose a simple probabilistic model for the predictions and user knowledge.
We quantify the effect of an explanation by the conditional mutual information between the explanation and the prediction (written out after this entry).
arXiv Detail & Related papers (2020-03-01T13:06:29Z)
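The criterion in the entry above can be written out compactly. This is a reconstruction from the summary with notation chosen here, not copied from the paper: let $u$ be what the user already knows, $\hat{y}$ the prediction, and $e$ the explanation.

```latex
% Effect of an explanation e on a prediction \hat{y}, given user knowledge u.
% Notation is assumed for this note, not taken verbatim from the paper.
\[
  I(e;\hat{y}\mid u)
  \;=\;
  \mathbb{E}\!\left[\log
    \frac{p(\hat{y},e\mid u)}{p(\hat{y}\mid u)\,p(e\mid u)}\right]
\]
% A larger value means the explanation removes more of the user's
% remaining uncertainty about the prediction.
```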
- Hybrid Deep Embedding for Recommendations with Dynamic Aspect-Level Explanations [60.78696727039764]
We propose a novel model called Hybrid Deep Embedding (HDE) for aspect-based explainable recommendations.
The main idea of HDE is to learn the dynamic embeddings of users and items for rating prediction.
As the aspect preference/quality of users/items is learned automatically, HDE is able to capture the impact of aspects that are not mentioned in reviews of a user or an item.
arXiv Detail & Related papers (2020-01-18T13:16:32Z)