Is More Always Better? The Effects of Personal Characteristics and Level of Detail on the Perception of Explanations in a Recommender System
- URL: http://arxiv.org/abs/2304.00969v1
- Date: Mon, 3 Apr 2023 13:40:08 GMT
- Title: Is More Always Better? The Effects of Personal Characteristics and Level of Detail on the Perception of Explanations in a Recommender System
- Authors: Mohamed Amine Chatti and Mouadh Guesmi and Laura Vorgerd and Thao Ngo and Shoeb Joarder and Qurat Ul Ain and Arham Muslim
- Abstract summary: In this paper, we aim to shift from a one-size-fits-all to a personalized approach to explainable recommendation.
We developed a transparent Recommendation and Interest Modeling Application (RIMA) that provides on-demand personalized explanations.
Our results show that the perception of an explainable RS with different levels of detail is affected to different degrees by the explanation goal and user type.
- Score: 1.1545092788508224
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Despite the acknowledgment that the perception of explanations may vary considerably between end-users, explainable recommender systems (RS) have traditionally followed a one-size-fits-all model, whereby the same explanation level of detail is provided to each user, without taking into consideration the individual user's context, i.e., their goals and personal characteristics. To fill this research gap, we aim in this paper to shift from a one-size-fits-all to a personalized approach to explainable recommendation by giving users agency to decide which explanation they would like to see. We developed a transparent Recommendation and Interest Modeling Application (RIMA) that provides on-demand personalized explanations of its recommendations at three levels of detail (basic, intermediate, advanced) to meet the demands of different types of end-users. We conducted a within-subject study (N=31) to investigate the relationship between users' personal characteristics and the explanation level of detail, and the effects of these two variables on the perception of the explainable RS with regard to different explanation goals. Our results show that the perception of an explainable RS with different levels of detail is affected to different degrees by the explanation goal and user type. Consequently, we suggest theoretical and design guidelines to support the systematic design of explanatory interfaces in RS tailored to the user's context.
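The abstract describes RIMA's three explanation levels but not how they are rendered. Purely as an illustrative sketch, and not the authors' implementation (the class names, fields, and explanation texts below are all hypothetical), an on-demand explanation component with basic, intermediate, and advanced levels might look like this:

```python
# Hypothetical sketch of on-demand, leveled explanations; not RIMA's actual code.
from dataclasses import dataclass
from enum import Enum


class DetailLevel(Enum):
    BASIC = "basic"
    INTERMEDIATE = "intermediate"
    ADVANCED = "advanced"


@dataclass
class Recommendation:
    item_title: str
    matched_interests: list[str]  # interest-model keywords that triggered the match
    similarity: float             # interest/item similarity score


def explain(rec: Recommendation, level: DetailLevel) -> str:
    """Render an explanation only when the user asks, at the level they chose."""
    if level is DetailLevel.BASIC:
        # Basic: name the single strongest matching interest.
        return f"Recommended because you are interested in '{rec.matched_interests[0]}'."
    if level is DetailLevel.INTERMEDIATE:
        # Intermediate: show every matched interest.
        return f"'{rec.item_title}' matches your interests: {', '.join(rec.matched_interests)}."
    # Advanced: additionally expose the underlying similarity score.
    return (f"'{rec.item_title}' scored {rec.similarity:.2f} against your interest "
            f"model ({', '.join(rec.matched_interests)}).")


rec = Recommendation("Explainable AI: A Survey", ["recommender systems", "XAI"], 0.83)
print(explain(rec, DetailLevel.ADVANCED))
```

The design point the study argues for is that the detail level is a user-controlled parameter of the rendering step, not a fixed property of the system.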
Related papers
- Understanding Before Recommendation: Semantic Aspect-Aware Review Exploitation via Large Language Models [53.337728969143086]
Recommendation systems harness user-item interactions, such as clicks and reviews, to learn user and item representations.
Previous studies improve recommendation accuracy and interpretability by modeling user preferences across various aspects and intents.
We introduce a chain-based prompting approach to uncover semantic aspect-aware interactions; a hedged sketch of such a prompt chain follows this entry.
arXiv Detail & Related papers (2023-12-26T15:44:09Z)
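As a hedged illustration of the chain-based prompting idea in the entry above (the actual prompts and model interface are not given in the summary, so `call_llm` and both prompt texts are hypothetical placeholders):

```python
# Hypothetical two-step prompt chain; not the paper's actual prompts or model.
def call_llm(prompt: str) -> str:
    """Placeholder for any LLM client call."""
    raise NotImplementedError("wire up an LLM client here")


def aspect_aware_summary(review: str, item: str) -> str:
    # Step 1: extract the semantic aspects discussed in the review.
    aspects = call_llm(f"List the aspects of {item} discussed in this review:\n{review}")
    # Step 2: chain the extracted aspects into a second prompt to infer
    # the reviewer's preference toward each aspect.
    return call_llm(
        f"Given these aspects: {aspects}\n"
        f"Summarize the reviewer's preference toward each aspect of {item}."
    )
```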
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings urge caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Interactive Explanation with Varying Level of Details in an Explainable Scientific Literature Recommender System [0.5937476291232802]
We aim in this paper to adopt a user-centered, interactive explanation model that provides explanations with different levels of detail and empowers users to interact with, control, and personalize the explanations based on their needs and preferences.
We conducted a qualitative user study to investigate the impact of providing interactive explanations with varying levels of detail on users' perception of the explainable RS.
arXiv Detail & Related papers (2023-06-09T10:48:04Z)
- Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System [0.0]
We identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency.
Our study shows that the choice of the explanation intelligibility types depends on the explanation goal and user type.
arXiv Detail & Related papers (2023-05-26T15:40:46Z)
- Explainable Recommender with Geometric Information Bottleneck [25.703872435370585]
We propose to incorporate a geometric prior learnt from user-item interactions into a variational network.
Latent factors from an individual user-item pair can be used for both recommendation and explanation generation.
Experimental results on three e-commerce datasets show that our model significantly improves the interpretability of a variational recommender.
arXiv Detail & Related papers (2023-05-09T10:38:36Z)
- Explainability in Music Recommender Systems [69.0506502017444]
We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs).
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within an MRS and in what form explanations can be provided.
arXiv Detail & Related papers (2022-01-25T18:32:11Z)
- Counterfactual Explainable Recommendation [22.590877963169103]
We propose Counterfactual Explainable Recommendation (CountER), which takes the insights of counterfactual reasoning from causal inference for explainable recommendation.
CountER seeks simple (low complexity) and effective (high strength) explanations for the model decision; a hedged sketch of this counterfactual search follows the entry.
Results show that our model generates more accurate and effective explanations than state-of-the-art explainable recommendation models.
arXiv Detail & Related papers (2021-08-24T06:37:57Z)
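As a hedged sketch of the counterfactual idea behind CountER under toy assumptions (a linear scorer over hypothetical aspect vectors with nonnegative preference weights), not the paper's actual optimization:

```python
import numpy as np


def score(user_prefs: np.ndarray, item_aspects: np.ndarray) -> float:
    # Toy linear recommender; assumes nonnegative preference weights.
    return float(user_prefs @ item_aspects)


def counterfactual_explanation(user_prefs, item_aspects, threshold, step=0.1):
    """Greedily weaken the most influential aspect until the item is no longer
    recommended; the small set of changes is the (low-complexity) explanation."""
    aspects = item_aspects.copy()
    changes = {}
    while score(user_prefs, aspects) >= threshold:
        i = int(np.argmax(user_prefs * aspects))  # most influential aspect
        aspects[i] -= step
        changes[i] = round(changes.get(i, 0.0) + step, 10)
    # Reads as: "had these aspects been weaker by these amounts,
    # the item would not have been recommended."
    return changes


user = np.array([0.9, 0.1, 0.5])  # hypothetical aspect preferences
item = np.array([0.8, 0.7, 0.6])  # hypothetical item aspect scores
print(counterfactual_explanation(user, item, threshold=0.7))  # e.g. {0: 0.5}
```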
- Not all users are the same: Providing personalized explanations for sequential decision making problems [25.24098967133101]
This work proposes an end-to-end adaptive explanation generation system.
It begins by learning the different types of users that the agent could interact with.
It then identifies the user's type on the fly and adjusts its explanations accordingly; a hedged sketch follows this entry.
arXiv Detail & Related papers (2021-06-23T07:46:19Z)
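To illustrate the adapt-to-the-user idea in the entry above, a hedged sketch; the two user types and the toy classification rule are assumptions, since the summary does not describe the paper's actual learning procedure:

```python
from collections import Counter

# Hypothetical explanation styles per user type; not the paper's taxonomy.
EXPLANATION_STYLE = {
    "novice": "short, jargon-free rationale",
    "expert": "full trace of the agent's decision steps",
}


class UserTypeEstimator:
    """Accumulate evidence about the user's type and adjust on the fly."""

    def __init__(self) -> None:
        self.votes: Counter = Counter()

    def observe(self, utterance: str) -> None:
        # Toy rule: asking for more detail hints at an expert user.
        self.votes["expert" if "more detail" in utterance else "novice"] += 1

    def current_type(self) -> str:
        return self.votes.most_common(1)[0][0] if self.votes else "novice"


est = UserTypeEstimator()
est.observe("can you give more detail on that plan?")
print(EXPLANATION_STYLE[est.current_type()])  # -> expert-style explanation
```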
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- A Neural Topical Expansion Framework for Unstructured Persona-oriented Dialogue Generation [52.743311026230714]
Persona Exploration and Exploitation (PEE) is able to extend the predefined user persona description with semantically correlated content.
PEE consists of two main modules: persona exploration and persona exploitation.
Our approach outperforms state-of-the-art baselines in terms of both automatic and human evaluations.
arXiv Detail & Related papers (2020-02-06T08:24:33Z)
- Hybrid Deep Embedding for Recommendations with Dynamic Aspect-Level Explanations [60.78696727039764]
We propose a novel model called Hybrid Deep Embedding (HDE) for aspect-based explainable recommendation.
The main idea of HDE is to learn dynamic embeddings of users and items for rating prediction; a hedged sketch of that core follows this entry.
As the aspect preference/quality of users/items is learned automatically, HDE is able to capture the impact of aspects that are not mentioned in reviews of a user or an item.
arXiv Detail & Related papers (2020-01-18T13:16:32Z)
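The HDE entry above centers on learned user and item embeddings for rating prediction. The following hedged sketch shows only that generic embedding-dot-product core (a matrix-factorization-style model, not HDE's aspect-aware architecture; all dimensions are assumptions):

```python
import torch
import torch.nn as nn


class EmbeddingRater(nn.Module):
    """Generic rating predictor from user/item embeddings (not HDE itself)."""

    def __init__(self, n_users: int, n_items: int, dim: int = 32):
        super().__init__()
        self.user_emb = nn.Embedding(n_users, dim)
        self.item_emb = nn.Embedding(n_items, dim)

    def forward(self, users: torch.Tensor, items: torch.Tensor) -> torch.Tensor:
        # Predicted rating = inner product of the two embeddings.
        return (self.user_emb(users) * self.item_emb(items)).sum(dim=-1)


model = EmbeddingRater(n_users=100, n_items=500)
pred = model(torch.tensor([0]), torch.tensor([42]))  # one (user, item) rating
```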
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information and is not responsible for any consequences of its use.