Interactive Explanation with Varying Level of Details in an Explainable
Scientific Literature Recommender System
- URL: http://arxiv.org/abs/2306.05809v3
- Date: Wed, 18 Oct 2023 15:36:08 GMT
- Title: Interactive Explanation with Varying Level of Details in an Explainable
Scientific Literature Recommender System
- Authors: Mouadh Guesmi and Mohamed Amine Chatti and Shoeb Joarder and Qurat Ul
Ain and Rawaa Alatrash and Clara Siepmann and Tannaz Vahidi
- Abstract summary: In this paper, we adopt a user-centered, interactive explanation model that provides explanations at different levels of detail and empowers users to interact with, control, and personalize the explanations according to their needs and preferences.
We conducted a qualitative user study to investigate the impact of providing interactive explanations with varying levels of detail on users' perception of the explainable RS.
- Score: 0.5937476291232802
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Explainable recommender systems (RS) have traditionally followed a
one-size-fits-all approach, delivering explanations at the same level of detail
to every user, without considering individual needs and goals. Further,
explanations in RS have so far been presented mostly in a static and
non-interactive manner. To fill these research gaps, we aim in this paper to
adopt a user-centered, interactive explanation model that provides explanations
with different levels of detail and empowers users to interact with, control,
and personalize the explanations based on their needs and preferences. We
followed a user-centered approach to design interactive explanations with three
levels of detail (basic, intermediate, and advanced) and implemented them in
the transparent Recommendation and Interest Modeling Application (RIMA). We
conducted a qualitative user study (N=14) to investigate the impact of
providing interactive explanations with varying levels of detail on users'
perception of the explainable RS. Our study showed qualitative evidence that
fostering interaction and giving users control in deciding which explanation
they would like to see can meet the demands of users with different needs,
preferences, and goals, and consequently can positively affect several crucial
aspects of explainable recommendation, including transparency, trust,
satisfaction, and user experience.
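The core mechanism described above, explanations offered at three levels of detail that the user can interactively request, can be illustrated with a short sketch. This is a minimal, hypothetical illustration, not the RIMA implementation; all class names and explanation texts below are invented for the example.

```python
from enum import Enum

class DetailLevel(Enum):
    """Hypothetical levels mirroring the paper's basic/intermediate/advanced design."""
    BASIC = 1
    INTERMEDIATE = 2
    ADVANCED = 3

# Invented explanation payloads keyed by level; a real system would generate
# these from the user's interest model rather than hard-code them.
EXPLANATIONS = {
    DetailLevel.BASIC:
        "Recommended because it matches your interest in 'recommender systems'.",
    DetailLevel.INTERMEDIATE:
        "Keywords shared between your interest model and this paper: "
        "'explainability', 'user modeling', 'recommender systems'.",
    DetailLevel.ADVANCED:
        "Similarity breakdown: keyword overlap 0.62, abstract embedding "
        "cosine 0.71, combined relevance 0.68.",
}

def explain(level: DetailLevel = DetailLevel.BASIC) -> str:
    """Return the explanation at the requested level; the user starts at
    BASIC and interactively asks for more detail on demand."""
    return EXPLANATIONS[level]

if __name__ == "__main__":
    # A user drilling down from the default basic view to the advanced one.
    for level in DetailLevel:
        print(f"[{level.name}] {explain(level)}")
```

The point of the design, as the study argues, is that the advanced view exists but is never forced on the user; each level is shown only on request.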
Related papers
- QAGCF: Graph Collaborative Filtering for Q&A Recommendation [58.21387109664593]
Question and answer (Q&A) platforms usually recommend question-answer pairs to meet users' knowledge acquisition needs.
This makes user behaviors more complex and presents two challenges for Q&A recommendation.
We introduce Question & Answer Graph Collaborative Filtering (QAGCF), a graph neural network model that creates separate graphs for collaborative and semantic views.
arXiv Detail & Related papers (2024-06-07T10:52:37Z)
- Tell me more: Intent Fulfilment Framework for Enhancing User Experiences in Conversational XAI [0.6333053895057925]
This paper explores how different types of explanations collaboratively meet users' XAI needs.
We introduce the Intent Fulfilment Framework (IFF) for creating explanation experiences.
The Explanation Experience Dialogue Model integrates the IFF and "Explanation Followups" to provide users with a conversational interface.
arXiv Detail & Related papers (2024-05-16T21:13:43Z)
- Rethinking the Evaluation of Dialogue Systems: Effects of User Feedback on Crowdworkers and LLMs [57.16442740983528]
In ad-hoc retrieval, evaluation relies heavily on user actions, including implicit feedback.
The role of user feedback in annotators' assessment of turns in a conversation has been little studied.
We focus on how the evaluation of task-oriented dialogue systems (TDSs) is affected by considering user feedback, explicit or implicit, as provided through the follow-up utterance of the turn being evaluated.
arXiv Detail & Related papers (2024-04-19T16:45:50Z)
- Understanding Before Recommendation: Semantic Aspect-Aware Review Exploitation via Large Language Models [53.337728969143086]
Recommendation systems harness user-item interactions like clicks and reviews to learn their representations.
Previous studies improve recommendation accuracy and interpretability by modeling user preferences across various aspects and intents.
We introduce a chain-based prompting approach to uncover semantic aspect-aware interactions.
arXiv Detail & Related papers (2023-12-26T15:44:09Z)
- Notion of Explainable Artificial Intelligence -- An Empirical Investigation from A Users Perspective [0.3069335774032178]
This study investigates user-centric explainable AI, with recommendation systems as the study context.
We conducted focus group interviews to collect qualitative data on the recommendation system.
Our findings reveal that end users want a non-technical and tailor-made explanation with on-demand supplementary information.
arXiv Detail & Related papers (2023-11-01T22:20:14Z)
- Justification vs. Transparency: Why and How Visual Explanations in a Scientific Literature Recommender System [0.0]
We identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency.
Our study shows that the choice of explanation intelligibility type depends on the explanation goal and user type.
arXiv Detail & Related papers (2023-05-26T15:40:46Z)
- Continually Improving Extractive QA via Human Feedback [59.49549491725224]
We study continually improving an extractive question answering (QA) system via human user feedback.
We conduct experiments involving thousands of user interactions under diverse setups to broaden the understanding of learning from feedback over time.
arXiv Detail & Related papers (2023-05-21T14:35:32Z)
- Is More Always Better? The Effects of Personal Characteristics and Level of Detail on the Perception of Explanations in a Recommender System [1.1545092788508224]
In this paper, we aim to shift from a one-size-fits-all approach to a personalized approach to explainable recommendation.
We developed a transparent Recommendation and Interest Modeling Application (RIMA) that provides on-demand personalized explanations.
Our results show that the perception of explainable RS with different levels of detail is affected to different degrees by the explanation goal and user type.
arXiv Detail & Related papers (2023-04-03T13:40:08Z)
- Justification of Recommender Systems Results: A Service-based Approach [4.640835690336653]
We propose a novel justification approach that uses service models to extract experience data from reviews concerning all the stages of interaction with items.
In a user study, we compared our approach with baselines reflecting the state of the art in the justification of recommender systems results.
Our models received higher Interface Adequacy and Satisfaction evaluations from users with different levels of Curiosity or low Need for Cognition (NfC).
These findings encourage the adoption of service models to justify recommender systems results but suggest the investigation of personalization strategies to suit diverse interaction needs.
arXiv Detail & Related papers (2022-11-07T11:08:19Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
- Hybrid Deep Embedding for Recommendations with Dynamic Aspect-Level Explanations [60.78696727039764]
We propose a novel model called Hybrid Deep Embedding (HDE) for aspect-based explainable recommendations.
The main idea of HDE is to learn the dynamic embeddings of users and items for rating prediction.
As the aspect preference/quality of users/items is learned automatically, HDE can capture the impact of aspects that are not mentioned in reviews of a user or an item (see the sketch after this list).
arXiv Detail & Related papers (2020-01-18T13:16:32Z)
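As a rough, self-contained illustration of the embedding-based rating prediction that the HDE entry above builds on (its aspect-level machinery is omitted), the sketch below trains user and item embeddings whose dot product predicts a rating. All dimensions, data, and hyperparameters are invented; this is not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim = 50, 40, 8

# Toy observed ratings as (user, item, rating) triples; stand-ins for real data.
ratings = [(int(rng.integers(n_users)), int(rng.integers(n_items)),
            float(rng.uniform(1, 5))) for _ in range(500)]

# Randomly initialized user/item embeddings, trained with plain SGD.
U = 0.1 * rng.standard_normal((n_users, dim))
V = 0.1 * rng.standard_normal((n_items, dim))
lr, reg = 0.02, 0.01

for _ in range(20):
    for u, i, r in ratings:
        err = r - U[u] @ V[i]           # prediction error for this pair
        u_old = U[u].copy()             # keep pre-update copy for V's step
        U[u] += lr * (err * V[i] - reg * U[u])
        V[i] += lr * (err * u_old - reg * V[i])

# Predicted rating for an arbitrary user-item pair.
print(f"predicted rating: {U[3] @ V[7]:.2f}")
```

In HDE proper, per the summary above, these embeddings are dynamic and aspect-aware rather than static factors, which is what lets the model attribute a prediction to specific aspects for explanation.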
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information above and is not responsible for any consequences of its use.