Justification vs. Transparency: Why and How Visual Explanations in a
Scientific Literature Recommender System
- URL: http://arxiv.org/abs/2305.17034v1
- Date: Fri, 26 May 2023 15:40:46 GMT
- Title: Justification vs. Transparency: Why and How Visual Explanations in a
Scientific Literature Recommender System
- Authors: Mouadh Guesmi and Mohamed Amine Chatti and Shoeb Joarder and Qurat Ul
Ain and Clara Siepmann and Hoda Ghanbarzadeh and Rawaa Alatrash
- Abstract summary: We identify relationships between Why and How explanation intelligibility types and the explanation goals of justification and transparency.
Our study shows that the choice of the explanation intelligibility types depends on the explanation goal and user type.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Significant attention has been paid to enhancing recommender systems (RS)
with explanation facilities to help users make informed decisions and increase
trust in and satisfaction with the RS. Justification and transparency represent
two crucial goals in explainable recommendation. Different from transparency,
which faithfully exposes the reasoning behind the recommendation mechanism,
justification conveys a conceptual model that may differ from that of the
underlying algorithm. An explanation is an answer to a question. In explainable
recommendation, a user would want to ask questions (referred to as
intelligibility types) to understand results given by the RS. In this paper, we
identify relationships between Why and How explanation intelligibility types
and the explanation goals of justification and transparency. We followed the
Human-Centered Design (HCD) approach and leveraged the What-Why-How
visualization framework to systematically design and implement Why and How
visual explanations in the transparent Recommendation and Interest Modeling
Application (RIMA). Furthermore, we conducted a qualitative user study (N=12)
to investigate the potential effects of providing Why and How explanations
together in an explainable RS on the users' perceptions regarding transparency,
trust, and satisfaction. Our study showed qualitative evidence confirming that
the choice of the explanation intelligibility types depends on the explanation
goal and user type.
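To make the two goals concrete, the following minimal sketch (in Python) shows how a keyword-based literature recommender could answer the Why question with a justification (which shared interests made the paper relevant) and the How question with transparency (the actual scoring steps). This is an illustrative sketch only, not the RIMA implementation; the overlap-based scoring and the explanation templates are assumptions.
    # Illustrative sketch only (not the RIMA implementation): a keyword-overlap
    # recommender that emits a "Why" explanation (justification) and a "How"
    # explanation (transparency) for the same recommendation.
    def score(user_interests: dict, paper_keywords: set) -> float:
        """Sum the weights of the user's interests that also appear in the paper."""
        return sum(w for kw, w in user_interests.items() if kw in paper_keywords)

    def why_explanation(user_interests: dict, paper_keywords: set, top_n: int = 3) -> str:
        """Justification: name the shared interests, independent of the exact math."""
        shared = sorted((kw for kw in user_interests if kw in paper_keywords),
                        key=lambda kw: user_interests[kw], reverse=True)[:top_n]
        return "Recommended because it matches your interests: " + ", ".join(shared) + "."

    def how_explanation(user_interests: dict, paper_keywords: set) -> str:
        """Transparency: expose how the score was actually computed."""
        parts = [f"{kw} (weight {w:.2f})"
                 for kw, w in user_interests.items() if kw in paper_keywords]
        return ("Score = sum of matched interest weights: " + " + ".join(parts)
                + f" = {score(user_interests, paper_keywords):.2f}")

    if __name__ == "__main__":
        interests = {"recommender systems": 0.9, "explainability": 0.7, "nlp": 0.4}
        paper = {"explainability", "recommender systems", "user study"}
        print(why_explanation(interests, paper))   # Why: justification
        print(how_explanation(interests, paper))   # How: transparency
The point of the contrast is that both strings describe the same recommendation, but only the second faithfully exposes the scoring mechanism.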
Related papers
- Explainability for Transparent Conversational Information-Seeking [13.790574266700006]
This study explores different methods of explaining the responses of conversational information-seeking systems.
By exploring transparency across explanation type, quality, and presentation mode, this research aims to bridge the gap between system-generated responses and responses verifiable by the user.
arXiv Detail & Related papers (2024-05-06T09:25:14Z)
- Understanding Before Recommendation: Semantic Aspect-Aware Review Exploitation via Large Language Models [53.337728969143086]
Recommendation systems harness user-item interactions, such as clicks and reviews, to learn user and item representations.
Previous studies improve recommendation accuracy and interpretability by modeling user preferences across various aspects and intents.
We introduce a chain-based prompting approach to uncover semantic aspect-aware interactions.
arXiv Detail & Related papers (2023-12-26T15:44:09Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Interactive Explanation with Varying Level of Details in an Explainable Scientific Literature Recommender System [0.5937476291232802]
We aim in this paper to adopt a user-centered, interactive explanation model that provides explanations with different levels of detail and empowers users to interact with, control, and personalize the explanations based on their needs and preferences.
We conducted a qualitative user study to investigate the impact of providing interactive explanations with varying levels of detail on users' perception of the explainable RS.
arXiv Detail & Related papers (2023-06-09T10:48:04Z)
- Dynamic Clue Bottlenecks: Towards Interpretable-by-Design Visual Question Answering [58.64831511644917]
We introduce an interpretable by design model that factors model decisions into intermediate human-legible explanations.
We show that our inherently interpretable system can improve by 4.64% over a comparable black-box system on reasoning-focused questions.
arXiv Detail & Related papers (2023-05-24T08:33:15Z)
- Is More Always Better? The Effects of Personal Characteristics and Level of Detail on the Perception of Explanations in a Recommender System [1.1545092788508224]
In this paper, we aim to shift from a one-size-fits-all approach to a personalized approach to explainable recommendation.
We developed a transparent Recommendation and Interest Modeling Application (RIMA) that provides on-demand personalized explanations.
Our results show that the perception of explainable RS with different levels of detail is affected to different degrees by the explanation goal and user type.
arXiv Detail & Related papers (2023-04-03T13:40:08Z)
- From Intrinsic to Counterfactual: On the Explainability of Contextualized Recommender Systems [43.93801836660617]
We show that by utilizing the contextual features (e.g., item reviews from users), we can design a series of explainable recommender systems.
We propose three types of explainable recommendation strategies with gradual change of model transparency: whitebox, graybox, and blackbox.
Our model achieves highly competitive ranking performance, and generates accurate and effective explanations in terms of numerous quantitative metrics and qualitative visualizations.
arXiv Detail & Related papers (2021-10-28T01:54:04Z)
- Counterfactual Explainable Recommendation [22.590877963169103]
We propose Counterfactual Explainable Recommendation (CountER), which takes the insights of counterfactual reasoning from causal inference for explainable recommendation.
CountER seeks simple (low complexity) and effective (high strength) explanations for the model decision.
Results show that our model generates more accurate and effective explanations than state-of-the-art explainable recommendation models.
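As a rough illustration of the counterfactual idea described above (a sketch under simplifying assumptions, not the CountER model): search for a small change to an item's aspect scores that pushes the item out of the user's top-k list; the size of the change stands in for explanation complexity, and whether the item actually drops out stands in for explanation strength. The greedy search, the linear scoring, and all names below are assumptions.
    # Hedged sketch of the counterfactual-explanation idea (not the CountER model):
    # greedily weaken one aspect at a time until the item falls out of the top-k.
    import numpy as np

    def rank_of(item_aspects, user_weights, other_scores):
        """Rank of the target item (1 = best) among the competing candidates."""
        s = float(item_aspects @ user_weights)
        return 1 + int(np.sum(other_scores > s))

    def counterfactual_explanation(item_aspects, user_weights, other_scores,
                                   aspect_names, k=5, step=0.2, max_iters=20):
        """Return the aspect reductions that remove the item from the top-k."""
        perturbed = item_aspects.astype(float).copy()
        changed = {}
        for _ in range(max_iters):
            if rank_of(perturbed, user_weights, other_scores) > k:
                return changed  # item no longer recommended: explanation found
            # weaken the aspect that currently contributes most to the score
            i = int(np.argmax(perturbed * user_weights))
            perturbed[i] = max(perturbed[i] - step, 0.0)
            changed[aspect_names[i]] = round(float(item_aspects[i] - perturbed[i]), 2)
        return None  # no small counterfactual found within the budget

    if __name__ == "__main__":
        aspects = np.array([0.9, 0.8, 0.3])                # e.g. relevance, novelty, recency
        weights = np.array([0.6, 0.3, 0.1])                # assumed user aspect weights
        others = np.array([0.80, 0.75, 0.70, 0.65, 0.60])  # scores of competing items
        names = ["relevance", "novelty", "recency"]
        print(counterfactual_explanation(aspects, weights, others, names, k=3))
        # e.g. {'relevance': 0.2}: lowering relevance by 0.2 drops the item from the top-3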
arXiv Detail & Related papers (2021-08-24T06:37:57Z)
- Dimensions of Transparency in NLP Applications [64.16277166331298]
Broader transparency in descriptions of and communication regarding AI systems is widely considered desirable.
Previous work has suggested that a trade-off exists between greater system transparency and user confusion.
arXiv Detail & Related papers (2021-01-02T11:46:17Z)
- Uncertainty as a Form of Transparency: Measuring, Communicating, and Using Uncertainty [66.17147341354577]
We argue for considering a complementary form of transparency by estimating and communicating the uncertainty associated with model predictions.
We describe how uncertainty can be used to mitigate model unfairness, augment decision-making, and build trustworthy systems.
This work constitutes an interdisciplinary review drawn from literature spanning machine learning, visualization/HCI, design, decision-making, and fairness.
arXiv Detail & Related papers (2020-11-15T17:26:14Z)
- Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations can help improve the user experience and uncover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.