How Does Users' App Knowledge Influence the Preferred Level of Detail and Format of Software Explanations?
- URL: http://arxiv.org/abs/2502.06549v1
- Date: Mon, 10 Feb 2025 15:18:04 GMT
- Title: How Does Users' App Knowledge Influence the Preferred Level of Detail and Format of Software Explanations?
- Authors: Martin Obaidi, Jannik Fischbach, Marc Herrmann, Hannah Deters, Jakob Droste, Jil Klünder, Kurt Schneider
- Abstract summary: This study investigates factors influencing users' preferred level of detail and the form of an explanation in software. Results indicate that users prefer moderately detailed explanations in short text formats. Our results show that explanation preferences are weakly influenced by app-specific knowledge but shaped by demographic and psychological factors.
- Score: 2.423517761302909
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Context and Motivation: Due to their increasing complexity, everyday software systems are becoming increasingly opaque for users. A frequently adopted method to address this difficulty is explainability, which aims to make systems more understandable and usable. Question/problem: However, explanations can also lead to unnecessary cognitive load. Therefore, adapting explanations to the actual needs of a user is a frequently faced challenge. Principal ideas/results: This study investigates factors influencing users' preferred level of detail and form of an explanation (e.g., short text or video tutorial) in software. We conducted an online survey with 58 participants to explore relationships between demographics, software usage, app-specific knowledge, as well as their preferred explanation form and level of detail. The results indicate that users prefer moderately detailed explanations in short text formats. Correlation analyses revealed no relationship between app-specific knowledge and the preferred level of detail of an explanation, but an influence of demographic aspects (such as gender) on app-specific knowledge, and an impact of that knowledge on application confidence, were observed, pointing to a possible mediated relationship between knowledge and preferences for explanations. Contribution: Our results show that explanation preferences are weakly influenced by app-specific knowledge but shaped by demographic and psychological factors, supporting the development of adaptive explanation systems tailored to user expertise. These findings support requirements analysis processes by highlighting important factors that should be considered in user-centered methods such as personas.
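The kind of correlation analysis the abstract describes can be illustrated with a minimal sketch. The snippet below is not the authors' code: the variable names (`app_knowledge`, `preferred_detail`), the 1-5 ordinal scales, the generated responses, and the choice of Spearman's rank correlation are all assumptions made for illustration only; the abstract merely reports that "correlation analyses" were performed on responses from 58 participants.

```python
# Illustrative sketch only: placeholder ordinal survey responses,
# NOT the study's actual data or analysis code.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n_participants = 58  # sample size reported in the abstract

# Hypothetical responses: self-rated app-specific knowledge and
# preferred level of detail of explanations (both ordinal, 1 = low, 5 = high).
app_knowledge = rng.integers(1, 6, size=n_participants)
preferred_detail = rng.integers(1, 6, size=n_participants)

# Spearman's rank correlation is a common (assumed) choice for ordinal
# survey data; a non-significant p-value would indicate no detectable
# monotonic relationship, mirroring the null result reported in the paper.
rho, p_value = spearmanr(app_knowledge, preferred_detail)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```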
Related papers
- Do Users' Explainability Needs in Software Change with Mood? [2.42509778995617]
We investigate the influence of a user's subjective mood and objective demographic aspects on explanation needs by means of frequency and type of explanation. We conclude that the need for explanations is very subjective and only partially depends on objective factors.
arXiv Detail & Related papers (2025-02-10T15:12:41Z) - QAGCF: Graph Collaborative Filtering for Q&A Recommendation [58.21387109664593]
Question and answer (Q&A) platforms usually recommend question-answer pairs to meet users' knowledge acquisition needs.
This makes user behaviors more complex, and presents two challenges for Q&A recommendation.
We introduce Question & Answer Graph Collaborative Filtering (QAGCF), a graph neural network model that creates separate graphs for collaborative and semantic views.
arXiv Detail & Related papers (2024-06-07T10:52:37Z) - Tell me more: Intent Fulfilment Framework for Enhancing User Experiences in Conversational XAI [0.6333053895057925]
This paper explores how different types of explanations collaboratively meet users' XAI needs.
We introduce the Intent Fulfilment Framework (IFF) for creating explanation experiences.
The Explanation Experience Dialogue Model integrates the IFF and "Explanation Followups" to provide users with a conversational interface.
arXiv Detail & Related papers (2024-05-16T21:13:43Z) - Explainability for Transparent Conversational Information-Seeking [13.790574266700006]
This study explores different methods of explaining the responses.
By exploring transparency across explanation type, quality, and presentation mode, this research aims to bridge the gap between system-generated responses and responses verifiable by the user.
arXiv Detail & Related papers (2024-05-06T09:25:14Z) - Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness of saliency-based explanations and their potential for misunderstanding.
arXiv Detail & Related papers (2023-12-10T23:13:23Z) - What if you said that differently?: How Explanation Formats Affect Human Feedback Efficacy and User Perception [53.4840989321394]
We analyze the effect of rationales generated by QA models to support their answers.
We present users with incorrect answers and corresponding rationales in various formats.
We measure the effectiveness of this feedback in patching these rationales through in-context learning.
arXiv Detail & Related papers (2023-11-16T04:26:32Z) - Causal Discovery with Language Models as Imperfect Experts [119.22928856942292]
We consider how expert knowledge can be used to improve the data-driven identification of causal graphs.
We propose strategies for amending such expert knowledge based on consistency properties.
We report a case study, on real data, where a large language model is used as an imperfect expert.
arXiv Detail & Related papers (2023-07-05T16:01:38Z) - Interactive Explanation with Varying Level of Details in an Explainable Scientific Literature Recommender System [0.5937476291232802]
We aim in this paper to adopt a user-centered, interactive explanation model that provides explanations with different levels of detail and empowers users to interact with, control, and personalize the explanations based on their needs and preferences.
We conducted a qualitative user study to investigate the impact of providing interactive explanations with varying levels of detail on the users' perception of the explainable RS.
arXiv Detail & Related papers (2023-06-09T10:48:04Z) - Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI [10.151828072611428]
Counterfactual explanations are increasingly used to address interpretability, recourse, and bias in AI decisions.
We tested the effects of counterfactual and causal explanations on the objective accuracy of users' predictions.
We also found that users understand explanations referring to categorical features more readily than those referring to continuous features.
arXiv Detail & Related papers (2022-04-21T15:01:09Z) - Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z) - Explainable Recommender Systems via Resolving Learning Representations [57.24565012731325]
Explanations could help improve user experience and discover system defects.
We propose a novel explainable recommendation model through improving the transparency of the representation learning process.
arXiv Detail & Related papers (2020-08-21T05:30:48Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.