Do Users' Explainability Needs in Software Change with Mood?
- URL: http://arxiv.org/abs/2502.06546v1
- Date: Mon, 10 Feb 2025 15:12:41 GMT
- Title: Do Users' Explainability Needs in Software Change with Mood?
- Authors: Martin Obaidi, Jakob Droste, Hannah Deters, Marc Herrmann, Jil Klünder, Kurt Schneider
- Abstract summary: We investigate the influence of a user's subjective mood and objective demographic aspects on explanation needs in terms of the frequency and type of explanation. We conclude that the need for explanations is highly subjective and only partially depends on objective factors.
- Score: 2.42509778995617
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Context and Motivation: The increasing complexity of modern software systems often challenges users' ability to interact with them. Taking established quality attributes such as usability and transparency into account can mitigate this problem, but often does not suffice to solve it completely. Recently, explainability has emerged as an essential non-functional requirement to help overcome these difficulties. Question/problem: User preferences regarding the integration of explanations in software differ. Neither too few nor too many explanations are helpful. In this paper, we investigate the influence of a user's subjective mood and objective demographic aspects on explanation needs, measured by the frequency and type of explanation. Principal ideas/results: Our results reveal a limited relationship between these factors and explanation needs. Two significant correlations were identified: emotional reactivity was positively correlated with the need for UI explanations, while a negative correlation was found between age and user interface needs. Contribution: As we find only very few significant aspects that influence the need for explanations, we conclude that this need is highly subjective and only partially depends on objective factors. These findings emphasize the necessity for software companies to actively gather user-specific explainability requirements in order to address diverse and context-dependent user demands. Nevertheless, future research should explore additional personal traits and cross-cultural factors to inform the development of adaptive, user-centered explanation systems.
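The abstract reports two significant correlations: emotional reactivity with the need for UI explanations, and age (negatively) with user interface needs. The snippet below is a minimal sketch of how such a correlation test could look in Python; the toy data, the column names, and the choice of Spearman's rank correlation are assumptions for illustration only, not the authors' actual analysis pipeline.

```python
# Hypothetical sketch: testing whether mood/demographic variables correlate with
# self-reported explanation needs, in the spirit of the analysis described above.
# The toy data, column names, and use of Spearman correlation are assumptions.
import pandas as pd
from scipy.stats import spearmanr

# Toy survey data: one row per participant.
survey = pd.DataFrame({
    "emotional_reactivity": [2.1, 3.4, 4.0, 1.8, 3.9, 2.7],  # questionnaire score
    "age":                  [22, 35, 41, 29, 53, 61],         # years
    "ui_explanation_need":  [1, 3, 4, 1, 3, 2],               # self-rated need (1-5)
})

for predictor in ["emotional_reactivity", "age"]:
    rho, p = spearmanr(survey[predictor], survey["ui_explanation_need"])
    print(f"{predictor} vs. UI explanation need: rho={rho:.2f}, p={p:.3f}")
```

In a real study, such tests would be run on the full participant sample, with corrections for multiple comparisons before reporting significance.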
Related papers
- Interactive Reasoning: Visualizing and Controlling Chain-of-Thought Reasoning in Large Language Models [54.85405423240165]
We introduce Interactive Reasoning, an interaction design that visualizes chain-of-thought outputs as a hierarchy of topics. We implement interactive reasoning in Hippo, a prototype for AI-assisted decision making in the face of uncertain trade-offs.
arXiv Detail & Related papers (2025-06-30T10:00:43Z) - See What I Mean? CUE: A Cognitive Model of Understanding Explanations [12.230507748153459]
We propose a model for Cognitive Understanding of Explanations, linking explanation properties to cognitive sub-processes. In a study, we found comparable task performance but lower confidence/effort for visually impaired users. We contribute: (1) a formalized cognitive model for explanation understanding, (2) an integrated definition of human-centered explanation properties, and (3) empirical evidence motivating accessible, user-tailored XAI.
arXiv Detail & Related papers (2025-05-09T22:05:20Z) - Predicting Satisfaction of Counterfactual Explanations from Human Ratings of Explanatory Qualities [0.873811641236639]
We analyze a dataset of counterfactual explanations that were evaluated by 206 human participants.
We find that feasibility and trust stand out as the strongest predictors of user satisfaction.
Other metrics explain 58% of the variance, highlighting the importance of additional explanatory qualities.
arXiv Detail & Related papers (2025-04-07T11:09:25Z) - How Does Users' App Knowledge Influence the Preferred Level of Detail and Format of Software Explanations? [2.423517761302909]
This study investigates factors influencing users' preferred level of detail and the form of an explanation in software. Results indicate that users prefer moderately detailed explanations in short text formats. Our results show that explanation preferences are weakly influenced by app-specific knowledge but shaped by demographic and psychological factors.
arXiv Detail & Related papers (2025-02-10T15:18:04Z) - Explanations in Everyday Software Systems: Towards a Taxonomy for Explainability Needs [1.4503034354870523]
We present the results of an online survey with 84 participants.
We identified and classified 315 explainability needs from the survey answers.
We present two major contributions of this work.
arXiv Detail & Related papers (2024-04-25T14:34:10Z) - How are Prompts Different in Terms of Sensitivity? [50.67313477651395]
We present a comprehensive prompt analysis based on the sensitivity of a function.
We use gradient-based saliency scores to empirically demonstrate how different prompts affect the relevance of input tokens to the output.
We introduce sensitivity-aware decoding, which incorporates sensitivity estimation as a penalty term in standard greedy decoding (a minimal sketch of this idea follows the related-papers list below).
arXiv Detail & Related papers (2023-11-13T10:52:01Z) - Reason to explain: Interactive contrastive explanations (REASONX) [5.156484100374058]
We present REASONX, an explanation tool based on Constraint Logic Programming (CLP).
REASONX provides interactive contrastive explanations that can be augmented by background knowledge.
It computes factual and contrastive decision rules, as well as closest contrastive examples.
arXiv Detail & Related papers (2023-05-29T15:13:46Z) - Understanding How People Rate Their Conversations [73.17730062864314]
We conduct a study to better understand how people rate their interactions with conversational agents.
We focus on agreeableness and extraversion as variables that may explain variation in ratings.
arXiv Detail & Related papers (2022-06-01T00:45:32Z) - Textual Explanations and Critiques in Recommendation Systems [8.406549970145846]
This dissertation focuses on two fundamental challenges in addressing the need for explanations.
The first involves explanation generation in a scalable and data-driven manner.
The second challenge consists in making explanations actionable, and we refer to it as critiquing.
arXiv Detail & Related papers (2022-05-15T11:59:23Z) - Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z) - Explainability in Music Recommender Systems [69.0506502017444]
We discuss how explainability can be addressed in the context of Music Recommender Systems (MRSs).
MRSs are often quite complex and optimized for recommendation accuracy.
We show how explainability components can be integrated within a MRS and in what form explanations can be provided.
arXiv Detail & Related papers (2022-01-25T18:32:11Z) - Competency Problems: On Finding and Removing Artifacts in Language Data [50.09608320112584]
We argue that for complex language understanding tasks, all simple feature correlations are spurious.
We theoretically analyze the difficulty of creating data for competency problems when human bias is taken into account.
arXiv Detail & Related papers (2021-04-17T21:34:10Z) - One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency [21.58324172085553]
We discuss the promises of Interactive Machine Learning for improved transparency of black-box systems.
We show how to personalise counterfactual explanations by interactively adjusting their conditional statements.
We argue that adjusting the explanation itself and its content is more important.
arXiv Detail & Related papers (2020-01-27T13:10:12Z) - SQuINTing at VQA Models: Introspecting VQA Models with Sub-Questions [66.86887670416193]
We show that state-of-the-art VQA models have comparable performance in answering perception and reasoning questions, but suffer from consistency problems.
To address this shortcoming, we propose an approach called Sub-Question-aware Network Tuning (SQuINT).
We show that SQuINT improves model consistency by 5%, marginally improves performance on the Reasoning questions in VQA, and yields better attention maps.
arXiv Detail & Related papers (2020-01-20T01:02:36Z)
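The entry on prompt sensitivity above mentions sensitivity-aware decoding, which adds a sensitivity estimate as a penalty term to standard greedy decoding. Below is a minimal, self-contained sketch of that idea; the toy vocabulary, the stand-in logits, and the placeholder sensitivity_penalty function are hypothetical and do not reproduce that paper's actual estimator.

```python
# Hypothetical sketch of sensitivity-aware greedy decoding: at each step the
# next-token score is the model log-probability minus a weighted sensitivity
# penalty. The toy logits and the stand-in penalty function are assumptions
# for illustration; the referenced paper defines its own sensitivity estimator.
import numpy as np

VOCAB = ["yes", "no", "maybe", "<eos>"]
LAMBDA = 0.5  # weight of the sensitivity penalty

def toy_logits(prefix):
    """Stand-in for a language model: unnormalized next-token scores."""
    rng = np.random.default_rng(len(prefix))
    return rng.normal(size=len(VOCAB))

def sensitivity_penalty(prefix):
    """Stand-in estimate of how sensitive the output is to each candidate token."""
    rng = np.random.default_rng(abs(hash(tuple(prefix))) % (2**32))
    return rng.uniform(0.0, 1.0, size=len(VOCAB))

def sensitivity_aware_greedy(max_steps=5):
    prefix = []
    for _ in range(max_steps):
        logits = toy_logits(prefix)
        log_probs = logits - np.log(np.sum(np.exp(logits)))  # softmax in log space
        scores = log_probs - LAMBDA * sensitivity_penalty(prefix)
        token = VOCAB[int(np.argmax(scores))]
        prefix.append(token)
        if token == "<eos>":
            break
    return prefix

print(sensitivity_aware_greedy())
```

Plugging in a real language model and the paper's sensitivity estimator in place of the two stand-in functions would recover the intended decoding scheme.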
This list is automatically generated from the titles and abstracts of the papers on this site.