Explanations in Everyday Software Systems: Towards a Taxonomy for Explainability Needs
- URL: http://arxiv.org/abs/2404.16644v1
- Date: Thu, 25 Apr 2024 14:34:10 GMT
- Title: Explanations in Everyday Software Systems: Towards a Taxonomy for Explainability Needs
- Authors: Jakob Droste, Hannah Deters, Martin Obaidi, Kurt Schneider
- Abstract summary: We present the results of an online survey with 84 participants.
We identified and classified 315 explainability needs from the survey answers.
We present two major contributions of this work.
- Score: 1.4503034354870523
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Modern software systems are becoming increasingly complex and opaque. The integration of explanations within software has shown the potential to address this opacity and can make the system more understandable to end-users. As a result, explainability has gained much traction as a non-functional requirement of complex systems. Understanding what type of system requires what types of explanations is necessary to facilitate the inclusion of explainability in early software design processes. In order to specify explainability requirements, an explainability taxonomy that applies to a variety of different software types is needed. In this paper, we present the results of an online survey with 84 participants. We asked the participants to state their questions and confusions concerning their three most recently used software systems and elicited both explicit and implicit explainability needs from their statements. These needs were coded by three researchers. In total, we identified and classified 315 explainability needs from the survey answers. Drawing from a large pool of explainability needs and our coding procedure, we present two major contributions of this work: 1) a taxonomy for explainability needs in everyday software systems and 2) an overview of how the need for explanations differs between different types of software systems.
Related papers
- Do Users' Explainability Needs in Software Change with Mood? [2.42509778995617]
We investigate how a user's subjective mood and objective demographic aspects influence explanation needs, measured by the frequency and type of explanations requested.
We conclude that the need for explanations is highly subjective and only partially depends on objective factors.
arXiv Detail & Related papers (2025-02-10T15:12:41Z) - Improving Complex Reasoning over Knowledge Graph with Logic-Aware Curriculum Tuning [89.89857766491475]
We propose a complex reasoning schema over knowledge graphs (KGs) built upon large language models (LLMs).
We augment arbitrary first-order logical queries via binary tree decomposition to stimulate the reasoning capability of LLMs.
Experiments across widely used datasets demonstrate that LACT achieves substantial improvements (an average +5.5% MRR gain) over advanced methods.
arXiv Detail & Related papers (2024-05-02T18:12:08Z) - Explanation Needs in App Reviews: Taxonomy and Automated Detection [2.545133021829296]
We explore the need for explanation expressed by users in app reviews.
We manually coded a set of 1,730 app reviews from 8 apps and derived a taxonomy of Explanation Needs.
Our best classifier identifies Explanation Needs in 486 unseen reviews of 4 different apps with a weighted F-score of 86% (see the evaluation sketch after this list).
arXiv Detail & Related papers (2023-07-10T06:48:01Z) - Explanation Selection Using Unlabeled Data for Chain-of-Thought Prompting [80.9896041501715]
Explanations that have not been "tuned" for a task, such as off-the-shelf explanations written by nonexperts, may lead to mediocre performance.
This paper tackles the problem of how to optimize explanation-infused prompts in a black-box fashion.
arXiv Detail & Related papers (2023-02-09T18:02:34Z) - Towards a Holistic Understanding of Mathematical Questions with Contrastive Pre-training [65.10741459705739]
We propose a novel contrastive pre-training approach for mathematical question representations, namely QuesCo.
We first design two-level question augmentations, including content-level and structure-level, which generate literally diverse question pairs with similar purposes.
Then, to fully exploit the hierarchical information of knowledge concepts, we propose a knowledge hierarchy-aware ranking strategy.
arXiv Detail & Related papers (2023-01-18T14:23:29Z) - Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z) - Brain-inspired Search Engine Assistant based on Knowledge Graph [53.89429854626489]
DeveloperBot is a brain-inspired search engine assistant based on a knowledge graph.
It constructs a multi-layer query graph by splitting a complex multi-constraint query into several ordered constraints.
It then models the constraint reasoning process as a subgraph search process, inspired by the spreading activation model from cognitive science.
arXiv Detail & Related papers (2020-12-25T06:36:11Z) - Explanation Ontology: A Model of Explanations for User-Centered AI [3.1783442097247345]
Explanations have often been added to AI systems in a non-principled, post-hoc manner.
With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration.
We design an explanation ontology to model both the role of explanations, accounting for the system and user attributes in the process, and the range of different literature-derived explanation types.
arXiv Detail & Related papers (2020-10-04T03:53:35Z) - A systematic review and taxonomy of explanations in decision support and recommender systems [13.224071661974596]
We systematically review the literature on explanations in advice-giving systems.
We derive a novel comprehensive taxonomy of aspects to be considered when designing explanation facilities.
arXiv Detail & Related papers (2020-06-15T18:19:20Z) - Directions for Explainable Knowledge-Enabled Systems [3.7250420821969827]
We leverage our survey of explanation literature in Artificial Intelligence and closely related fields to generate a set of explanation types.
We define each type and provide an example question that would motivate the need for this style of explanation.
We believe this set of explanation types will help future system designers in their generation and prioritization of requirements.
arXiv Detail & Related papers (2020-03-17T04:34:29Z) - A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z) - One Explanation Does Not Fit All: The Promise of Interactive
Explanations for Machine Learning Transparency [21.58324172085553]
We discuss the promises of Interactive Machine Learning for improved transparency of black-box systems.
We show how to personalise counterfactual explanations by interactively adjusting their conditional statements.
We argue that adjusting the explanation itself and its content is more important.
arXiv Detail & Related papers (2020-01-27T13:10:12Z)
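The app-review study listed above reports a weighted F-score of 86% for detecting Explanation Needs. As a rough, self-contained illustration of how such an evaluation is typically set up, the sketch below trains a toy TF-IDF plus logistic-regression baseline on a handful of hypothetical review snippets and scores it with scikit-learn's weighted F-score; the example data, features, and model are placeholder assumptions, not the classifier or dataset used in that paper.

```python
# Illustrative only: a simple baseline for flagging "Explanation Need" app reviews,
# evaluated with a weighted F-score. The reviews, labels, and model choice are
# placeholders, not the study's actual data or classifier.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

# Hypothetical labelled reviews: 1 = expresses an explanation need, 0 = does not.
reviews = [
    "Why did the app delete my playlist after the update?",
    "How do I know which permissions this feature actually uses?",
    "Great app, works perfectly on my phone.",
    "Love the new dark mode, thanks!",
    "What does the 'smart sync' setting actually do?",
    "Fast and reliable, five stars.",
]
labels = [1, 1, 0, 0, 1, 0]

X_train, X_test, y_train, y_test = train_test_split(
    reviews, labels, test_size=0.5, random_state=0, stratify=labels
)

# Bag-of-words features feeding a linear classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(X_train, y_train)

# Weighted F-score: per-class F1 averaged by class support, the metric quoted above.
print(f1_score(y_test, clf.predict(X_test), average="weighted"))
```

The weighted average matters here because reviews expressing an explanation need are usually a minority class, so per-class F1 scores are weighted by class support before averaging.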
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.