Explainers' Mental Representations of Explainees' Needs in Everyday Explanations
- URL: http://arxiv.org/abs/2411.08514v1
- Date: Wed, 13 Nov 2024 10:53:07 GMT
- Title: Explainers' Mental Representations of Explainees' Needs in Everyday Explanations
- Authors: Michael Erol Schaffer, Lutz Terfloth, Carsten Schulte, Heike M. Buhl
- Abstract summary: In explanations, explainers have mental representations of explainees' developing knowledge and shifting interests regarding the explanandum.
XAI should be able to react to explainees' needs in a similar manner.
This study investigated explainers' mental representations in everyday explanations of technological artifacts.
- Abstract: In explanations, explainers hold mental representations of explainees' developing knowledge and shifting interests regarding the explanandum. These mental representations are dynamic and develop over time, enabling explainers to react to explainees' needs by adapting and customizing the explanation. XAI should be able to react to explainees' needs in a similar manner; it therefore requires a component that incorporates aspects of explainers' mental representations of explainees. In this study, we took first steps by investigating explainers' mental representations in everyday explanations of technological artifacts. According to the dual nature theory, technological artifacts call for explanations from two distinct perspectives: observable and measurable features addressing "Architecture", and interpretable aspects addressing "Relevance". We conducted extended semi-structured pre-, post-, and video-recall interviews with explainers (N=9) in the context of an explanation, and analyzed the transcribed interviews using qualitative content analysis. The explainers' answers regarding the explainees' knowledge of and interest in the technological artifact show that explainers' initially vague assumptions develop into strong beliefs over the course of an explanation. The knowledge attributed to explainees at the beginning centers on Architecture and develops toward knowledge of both Architecture and Relevance. In contrast, explainers initially assumed higher interest in Relevance, shifting toward interest in both Architecture and Relevance as the explanation progressed. Moreover, explainers often ended the explanation despite perceiving that explainees still had gaps in their knowledge. These findings are translated into practical implications for user models in adaptive explainable systems.
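The abstract's closing implication, a user-model component for adaptive explainable systems, can be illustrated with a small sketch. The following Python snippet is a hypothetical illustration, not the authors' model: it assumes scalar per-dimension estimates of the explainee's assumed knowledge and interest along the dual-nature dimensions ("Architecture", "Relevance"), an invented blending update for incoming cues, and a confidence value that firms up from vague early assumptions to strong beliefs, mirroring the dynamics the study reports. All names and update rules here are assumptions.

```python
# Hypothetical sketch of a user-model component for an adaptive explainable
# system. Not taken from the paper: dimensions follow the dual nature theory
# ("Architecture", "Relevance"); the update and focus rules are illustrative.
from dataclasses import dataclass, field

DIMENSIONS = ("Architecture", "Relevance")


@dataclass
class ExplaineeModel:
    # Assumed knowledge and interest per dimension, each in [0, 1].
    knowledge: dict = field(default_factory=lambda: {d: 0.0 for d in DIMENSIONS})
    interest: dict = field(default_factory=lambda: {d: 0.5 for d in DIMENSIONS})
    confidence: float = 0.1  # vague assumptions at the start of the explanation

    def observe(self, dimension: str, knowledge_cue: float, interest_cue: float) -> None:
        """Blend new cues (e.g. explainee questions) into the current assumptions."""
        w = self.confidence
        self.knowledge[dimension] = w * self.knowledge[dimension] + (1 - w) * knowledge_cue
        self.interest[dimension] = w * self.interest[dimension] + (1 - w) * interest_cue
        self.confidence = min(1.0, self.confidence + 0.1)  # beliefs firm up over time

    def next_focus(self) -> str:
        """Pick the dimension with the largest gap between assumed interest and knowledge."""
        return max(DIMENSIONS, key=lambda d: self.interest[d] - self.knowledge[d])


if __name__ == "__main__":
    model = ExplaineeModel()
    # A "what is it for?" question suggests low knowledge but high interest in Relevance.
    model.observe("Relevance", knowledge_cue=0.2, interest_cue=0.8)
    print(model.next_focus())  # -> "Relevance"
```

Such a component could let an explanation generator switch between Architecture- and Relevance-oriented content as its assumptions about the explainee sharpen, which is the kind of adaptation the study motivates.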
Related papers
- Forms of Understanding of XAI-Explanations [2.887772793510463]
This article aims to present a model of forms of understanding in the context of Explainable Artificial Intelligence (XAI).
Two types of understanding are considered as possible outcomes of explanations, namely enabledness and comprehension.
Special challenges of understanding in XAI are discussed.
arXiv Detail & Related papers (2023-11-15T08:06:51Z)
- Adding Why to What? Analyses of an Everyday Explanation [0.0]
We investigated 20 game explanations using the theory as an analytical framework.
We found that explainers focused on the physical aspects of the game first (Architecture) and only later on aspects of Relevance.
Shifting between addressing the two sides was justified by explanation goals, emerging misunderstandings, and the knowledge needs of the explainee.
arXiv Detail & Related papers (2023-08-08T11:17:22Z)
- NELLIE: A Neuro-Symbolic Inference Engine for Grounded, Compositional, and Explainable Reasoning [59.16962123636579]
This paper proposes a new take on Prolog-based inference engines.
We replace handcrafted rules with a combination of neural language modeling, guided generation, and semi dense retrieval.
Our implementation, NELLIE, is the first system to demonstrate fully interpretable, end-to-end grounded QA.
arXiv Detail & Related papers (2022-09-16T00:54:44Z)
- Scientific Explanation and Natural Language: A Unified Epistemological-Linguistic Perspective for Explainable AI [2.7920304852537536]
This paper focuses on the scientific domain, aiming to bridge the gap between theory and practice on the notion of a scientific explanation.
Through a mixture of quantitative and qualitative methodologies, the presented study derives several main conclusions.
arXiv Detail & Related papers (2022-05-03T22:31:42Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence [11.472707084860875]
We define explainability as (logical) reasoning applied to transparent insights (into black boxes) interpreted under certain background knowledge.
We revisit the trade-off between transparency and predictive power and its implications for ante-hoc and post-hoc explainers.
We discuss components of the machine learning workflow that may be in need of interpretability, building on a range of ideas from human-centred explainability.
arXiv Detail & Related papers (2021-12-29T09:21:33Z)
- Towards Relatable Explainable AI with the Perceptual Process [5.581885362337179]
We argue that explanations must be more relatable to other concepts, hypotheticals, and associations.
Inspired by cognitive psychology, we propose the XAI Perceptual Processing Framework and RexNet model for relatable explainable AI.
arXiv Detail & Related papers (2021-12-28T05:48:53Z)
- The Struggles of Feature-Based Explanations: Shapley Values vs. Minimal Sufficient Subsets [61.66584140190247]
We show that feature-based explanations pose problems even for explaining trivial models.
We show that two popular classes of explainers, Shapley explainers and minimal sufficient subsets explainers, target fundamentally different types of ground-truth explanations.
arXiv Detail & Related papers (2020-09-23T09:45:23Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent work toward attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight into the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)