Adding Why to What? Analyses of an Everyday Explanation
- URL: http://arxiv.org/abs/2308.04187v1
- Date: Tue, 8 Aug 2023 11:17:22 GMT
- Title: Adding Why to What? Analyses of an Everyday Explanation
- Authors: Lutz Terfloth, Michael Schaffer, Heike M. Buhl, Carsten Schulte
- Abstract summary: We investigated 20 game explanations using the theory as an analytical framework.
We found that explainers first focused on the physical aspects of the game (Architecture) and only later on aspects of Relevance.
Shifting between addressing the two sides was justified by explanation goals, emerging misunderstandings, and the knowledge needs of the explainee.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In XAI it is important to consider that, in contrast to explanations for
professional audiences, one cannot assume common expertise when explaining for
laypeople. But such explanations between humans vary greatly, making it
difficult to research commonalities across explanations. We used the dual
nature theory, a techno-philosophical approach, to cope with these challenges.
According to it, one can explain, for example, an XAI's decision by addressing
its dual nature: by focusing on the Architecture (e.g., the logic of its
algorithms) or the Relevance (e.g., the severity of a decision, the
implications of a recommendation). We investigated 20 game explanations using
the theory as an analytical framework. We elaborate how we used the theory to
quickly structure and compare explanations of technological artifacts. We
supplemented results from analyzing the explanation contents with results from
a video recall to explore how explainers justified their explanation. We found
that explainers first focused on the physical aspects of the game
(Architecture) and only later on aspects of Relevance. Reasoning in the
video recalls indicated that the explainers regarded the initial focus on the
Architecture as important for structuring the explanation: explaining the basic
components before turning to more complex, intangible aspects. Shifting
between addressing the two sides was justified by explanation goals, emerging
misunderstandings, and the knowledge needs of the explainee. We discovered
several commonalities that inspire future research questions which, if further
generalizable, provide first ideas for the construction of synthetic
explanations.
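To illustrate the analytical framing, the following is a minimal, hypothetical sketch (not taken from the paper) of how explanation segments could be coded along the two sides of the dual nature theory and compared by when each side is first addressed; the class names, timings, and annotations are purely illustrative assumptions.
```python
# Hypothetical coding sketch for the dual nature framework (Architecture vs. Relevance).
# All names and values are illustrative; the paper's actual coding scheme may differ.
from dataclasses import dataclass
from enum import Enum


class Side(Enum):
    ARCHITECTURE = "architecture"  # physical/structural aspects, e.g. board and pieces
    RELEVANCE = "relevance"        # purpose-related aspects, e.g. implications of a move


@dataclass
class Segment:
    start_s: float  # onset of the coded segment in seconds
    side: Side      # which side of the dual nature the segment addresses
    note: str       # free-text annotation by the coder


def first_mention(segments: list[Segment], side: Side) -> float | None:
    """Return the onset of the earliest segment coded with the given side, if any."""
    times = [s.start_s for s in segments if s.side == side]
    return min(times) if times else None


# Toy coding of one explanation: Architecture is addressed first, Relevance later,
# mirroring the pattern reported in the abstract.
coding = [
    Segment(3.0, Side.ARCHITECTURE, "names the board and the pieces"),
    Segment(41.0, Side.ARCHITECTURE, "explains how a piece moves"),
    Segment(95.0, Side.RELEVANCE, "why capturing this piece matters"),
]

print(first_mention(coding, Side.ARCHITECTURE))  # 3.0
print(first_mention(coding, Side.RELEVANCE))     # 95.0
```
Such a representation would make it easy to compare explanations by, for example, the onset of the first Relevance segment or the number of shifts between the two sides.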
Related papers
- Understanding XAI Through the Philosopher's Lens: A Historical Perspective [5.839350214184222]
We show that a gradual progression has independently occurred in both domains from logical-deductive to statistical models of explanation.
Similar concepts have independently emerged in both fields, such as the relation between explanation and understanding and the importance of pragmatic factors.
arXiv Detail & Related papers (2024-07-26T14:44:49Z)
- Forms of Understanding of XAI-Explanations [2.887772793510463]
This article aims to present a model of forms of understanding in the context of Explainable Artificial Intelligence (XAI).
Two types of understanding are considered as possible outcomes of explanations, namely enabledness and comprehension.
Special challenges of understanding in XAI are discussed.
arXiv Detail & Related papers (2023-11-15T08:06:51Z)
- The role of causality in explainable artificial intelligence [1.049712834719005]
Causality and eXplainable Artificial Intelligence (XAI) have developed as separate fields in computer science.
We investigate the literature to try to understand how and to what extent causality and XAI are intertwined.
arXiv Detail & Related papers (2023-09-18T16:05:07Z)
- A Theoretical Framework for AI Models Explainability with Application in Biomedicine [3.5742391373143474]
We propose a novel definition of explanation that is a synthesis of what can be found in the literature.
We fit explanations into the properties of faithfulness (i.e., the explanation being a true description of the model's inner workings and decision-making process) and plausibility (i.e., how much the explanation looks convincing to the user).
arXiv Detail & Related papers (2022-12-29T20:05:26Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
Based on the multi-hop chain of reasoning, the explanation form includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- Towards Relatable Explainable AI with the Perceptual Process [5.581885362337179]
We argue that explanations must be more relatable to other concepts, hypotheticals, and associations.
Inspired by cognitive psychology, we propose the XAI Perceptual Processing Framework and RexNet model for relatable explainable AI.
arXiv Detail & Related papers (2021-12-28T05:48:53Z)
- The Who in XAI: How AI Background Shapes Perceptions of AI Explanations [61.49776160925216]
We conduct a mixed-methods study of how two different groups, people with and without an AI background, perceive different types of AI explanations.
We find that (1) both groups showed unwarranted faith in numbers for different reasons and (2) each group found value in different explanations beyond their intended design.
arXiv Detail & Related papers (2021-07-28T17:32:04Z)
- Generating Commonsense Explanation by Extracting Bridge Concepts from Reasoning Paths [128.13034600968257]
We propose a method that first extracts the underlying concepts which serve as bridges in the reasoning chain.
To facilitate the reasoning process, we utilize external commonsense knowledge to build the connection between a statement and the bridge concepts.
We design a bridge concept extraction model that first scores the triples, routes the paths in the subgraph, and further selects bridge concepts with weak supervision.
arXiv Detail & Related papers (2020-09-24T15:27:20Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)