Forms of Understanding of XAI-Explanations
- URL: http://arxiv.org/abs/2311.08760v1
- Date: Wed, 15 Nov 2023 08:06:51 GMT
- Title: Forms of Understanding of XAI-Explanations
- Authors: Hendrik Buschmeier, Heike M. Buhl, Friederike Kern, Angela Grimminger,
Helen Beierling, Josephine Fisher, André Groß, Ilona Horwath, Nils
Klowait, Stefan Lazarov, Michael Lenke, Vivien Lohmer, Katharina Rohlfing,
Ingrid Scharlau, Amit Singh, Lutz Terfloth, Anna-Lisa Vollmer, Yu Wang,
Annedore Wilmes, Britta Wrede
- Abstract summary: This article aims to present a model of forms of understanding in the context of Explainable Artificial Intelligence (XAI).
Two types of understanding are considered as possible outcomes of explanations, namely enabledness and comprehension.
Special challenges of understanding in XAI are discussed.
- Score: 2.887772793510463
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Explainability has become an important topic in computer science and
artificial intelligence, leading to a subfield called Explainable Artificial
Intelligence (XAI). The goal of providing or seeking explanations is to achieve
(better) 'understanding' on the part of the explainee. However, what it means
to 'understand' is still not clearly defined, and the concept itself is rarely
the subject of scientific investigation. This conceptual article aims to
present a model of forms of understanding in the context of XAI and beyond.
From an interdisciplinary perspective bringing together computer science,
linguistics, sociology, and psychology, a definition of understanding and its
forms, assessment, and dynamics during the process of giving everyday
explanations is explored. Two types of understanding are considered as
possible outcomes of explanations, namely enabledness, 'knowing how' to do or
decide something, and comprehension, 'knowing that' -- both in different
degrees (from shallow to deep). Explanations regularly start with shallow
understanding in a specific domain and can lead to deep comprehension and
enabledness of the explanandum, which we see as a prerequisite for human users
to gain agency. In this process, increases in comprehension and enabledness
are highly interdependent. Against the background of this systematization,
special challenges of understanding in XAI are discussed.
Related papers
- Explainers' Mental Representations of Explainees' Needs in Everyday Explanations [0.0]
In explanations, explainers have mental representations of explainees' developing knowledge and shifting interests regarding the explanandum.
XAI should be able to react to explainees' needs in a similar manner.
This study investigated explainers' mental representations in everyday explanations of technological artifacts.
arXiv Detail & Related papers (2024-11-13T10:53:07Z)
- Adding Why to What? Analyses of an Everyday Explanation [0.0]
We investigated 20 game explanations using the theory as an analytical framework.
We found that explainers focused on the physical aspects of the game (Architecture) first and only later on its Relevance.
Shifting between addressing the two sides was justified by explanation goals, emerging misunderstandings, and the knowledge needs of the explainee.
arXiv Detail & Related papers (2023-08-08T11:17:22Z)
- Explaining Explainability: Towards Deeper Actionable Insights into Deep Learning through Second-order Explainability [70.60433013657693]
Second-order explainable AI (SOXAI) was recently proposed to extend explainable AI (XAI) from the instance level to the dataset level.
We demonstrate for the first time, via example classification and segmentation cases, that eliminating irrelevant concepts from the training set based on actionable insights from SOXAI can enhance a model's performance.
arXiv Detail & Related papers (2023-06-14T23:24:01Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have turned to a more pragmatic explanation approach for better understanding.
An extensive area where cognitive science research may substantially influence XAI advancements is evaluating user knowledge and feedback.
We propose a framework to experiment with generating and evaluating the explanations on the grounds of different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- A Means-End Account of Explainable Artificial Intelligence [0.0]
XAI seeks explanations for opaque machine learning methods.
Authors disagree on what should be explained (topic), to whom something should be explained (stakeholder), how something should be explained (instrument), and why something should be explained (goal).
arXiv Detail & Related papers (2022-08-09T09:57:42Z)
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence [11.472707084860875]
We define explainability as (logical) reasoning applied to transparent insights (into black boxes) interpreted under certain background knowledge.
We revisit the trade-off between transparency and predictive power and its implications for ante-hoc and post-hoc explainers.
We discuss components of the machine learning workflow that may be in need of interpretability, building on a range of ideas from human-centred explainability.
arXiv Detail & Related papers (2021-12-29T09:21:33Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- Understanding understanding: a renormalization group inspired model of (artificial) intelligence [0.0]
This paper is about the meaning of understanding in scientific and artificially intelligent systems.
We give a mathematical definition of understanding in which, contrary to common wisdom, the probability space is defined on the input set.
We show how scientific understanding fits into this framework and demonstrate the difference between a scientific task and pattern recognition.
arXiv Detail & Related papers (2020-10-26T11:11:46Z)
- Explainability in Deep Reinforcement Learning [68.8204255655161]
We review recent works towards attaining Explainable Reinforcement Learning (XRL).
In critical situations where it is essential to justify and explain the agent's behaviour, better explainability and interpretability of RL models could help gain scientific insight on the inner workings of what is still considered a black box.
arXiv Detail & Related papers (2020-08-15T10:11:42Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.