Towards an Explanation Space to Align Humans and Explainable-AI Teamwork
- URL: http://arxiv.org/abs/2106.01503v1
- Date: Wed, 2 Jun 2021 23:17:29 GMT
- Title: Towards an Explanation Space to Align Humans and Explainable-AI Teamwork
- Authors: Garrick Cabour, Andrés Morales, Élise Ledoux, Samuel Bassetto
- Abstract summary: This paper proposes a formative architecture that defines the explanation space from a user-inspired perspective.
The architecture comprises five intertwined components to outline explanation requirements for a task.
We present the Abstracted Explanation Space, a modeling tool that aggregates the architecture's components to support designers.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Providing meaningful and actionable explanations to end-users is a
fundamental prerequisite for implementing explainable intelligent systems in the
real world. Explainability is a situated interaction between a user and the AI
system rather than a set of static design principles. The content of explanations
is context-dependent and must be defined by evidence about the user and their
context. This paper seeks to operationalize this concept by proposing a formative
architecture that defines the explanation space from a user-inspired perspective.
The architecture comprises five intertwined components that outline the
explanation requirements for a task: (1) the end-users' mental models, (2) the
end-users' cognitive process, (3) the user interface, (4) the human-explainer
agent, and (5) the agent process. We first define each component of the
architecture. Then we present the Abstracted Explanation Space, a modeling tool
that aggregates the architecture's components to support designers in
systematically aligning explanations with end-users' work practices, needs, and
goals. It guides the specification of what needs to be explained (content - the
end-users' mental model), why the explanation is necessary (context - the
end-users' cognitive process), how to explain it (format - the human-explainer
agent and user interface), and when the explanations should be given. We then
exemplify the tool's use in an ongoing case study in the aircraft maintenance
domain. Finally, we discuss possible contributions of the tool, known limitations
and areas for improvement, and future work.
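To make the five-component architecture and its content/context/format/timing mapping more concrete, the following is a minimal sketch of how a designer might record explanation requirements for a task. It is an illustrative assumption in Python: the class and field names (ExplanationRequirement, AbstractedExplanationSpace, what/why/how/when) and the aircraft-maintenance example values are ours, not an API or notation defined by the paper.

```python
# Illustrative sketch only: the paper describes the Abstracted Explanation Space
# conceptually; these class and field names are assumptions, not the authors' API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class ExplanationRequirement:
    """One explanation requirement for a task, following the five components."""
    what: str   # content - drawn from the end-users' mental model
    why: str    # context - the end-users' cognitive process it supports
    how: str    # format  - human-explainer agent and user-interface modality
    when: str   # timing  - the moment in the task when the explanation is given


@dataclass
class AbstractedExplanationSpace:
    """Aggregates requirements so designers can align explanations with work practices."""
    task: str
    requirements: List[ExplanationRequirement] = field(default_factory=list)


# Example: a hypothetical aircraft-maintenance inspection task.
space = AbstractedExplanationSpace(task="visual inspection of fuselage panels")
space.requirements.append(
    ExplanationRequirement(
        what="why the agent flagged panel 12 as a probable crack",
        why="the technician must decide whether to escalate or re-inspect",
        how="annotated image overlay plus a short natural-language rationale",
        when="immediately after the flag, before the technician signs off",
    )
)
```

Modeling each requirement as a (what, why, how, when) record mirrors the paper's guidance that content, context, format, and timing should be specified together rather than as isolated design choices.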
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models, i.e., model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution about the usefulness of saliency-based explanations and their potential to be misunderstood.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Exploring Effectiveness of Explanations for Appropriate Trust: Lessons from Cognitive Psychology [3.1945067016153423]
This work draws inspiration from findings in cognitive psychology to understand how effective explanations can be designed.
We identify four components to which explanation designers can pay special attention: perception, semantics, intent, and user & context.
We propose that a significant challenge for effective AI explanations is the additional step between explanation generation (by algorithms that do not produce interpretable explanations) and explanation communication.
arXiv Detail & Related papers (2022-10-05T13:40:01Z)
- Sensible AI: Re-imagining Interpretability and Explainability using Sensemaking Theory [14.35488479818285]
We propose an alternate framework for interpretability grounded in Weick's sensemaking theory.
We use an application of sensemaking in organizations as a template for discussing design guidelines for Sensible AI.
arXiv Detail & Related papers (2022-05-10T17:20:44Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Explanation as a process: user-centric construction of multi-level and multi-modal explanations [0.34410212782758043]
We present a process-based approach that combines multi-level and multi-modal explanations.
We use Inductive Logic Programming, an interpretable machine learning approach, to learn a comprehensible model.
arXiv Detail & Related papers (2021-10-07T19:26:21Z)
- Explanation Ontology: A Model of Explanations for User-Centered AI [3.1783442097247345]
Explanations have often been added to AI systems in a non-principled, post-hoc manner.
With greater adoption of these systems and emphasis on user-centric explainability, there is a need for a structured representation that treats explainability as a primary consideration.
We design an explanation ontology to model both the role of explanations, accounting for the system and user attributes in the process, and the range of different literature-derived explanation types.
arXiv Detail & Related papers (2020-10-04T03:53:35Z)
- A Diagnostic Study of Explainability Techniques for Text Classification [52.879658637466605]
We develop a list of diagnostic properties for evaluating existing explainability techniques.
We compare the saliency scores assigned by the explainability techniques with human annotations of salient input regions, relating a model's performance to the agreement between its rationales and human ones (a minimal sketch of such an agreement measure follows this list).
arXiv Detail & Related papers (2020-09-25T12:01:53Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
- A Neural Topical Expansion Framework for Unstructured Persona-oriented Dialogue Generation [52.743311026230714]
Persona Exploration and Exploitation (PEE) is able to extend the predefined user persona description with semantically correlated content.
PEE consists of two main modules: persona exploration and persona exploitation.
Our approach outperforms state-of-the-art baselines in terms of both automatic and human evaluations.
arXiv Detail & Related papers (2020-02-06T08:24:33Z)
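The saliency-vs-rationale comparison in the diagnostic study above can be illustrated with a small agreement measure. This is a minimal sketch under assumptions: the metric (average precision over tokens, with human-annotated salient tokens as positives) and the function name rationale_agreement are illustrative choices, not that paper's exact protocol or code.

```python
# Minimal sketch: agreement between a technique's token saliency scores and a
# binary human rationale mask. Average precision is an assumed metric choice,
# not the diagnostic study's exact protocol.
from typing import Sequence
from sklearn.metrics import average_precision_score


def rationale_agreement(saliency: Sequence[float], human_mask: Sequence[int]) -> float:
    """Rank tokens by saliency and score how well the ranking recovers the
    tokens that human annotators marked as salient."""
    return float(average_precision_score(human_mask, saliency))


# Example: six tokens, annotators marked tokens 1 and 4 as salient.
scores = [0.05, 0.80, 0.10, 0.02, 0.70, 0.15]
mask = [0, 1, 0, 0, 1, 0]
print(f"agreement (average precision): {rationale_agreement(scores, mask):.2f}")
```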