Sensible AI: Re-imagining Interpretability and Explainability using
Sensemaking Theory
- URL: http://arxiv.org/abs/2205.05057v1
- Date: Tue, 10 May 2022 17:20:44 GMT
- Title: Sensible AI: Re-imagining Interpretability and Explainability using
Sensemaking Theory
- Authors: Harmanpreet Kaur, Eytan Adar, Eric Gilbert, Cliff Lampe
- Abstract summary: We propose an alternate framework for interpretability grounded in Weick's sensemaking theory.
We use an application of sensemaking in organizations as a template for discussing design guidelines for Sensible AI.
- Score: 14.35488479818285
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding how ML models work is a prerequisite for responsibly designing,
deploying, and using ML-based systems. With interpretability approaches, ML can
now offer explanations for its outputs to aid human understanding. Though these
approaches rely on guidelines for how humans explain things to each other, they
ultimately solve for improving the artifact -- an explanation. In this paper,
we propose an alternate framework for interpretability grounded in Weick's
sensemaking theory, which focuses on who the explanation is intended for.
Recent work has advocated for the importance of understanding stakeholders'
needs -- we build on this by providing concrete properties (e.g., identity,
social context, environmental cues, etc.) that shape human understanding. We
use an application of sensemaking in organizations as a template for discussing
design guidelines for Sensible AI, AI that factors in the nuances of human
cognition when trying to explain itself.
Related papers
- An Ontology-Enabled Approach For User-Centered and Knowledge-Enabled Explanations of AI Systems [0.3480973072524161]
Recent research in explainability has focused on explaining the workings of AI models, i.e., model explainability.
This thesis seeks to bridge some gaps between model and user-centered explainability.
arXiv Detail & Related papers (2024-10-23T02:03:49Z)
- Evaluating the Utility of Model Explanations for Model Development [54.23538543168767]
We evaluate whether explanations can improve human decision-making in practical scenarios of machine learning model development.
To our surprise, we did not find evidence of significant improvement on tasks when users were provided with any of the saliency maps.
These findings suggest caution regarding the usefulness and potential for misunderstanding in saliency-based explanations.
arXiv Detail & Related papers (2023-12-10T23:13:23Z)
- Forms of Understanding of XAI-Explanations [2.887772793510463]
This article aims to present a model of forms of understanding in the context of Explainable Artificial Intelligence (XAI).
Two types of understanding are considered as possible outcomes of explanations, namely enabledness and comprehension.
Special challenges of understanding in XAI are discussed.
arXiv Detail & Related papers (2023-11-15T08:06:51Z)
- Interpreting Pretrained Language Models via Concept Bottlenecks [55.47515772358389]
Pretrained language models (PLMs) have made significant strides in various natural language processing tasks.
The lack of interpretability due to their "black-box" nature poses challenges for responsible implementation.
We propose a novel approach to interpreting PLMs by employing high-level, meaningful concepts that are easily understandable for humans.
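(A minimal, illustrative concept-bottleneck sketch appears after this list.)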
arXiv Detail & Related papers (2023-11-08T20:41:18Z)
- "Explanation" is Not a Technical Term: The Problem of Ambiguity in XAI [2.5899040911480173]
We explore the features of explanations and how to use those features in evaluating their utility.
We focus on the requirements for explanations defined by their functional role, the knowledge states of users who are trying to understand them, and the availability of the information needed to generate them.
arXiv Detail & Related papers (2022-06-27T21:42:53Z)
- Rethinking Explainability as a Dialogue: A Practitioner's Perspective [57.87089539718344]
We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
arXiv Detail & Related papers (2022-02-03T22:17:21Z)
- Human Interpretation of Saliency-based Explanation Over Text [65.29015910991261]
We study saliency-based explanations over textual data.
We find that people often misinterpret the explanations.
We propose a method to adjust saliencies based on model estimates of over- and under-perception.
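(A minimal, illustrative sketch of how per-token saliency scores are produced appears after this list.)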
arXiv Detail & Related papers (2022-01-27T15:20:32Z)
- Diagnosing AI Explanation Methods with Folk Concepts of Behavior [70.10183435379162]
We consider "success" to depend not only on what information the explanation contains, but also on what information the human explainee understands from it.
We use folk concepts of behavior as a framework of social attribution by the human explainee.
arXiv Detail & Related papers (2022-01-27T00:19:41Z)
- Explainability Is in the Mind of the Beholder: Establishing the Foundations of Explainable Artificial Intelligence [11.472707084860875]
We define explainability as (logical) reasoning applied to transparent insights (into black boxes) interpreted under certain background knowledge.
We revisit the trade-off between transparency and predictive power and its implications for ante-hoc and post-hoc explainers.
We discuss components of the machine learning workflow that may be in need of interpretability, building on a range of ideas from human-centred explainability.
arXiv Detail & Related papers (2021-12-29T09:21:33Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
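The concept-bottleneck entry above describes routing a pretrained language model's prediction through high-level, human-understandable concepts. The following is a minimal sketch of that general idea, assuming a hypothetical set of concept names, hypothetical dimensions, and a small PyTorch head attached to a pooled PLM embedding; it is not the cited paper's actual architecture.

```python
import torch
import torch.nn as nn

# Hypothetical concept names -- placeholders for illustration only.
CONCEPTS = ["sentiment_positive", "mentions_price", "is_question"]

class ConceptBottleneckHead(nn.Module):
    """Predicts a task label only through human-readable concept scores."""

    def __init__(self, hidden_dim: int = 768, num_labels: int = 2):
        super().__init__()
        # Map the PLM's pooled representation to one score per concept ...
        self.to_concepts = nn.Linear(hidden_dim, len(CONCEPTS))
        # ... and predict the label from the concept scores alone.
        self.to_label = nn.Linear(len(CONCEPTS), num_labels)

    def forward(self, pooled_embedding: torch.Tensor):
        concept_probs = torch.sigmoid(self.to_concepts(pooled_embedding))  # interpretable bottleneck
        label_logits = self.to_label(concept_probs)                        # prediction routed through concepts
        return concept_probs, label_logits

# Usage with a random stand-in for a PLM's pooled sentence embedding.
head = ConceptBottleneckHead()
concept_probs, label_logits = head(torch.randn(1, 768))
for name, p in zip(CONCEPTS, concept_probs[0].tolist()):
    print(f"{name}: {p:.2f}")
print("label logits:", label_logits[0].detach().tolist())
```

The design point this illustrates: because the label is predicted only from the concept scores, a person can inspect those scores to see how they drive the final prediction.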
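The two saliency-related entries above ("Evaluating the Utility of Model Explanations for Model Development" and "Human Interpretation of Saliency-based Explanation Over Text") concern how people read per-token saliency scores. As context, here is a minimal sketch of how such scores are commonly produced via input gradients, assuming a toy vocabulary and an untrained classifier; it illustrates the general gradient-based saliency recipe, not either paper's specific method.

```python
import torch
import torch.nn as nn

# Hypothetical toy vocabulary and untrained model -- for illustration only.
vocab = {"the": 0, "movie": 1, "was": 2, "great": 3, "boring": 4}
embed = nn.Embedding(len(vocab), 8)
classifier = nn.Linear(8, 2)  # two classes, e.g. negative / positive

tokens = ["the", "movie", "was", "great"]
ids = torch.tensor([[vocab[t] for t in tokens]])

emb = embed(ids)                      # (1, seq_len, 8) token embeddings
emb.retain_grad()                     # keep gradients at the embedding layer
logits = classifier(emb.mean(dim=1))  # mean-pool tokens, then classify
pred = int(logits.argmax(dim=-1))
logits[0, pred].backward()            # gradient of the predicted-class score

# Per-token saliency: L2 norm of each token's embedding gradient,
# scaled to [0, 1] so it could be rendered as a heatmap over the text.
saliency = emb.grad.norm(dim=-1).squeeze(0)
saliency = saliency / (saliency.max() + 1e-9)
for tok, score in zip(tokens, saliency.tolist()):
    print(f"{tok:>6}: {score:.2f}")
```

In practice the embeddings would come from a trained model and the normalized scores would be rendered as a heatmap over the text; the cited work studies how such heatmaps are read or misread and how the scores might be adjusted in response.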
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.