Rethinking Explainability as a Dialogue: A Practitioner's Perspective
- URL: http://arxiv.org/abs/2202.01875v1
- Date: Thu, 3 Feb 2022 22:17:21 GMT
- Title: Rethinking Explainability as a Dialogue: A Practitioner's Perspective
- Authors: Himabindu Lakkaraju, Dylan Slack, Yuxin Chen, Chenhao Tan, Sameer
Singh
- Abstract summary: We ask doctors, healthcare professionals, and policymakers about their needs and desires for explanations.
Our study indicates that decision-makers would strongly prefer interactive explanations in the form of natural language dialogues.
Considering these needs, we outline a set of five principles researchers should follow when designing interactive explanations.
- Score: 57.87089539718344
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As practitioners increasingly deploy machine learning models in critical
domains such as health care, finance, and policy, it becomes vital to ensure
that domain experts function effectively alongside these models. Explainability
is one way to bridge the gap between human decision-makers and machine learning
models. However, most of the existing work on explainability focuses on
one-off, static explanations like feature importances or rule lists. These
sorts of explanations may not be sufficient for many use cases that require
dynamic, continuous discovery from stakeholders. In the literature, few works
ask decision-makers about the utility of existing explanations and other
desiderata they would like to see in an explanation going forward. In this
work, we address this gap and carry out a study where we interview doctors,
healthcare professionals, and policymakers about their needs and desires for
explanations. Our study indicates that decision-makers would strongly prefer
interactive explanations in the form of natural language dialogues. Domain
experts wish to treat machine learning models as "another colleague", i.e., one
who can be held accountable by asking why they made a particular decision
through expressive and accessible natural language interactions. Considering
these needs, we outline a set of five principles researchers should follow when
designing interactive explanations as a starting place for future work.
Further, we show why natural language dialogues satisfy these principles and
are a desirable way to build interactive explanations. Next, we provide a
design of a dialogue system for explainability and discuss the risks,
trade-offs, and research opportunities of building these systems. Overall, we
hope our work serves as a starting place for researchers and engineers to
design interactive explainability systems.
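
A purely illustrative sketch of what such a dialogue loop could look like is given below. It routes two kinds of natural-language questions, "why ...?" and "what if ...?", to simple explanation backends over a scikit-learn model. The intent keywords, the helper names explain_why, explain_what_if, and respond, and the dataset choice are assumptions for illustration, not the system design proposed in the paper.

```python
# Minimal, illustrative dialogue loop for model explainability. All routing logic
# and helper names here are hypothetical, not the paper's proposed design.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

def explain_why(instance):
    """Answer 'why this prediction?' using the model's global feature importances."""
    ranked = sorted(zip(data.feature_names, model.feature_importances_),
                    key=lambda p: p[1], reverse=True)[:3]
    pred = data.target_names[model.predict([instance])[0]]
    reasons = ", ".join(f"{name} (importance {imp:.2f})" for name, imp in ranked)
    return f"The model predicts '{pred}'; the most influential features overall are {reasons}."

def explain_what_if(instance, feature, new_value):
    """Answer 'what if feature X were different?' by re-querying the model."""
    modified = instance.copy()
    modified[list(data.feature_names).index(feature)] = new_value
    flipped = model.predict([instance])[0] != model.predict([modified])[0]
    return (f"Changing {feature} to {new_value} would "
            + ("flip" if flipped else "not change") + " the prediction.")

def respond(question, instance):
    """Route a natural-language question to an explanation backend (keyword-based stub)."""
    q = question.lower()
    if q.startswith("why"):
        return explain_why(instance)
    if "what if" in q:
        # A real system would parse the feature and value out of the question.
        return explain_what_if(instance, "mean radius", 25.0)
    return "I can answer 'why ...?' and 'what if ...?' questions about this prediction."

example = data.data[0].copy()
print(respond("Why did the model make this prediction?", example))
print(respond("What if the mean radius were much larger?", example))
```

A real system would need genuine natural-language understanding, per-instance (local) attributions rather than global importances, and multi-turn state; that is the kind of interactivity the abstract argues decision-makers want.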
Related papers
- From Feature Importance to Natural Language Explanations Using LLMs with RAG [4.204990010424084] (2024-07-30)
We introduce traceable question-answering, leveraging an external knowledge repository to inform the responses of Large Language Models (LLMs).
This knowledge repository comprises contextual details regarding the model's output, including high-level features, feature importance, and alternative probabilities.
We integrate four key characteristics - social, causal, selective, and contrastive - drawn from social science research on human explanations into a single-shot prompt, guiding the response generation process (a rough sketch of this kind of prompt construction follows this list).
- DiPlomat: A Dialogue Dataset for Situated Pragmatic Reasoning [89.92601337474954] (2023-06-15)
Pragmatic reasoning plays a pivotal role in deciphering implicit meanings that frequently arise in real-life conversations.
We introduce DiPlomat, a novel challenge aimed at benchmarking machines' capabilities in pragmatic reasoning and situated conversational understanding.
- Towards Large-Scale Interpretable Knowledge Graph Reasoning for Dialogue Systems [109.16553492049441] (2022-03-20)
We propose a novel method to incorporate the knowledge reasoning capability into dialogue systems in a more scalable and generalizable manner.
To the best of our knowledge, this is the first work to have transformer models generate responses by reasoning over differentiable knowledge graphs.
- Advances in Multi-turn Dialogue Comprehension: A Survey [51.215629336320305] (2021-03-04)
We review previous methods from the perspective of dialogue modeling.
We discuss three typical patterns of dialogue modeling that are widely used in dialogue comprehension tasks.
- Counterfactual Explanations for Machine Learning: A Review [5.908471365011942] (2020-10-20)
We review and categorize research on counterfactual explanations in machine learning.
Modern approaches to counterfactual explainability in machine learning draw connections to the established legal doctrine in many countries.
- Machine Common Sense [77.34726150561087] (2020-06-15)
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI).
This article focuses on modeling commonsense reasoning in the domain of interpersonal interactions.
- The Grammar of Interactive Explanatory Model Analysis [7.812073412066698] (2020-05-01)
We show how different Explanatory Model Analysis (EMA) methods complement each other.
We formalize the grammar of IEMA to describe potential human-model dialogues.
IEMA is implemented in a widely used human-centered open-source software framework.
- A general framework for scientifically inspired explanations in AI [76.48625630211943] (2020-03-02)
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental-model" of any AI system so that the interaction with the user can provide information on demand and be closer to the nature of human-made explanations.
- One Explanation Does Not Fit All: The Promise of Interactive Explanations for Machine Learning Transparency [21.58324172085553] (2020-01-27)
We discuss the promise of Interactive Machine Learning for improved transparency of black-box systems.
We show how to personalise counterfactual explanations by interactively adjusting their conditional statements (a minimal sketch of this idea follows the list).
We argue that adjusting the explanation itself and its content is more important.
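
Referring back to the last entry above, the snippet below is a minimal, hedged sketch of personalising a counterfactual explanation by letting the user constrain which features may change and over what ranges (the "conditional statements"). The function name counterfactual, the constraint format, and the naive one-feature grid search are illustrative assumptions, not the cited paper's method.

```python
# Naive illustration: search for a prediction-flipping change within user-chosen
# feature ranges. Not the cited paper's algorithm.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(data.data, data.target)

def counterfactual(instance, user_constraints, steps=25):
    """Look for a single-feature change, within user-chosen ranges, that flips the prediction."""
    original = model.predict([instance])[0]
    for feature, (low, high) in user_constraints.items():
        idx = list(data.feature_names).index(feature)
        for value in np.linspace(low, high, steps):
            candidate = instance.copy()
            candidate[idx] = value
            if model.predict([candidate])[0] != original:
                return f"Setting {feature} to {value:.2f} would flip the prediction."
    return "No single-feature counterfactual was found within the allowed ranges."

instance = data.data[0]
# The user interactively widens or narrows these ranges to personalise the explanation.
print(counterfactual(instance, {"mean radius": (5.0, 30.0)}))
print(counterfactual(instance, {"mean texture": (9.0, 40.0)}))
```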
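
The first entry in the list (feature importance to natural-language explanations with LLMs and RAG) describes a knowledge repository of high-level features, feature importances, and alternative probabilities fed into a single-shot prompt with social, causal, selective, and contrastive instructions. The sketch below shows one hedged way such a prompt could be assembled; the repository layout, the loan-scoring example, and the function build_explanation_prompt are assumptions for illustration, not the cited pipeline.

```python
# Hedged sketch of turning feature-importance output into a single-shot prompt that
# asks an LLM for a social, causal, selective, and contrastive explanation. The
# repository structure, prompt wording, and function name are illustrative assumptions.

def build_explanation_prompt(knowledge: dict) -> str:
    """Assemble a single-shot prompt from a small 'knowledge repository' for one prediction."""
    importances = ", ".join(
        f"{feat}: {weight:+.2f}" for feat, weight in knowledge["feature_importance"].items()
    )
    alternatives = ", ".join(
        f"{label} ({prob:.0%})" for label, prob in knowledge["alternative_probabilities"].items()
    )
    return (
        f"The model predicted '{knowledge['prediction']}'.\n"
        f"Feature importances: {importances}.\n"
        f"Alternative outcomes and probabilities: {alternatives}.\n"
        "Explain this prediction to the applicant. Be social (address the reader directly), "
        "causal (say which factors drove the outcome), selective (mention only the two or "
        "three most relevant factors), and contrastive (say why the alternative outcome "
        "was not predicted)."
    )

# Hypothetical repository entry for one loan-scoring prediction.
knowledge = {
    "prediction": "loan denied",
    "feature_importance": {"income": -0.41, "credit_history_length": -0.22, "age": 0.03},
    "alternative_probabilities": {"loan approved": 0.18, "loan denied": 0.82},
}
print(build_explanation_prompt(knowledge))  # This string would be sent to an LLM of choice.
```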
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the accuracy of this information and accepts no responsibility for any consequences of its use.