Principles of Explanation in Human-AI Systems
- URL: http://arxiv.org/abs/2102.04972v1
- Date: Tue, 9 Feb 2021 17:43:45 GMT
- Title: Principles of Explanation in Human-AI Systems
- Authors: Shane T. Mueller, Elizabeth S. Veinott, Robert R. Hoffman, Gary Klein,
Lamia Alam, Tauseef Mamun, and William J. Clancey
- Abstract summary: Explainable Artificial Intelligence (XAI) has re-emerged in response to the development of modern AI and ML systems.
XAI systems are frequently algorithm-focused: they start and end with an algorithm that implements a basic, untested idea about explainability.
We propose an alternative: to start with human-focused principles for the design, testing, and implementation of XAI systems.
- Score: 0.7768952514701895
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Explainable Artificial Intelligence (XAI) has re-emerged in response to the
development of modern AI and ML systems. These systems are complex and
sometimes biased, but they nevertheless make decisions that impact our lives.
XAI systems are frequently algorithm-focused: they start and end with an
algorithm that implements a basic, untested idea about explainability. These
systems are often not tested to determine whether the algorithm helps users
accomplish any goals, and so their explainability remains unproven. We propose
an alternative: to start with human-focused principles for the design, testing,
and implementation of XAI systems, and implement algorithms to serve that
purpose. In this paper, we review some of the basic concepts that have been
used for user-centered XAI systems over the past 40 years of research. Based on
these, we describe the "Self-Explanation Scorecard", which can help developers
understand how they can empower users by enabling self-explanation. Finally, we
present a set of empirically grounded, user-centered design principles that may
guide developers to create successful explainable systems.
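The Self-Explanation Scorecard is, in effect, a rubric that developers apply to their own system. As a minimal sketch of how a team might encode such a rubric, the Python below uses placeholder level names and criteria; these are hypothetical, and the actual Scorecard levels are defined in the paper.

```python
# Hypothetical sketch of a scorecard-style rubric; the level names and
# criteria are placeholders, not the paper's actual Scorecard levels.
from dataclasses import dataclass

@dataclass(frozen=True)
class ScorecardLevel:
    rank: int        # higher rank = deeper support for user self-explanation
    name: str
    criterion: str

RUBRIC = [
    ScorecardLevel(0, "none", "Output is shown with no explanatory support."),
    ScorecardLevel(1, "descriptive", "System describes what it did, but not why."),
    ScorecardLevel(2, "justifying", "System gives reasons the user can inspect."),
    ScorecardLevel(3, "self-explaining", "System helps users build their own explanation."),
]

def assess(ranks_met: set) -> ScorecardLevel:
    """Return the deepest rubric level the system satisfies (level 0 if none)."""
    satisfied = [level for level in RUBRIC if level.rank in ranks_met]
    return max(satisfied, key=lambda level: level.rank, default=RUBRIC[0])

# Example: a system that describes and justifies its output scores "justifying".
print(assess({0, 1, 2}).name)
```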
Related papers
- Artificial General Intelligence (AGI)-Native Wireless Systems: A Journey Beyond 6G [58.440115433585824]
Building future wireless systems that support services like digital twins (DTs) is challenging to achieve through advances to conventional technologies like meta-surfaces.
While artificial intelligence (AI)-native networks promise to overcome some limitations of wireless technologies, developments still rely on AI tools like neural networks.
This paper revisits the concept of AI-native wireless systems, equipping them with the common sense necessary to transform them into artificial general intelligence (AGI)-native systems.
arXiv Detail & Related papers (2024-04-29T04:51:05Z)
- Towards Reconciling Usability and Usefulness of Explainable AI Methodologies [2.715884199292287]
Black-box AI systems can lead to liability and accountability issues when they produce an incorrect decision.
Explainable AI (XAI) seeks to bridge the knowledge gap between developers and end-users.
arXiv Detail & Related papers (2023-01-13T01:08:49Z)
- Seamful XAI: Operationalizing Seamful Design in Explainable AI [59.89011292395202]
Mistakes in AI systems are inevitable, arising from both technical limitations and sociotechnical gaps.
We propose that seamful design can foster AI explainability by revealing sociotechnical and infrastructural mismatches.
We explore this process with 43 AI practitioners and real end-users.
arXiv Detail & Related papers (2022-11-12T21:54:05Z)
- Towards Human Cognition Level-based Experiment Design for Counterfactual Explanations (XAI) [68.8204255655161]
The emphasis of XAI research appears to have shifted toward more pragmatic explanation approaches that support better understanding.
One broad area where cognitive science research may substantially influence XAI advances is the evaluation of user knowledge and feedback.
We propose a framework for generating and evaluating explanations grounded in users' different cognitive levels of understanding.
arXiv Detail & Related papers (2022-10-31T19:20:22Z)
- Transcending XAI Algorithm Boundaries through End-User-Inspired Design [27.864338632191608]
A lack of explainability-focused functional support for end users may hinder the safe and responsible use of AI in high-stakes domains.
Our work shows that grounding the technical problem in end users' use of XAI can inspire new research questions.
Such end-user-inspired research questions have the potential to promote social good by democratizing AI and ensuring the responsible use of AI in critical domains.
arXiv Detail & Related papers (2022-08-18T09:44:51Z)
- Responsible-AI-by-Design: a Pattern Collection for Designing Responsible AI Systems [12.825892132103236]
Many ethical regulations, principles, and guidelines for responsible AI have been issued recently.
This paper identifies one missing element: system-level guidance on how to design the architecture of responsible AI systems.
We present a summary of design patterns that can be embedded into AI systems as product features to contribute to responsible-AI-by-design.
arXiv Detail & Related papers (2022-03-02T07:30:03Z)
- A User-Centred Framework for Explainable Artificial Intelligence in Human-Robot Interaction [70.11080854486953]
We propose a user-centred framework for XAI that focuses on its social-interactive aspect.
The framework aims to provide a structure for interactive XAI solutions intended for non-expert users.
arXiv Detail & Related papers (2021-09-27T09:56:23Z)
- The MineRL BASALT Competition on Learning from Human Feedback [58.17897225617566]
The MineRL BASALT competition aims to spur research on learning from human feedback, an important class of techniques.
We design a suite of four tasks in Minecraft for which we expect it to be hard to write down hardcoded reward functions.
We provide a dataset of human demonstrations on each of the four tasks, as well as an imitation learning baseline; a generic sketch of that baseline idea follows this entry.
arXiv Detail & Related papers (2021-06-07T16:38:43Z)
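As a rough illustration of what an imitation-learning baseline over human demonstrations can look like, here is a minimal behavioral-cloning sketch. It is not the competition's actual codebase: the observation/action encoding, network sizes, and training loop are all illustrative assumptions.

```python
# Generic behavioral-cloning sketch, NOT the MineRL BASALT baseline code.
# Observation/action encoding and network sizes are illustrative assumptions.
import torch
import torch.nn as nn

class Policy(nn.Module):
    """Maps a flat observation vector to logits over a discretized action set."""
    def __init__(self, obs_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, obs: torch.Tensor) -> torch.Tensor:
        return self.net(obs)

def behavioral_cloning(policy: Policy, demo_batches, epochs: int = 10) -> Policy:
    """Fit the policy to (observation, action) pairs from human demonstrations."""
    opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for obs, actions in demo_batches:  # float obs, integer action labels
            opt.zero_grad()
            loss_fn(policy(obs), actions).backward()
            opt.step()
    return policy
```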
- Explainable Artificial Intelligence (XAI) for Increasing User Trust in Deep Reinforcement Learning Driven Autonomous Systems [0.8701566919381223]
We offer an explainable artificial intelligence (XAI) framework that provides a three-fold explanation.
We created a user-interface for our XAI framework and evaluated its efficacy via a human-user experiment.
arXiv Detail & Related papers (2021-07-05T12:18:17Z)
- A general framework for scientifically inspired explanations in AI [76.48625630211943]
We instantiate the concept of the structure of scientific explanation as the theoretical underpinning for a general framework in which explanations for AI systems can be implemented.
This framework aims to provide the tools to build a "mental model" of any AI system, so that interaction with the user can provide information on demand and come closer to the nature of human-made explanations.
arXiv Detail & Related papers (2020-03-02T10:32:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.